
Basic rules for compiling a semantic core: how to compose it correctly, plus applications and services for automatic grouping of search queries

What is a semantic core? The semantic core of a site (hereinafter SC) is the set of keywords and phrases for which the resource is promoted in search engines and which indicate that the site belongs to a certain topic.

For successful promotion in search engines, keywords must be correctly grouped and distributed across the pages of the site, and they must appear in a certain form in the meta tags (title, description, keywords) as well as in the H1-H6 headings. At the same time, keyword spam must be avoided so that the site does not "fly off" under Yandex's Baden-Baden filter.

In this article we will try to look at the issue not only from a technical point of view, but also to look at the problem through the eyes of business owners and marketers.

How can the SC be collected?

  • Manually: possible for small sites (up to 1,000 keywords).
  • Automatically: programs do not always determine the context of a query correctly, so problems may arise when distributing keywords across pages.
  • Semi-automatically: phrases and frequencies are collected automatically, then distributed and refined manually.

In this article we will consider the semi-automatic approach to creating a semantic core, as it is the most effective.

In addition, there are two typical cases when compiling an SC:

  • for a site with a ready-made structure;
  • for a new site.

The second option is preferable, since it allows you to create an ideal site structure for search engines.

What does the process of compiling an SC consist of?

Work on the formation of the semantic core is divided into the following stages:

  1. Identification of directions in which the site will be promoted.
  2. Collecting keywords, analyzing similar queries and search suggestions.
  3. Frequency parsing, filtering out “empty” requests.
  4. Clustering (grouping) of requests.
  5. Distribution of requests across site pages (creation of an ideal site structure).
  6. Recommendations for use.

The better you build the site's core (and quality here means the breadth and depth of the semantics), the more powerful and reliable a flow of search traffic you can direct to the site, and the more customers you will attract.

How to create a semantic core of a website

So, let's look at each point in more detail with various examples.

At the first step, it is important to determine which products and services present on the site will be promoted in the search results of Yandex and Google.

Example No. 1. Let's say the site offers two services: computer repair at home and training in working with Word/Excel at home. In this case it was decided that the training was no longer in demand, so there was no point in promoting it, and therefore no point in collecting semantics for it. Another important point: you need to collect not only queries containing "computer repair at home", but also "laptop repair", "PC repair" and others.

Example No. 2. The company is engaged in low-rise construction, but it builds only wooden houses. Accordingly, queries and semantics for areas such as "construction of aerated concrete houses" or "construction of brick houses" need not be collected.

Collection of semantics

We will look at the two main sources of keywords: Yandex and Google. We'll show how to collect semantics for free and briefly review paid services that can speed up and automate the process.

In Yandex, key phrases are collected from the Yandex.Wordstat service; in Google, from the query statistics in Google AdWords. If available, you can use data from Yandex.Webmaster and Yandex.Metrica, Google Webmaster Tools and Google Analytics as additional sources of semantics.

Collecting keywords from Yandex.Wordstat

Collecting queries from Wordstat can be considered free: to view the service's data you only need a Yandex account. So, go to wordstat.yandex.ru and enter a keyword. Let's walk through an example of collecting semantics for a car rental company's website.

What do we see in this screenshot?

  1. Left column. Here is the basic query and its variations with "tails". Opposite each query is a number showing how many times, in total, users entered that query in all its forms.
  2. Right column. Queries similar to the main one, with their overall frequencies. Here we see that a person who wants to rent a car may, besides "car rental", use synonymous queries such as "rent a car", "car hire", "auto rental" and others. This is very important data to watch so that you do not miss a single query.
  3. Regionality and history. By choosing one of the options you can check the distribution of queries by region, the number of queries in a particular region or city, and how demand changes over time or with the seasons.
  4. Devices from which the query was made. By switching tabs you can find out which devices people most often search from.

Check different versions of key phrases and record the data in an Excel table or Google Sheets. For convenience, install the Yandex Wordstat Helper plugin: after installation, plus icons appear next to the search phrases, and clicking one copies the word along with its frequency, so you don't need to select and paste anything manually.
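If you prefer to script this bookkeeping rather than copy data by hand, here is a minimal sketch; the phrases, frequencies and file name are illustrative placeholders, not real Wordstat data:

```python
# A minimal sketch: store phrases copied out of Wordstat as CSV rows.
# Phrases, frequencies and the file name are illustrative placeholders.
import csv

collected = [
    ("car rental", 119000),          # base query (numbers are made up)
    ("car rental moscow", 27000),    # query with a "tail"
    ("rent a car cheap", 9100),
]

with open("semantic_core.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["phrase", "base_frequency"])
    writer.writerows(collected)
```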

Collecting keywords from Google AdWords

Unfortunately, Google has no open source of search queries with frequency indicators, so you have to approach the task indirectly, and for that you need a working Google AdWords account.

Register an account in Google AdWords and top up the balance with the minimum possible amount, 300 rubles (an account with no budget activity only shows approximate data). After that, go to "Tools" - "Keyword Planner".

A new page will open; in the "Search for new keywords by phrase, site or category" tab, enter the keyword.

Scroll down, click “Get options” and see something like this.

  1. Top query and the average number of searches per month. If the account is unpaid, you will see approximate data, that is, the average number of searches. Once there are funds in the account, exact figures are shown, along with the dynamics of the entered keyword's frequency.
  2. Keywords by relevance. This is the same as similar queries in Yandex Wordstat.
  3. Downloading data. This tool is convenient because the data obtained in it can be downloaded.

We looked at working with two main sources of statistics on search queries. Now let's move on to automating this process, because collecting semantics manually takes too much time.

Programs and services for collecting keywords

Key Collector

The program is installed on your computer. Connect to it the work accounts from which statistics will be collected, then create a new project and a folder for keywords.

Select "Batch collection of words from the left column of Yandex.Wordstat" and enter the queries for which to collect data.

An example is shown in the screenshot; in practice, for more complete semantics, you would additionally collect all query variants with car brands and classes, for example "bmw for rent", "rent a toyota with an option to buy", "rent an SUV" and so on. A small sketch of generating such combinations follows.
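If the list of templates and brands is long, the combinations can be generated rather than typed; a minimal sketch with illustrative terms:

```python
# A sketch of expanding seed queries: combine query templates with
# brands and vehicle classes. All terms are illustrative.
from itertools import product

templates = ["{} for rent", "rent {}", "{} rental with driver"]
brands = ["bmw", "toyota", "suv"]

seed_queries = [t.format(b) for t, b in product(templates, brands)]
for q in seed_queries:
    print(q)  # paste these into the batch-collection window
```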

Slovoeb

A free analogue of the previous program. This is both a plus (you don't need to pay) and a minus (the program's functionality is significantly reduced).

To collect keywords, the steps are the same.

Rush-analytics.ru

An online service whose main advantage is that there is nothing to download or install: register and use it. The service is paid, but on registration you get 200 coins on your account, which is enough to collect a small semantic core (up to 5,000 queries) and parse frequencies.

The downside is that semantics are collected only from Wordstat.

Checking the frequency of keywords and queries

Again we notice a decrease in the number of queries. Let's go further and try another word form of the same query.

We see that the singular form is searched by far fewer users, which means the original query is the higher priority for us.

Such manipulations must be carried out with every word and phrase. Queries whose final frequency is zero (measured with the quote and exclamation mark operators) are eliminated: a "0" means nobody types the query in exactly that form and it only occurs as part of other queries. The whole point of compiling a semantic core is to select the queries people actually use to search. All remaining queries are then placed in an Excel table, grouped by meaning and distributed across the pages of the site.
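The zero-frequency cleanup itself is easy to script once the numbers are collected. A minimal sketch, assuming a CSV that also has an exact_frequency column gathered with the "!word" operator (column and file names are illustrative):

```python
# A minimal sketch: drop "empty" queries whose exact frequency is zero.
# Assumes a CSV with columns: phrase, base_frequency, exact_frequency.
import csv

with open("semantic_core.csv", encoding="utf-8") as src, \
     open("semantic_core_clean.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        # keep only queries that people actually type in this exact form
        if int(row["exact_frequency"]) > 0:
            writer.writerow(row)
```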

Doing this manually is simply not feasible, which is why there are many services on the Internet, paid and free, that do it automatically. Here are a few:

  • megaindex.com;
  • rush-analytics.ru;
  • tools.pixelplus.ru;
  • key-collector.ru.

Removing non-target requests

After sifting the keywords by frequency, remove the unnecessary ones. Which search queries can be removed from the list? (A minimal filtering sketch follows the list.)

  • requests with the names of competitors' companies (can be left in contextual advertising);
  • requests for goods or services that you do not sell;
  • requests that indicate a district or region in which you do not work.
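Such filtering is easy to automate with a stop-word list; a sketch with illustrative stop words and queries:

```python
# A sketch of removing non-target queries with a stop-word list.
# Stop words and queries are illustrative.
stop_words = {"competitorbrand", "training", "yekaterinburg", "free"}

queries = [
    "computer repair at home",
    "competitorbrand computer repair",
    "computer repair yekaterinburg",
]

targeted = [q for q in queries
            if not any(s in q.lower() for s in stop_words)]
print(targeted)  # -> ['computer repair at home']
```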

Clustering (grouping) of requests for site pages

The essence of this stage is to combine queries that are similar in meaning into clusters, and then determine which pages they will be promoted on. How can you tell which queries should be promoted on one page and which on another?

1. By request type.

We already know that all queries in search engines are divided into several types, depending on the purpose of the search:

  • commercial (buy, sell, order) - promoted to landing pages, pages of product categories, product cards, pages with services, price lists;
  • informational (where, how, why, what for) - articles, forum topics, Q&A sections;
  • navigation (telephone, address, brand name) - page with contacts.

If you are in doubt about a query's type, enter it in the search bar and analyze the results: for commercial queries there will be more pages offering services, for informational queries there will be more articles.

There are also geo-dependent and geo-independent queries. Most commercial queries are geo-dependent, as people are more likely to trust companies located in their own city.
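Putting the type markers above into code gives a rough first-pass classifier; a sketch with illustrative marker lists that should be tuned per niche:

```python
# A rough first-pass classifier built from the marker words above.
# Marker lists are illustrative and should be tuned per niche.
COMMERCIAL = {"buy", "sell", "order", "price"}
INFORMATIONAL = {"where", "how", "why", "what"}

def query_type(query: str) -> str:
    words = set(query.lower().split())
    if words & COMMERCIAL:
        return "commercial"      # landing page, category, product card
    if words & INFORMATIONAL:
        return "informational"   # blog article, FAQ
    return "unclear"             # check the search results manually

print(query_type("buy iphone x"))               # commercial
print(query_type("how to choose a smartphone")) # informational
```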

2. Request logic.

  • "buy iphone x" and "iphone x price" should be promoted on one page, since in both cases the user is looking for the same product and more detailed information about it;
  • "buy iphone" and "buy iphone x" should be promoted on different pages: the first is a general query (suited to the product category containing iPhones), while in the second the user is looking for a specific product, so the query should be promoted on the product card;
  • "how to choose a good smartphone" is more logically promoted with a blog article under the corresponding title.

3. By search results.

Look at the search results for the queries themselves. If you check which pages on different sites rank for the queries "construction of houses made of timber" and "construction of houses made of bricks", then in 99% of cases these are different pages.
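This check can also be mechanized: queries whose top-10 results share enough URLs belong on one page. A sketch of such SERP-overlap clustering, with stubbed SERP data (in practice it comes from a rank-checking service):

```python
# A sketch of SERP-overlap clustering: queries whose top-10 results share
# at least `threshold` URLs go onto one page. SERP data is stubbed here.
def cluster(serps: dict, threshold: int = 4) -> list:
    clusters = []
    for query, urls in serps.items():
        for group in clusters:
            # compare with the first query of the existing group
            if len(urls & serps[group[0]]) >= threshold:
                group.append(query)
                break
        else:
            clusters.append([query])
    return clusters

serps = {
    "timber house construction": {"a.com", "b.com", "c.com", "d.com", "e.com"},
    "build a timber house":      {"a.com", "b.com", "c.com", "d.com", "f.com"},
    "brick house construction":  {"x.com", "y.com", "z.com", "q.com", "w.com"},
}
print(cluster(serps))
# [['timber house construction', 'build a timber house'],
#  ['brick house construction']]
```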

4. Automatic grouping using software and manual refinement.

The 1st and 2nd methods are excellent for compiling the semantic core of small sites, where at most 2-3 thousand keywords are collected. For a large core (from 10,000 queries to infinity), the help of machines is needed. Here are several programs and services that perform clustering:

  • KeyAssistant - assistant.contentmonster.ru;
  • semparser.ru;
  • just-magic.org;
  • rush-analytics.ru;
  • tools.pixelplus.ru;
  • key-collector.ru.

After automatic clustering is complete, the program's results must be checked manually and any errors corrected.

Example: the program may put the following queries into one cluster: "vacation in Sochi 2018 hotel" and "vacation in Sochi 2018 hotel breeze". In the first case the user is comparing various hotel options; in the second, looking for a specific hotel.

To eliminate such inaccuracies, check everything manually and edit wherever errors are found.

What to do next after compiling the semantic core?

Based on the collected semantic core, we then:

  1. create the ideal structure (hierarchy) of the site from the point of view of search engines, or, in agreement with the customer, change the structure of the old site;
  2. write technical assignments for copywriters, taking into account the cluster of queries that will be promoted on each page, or update the old articles and texts on the site.

It looks something like this.

For each query cluster formed, we create a page on the site and determine its place in the site structure. The most popular queries are promoted on the pages highest in the resource's hierarchy; less popular ones sit below them.

And for each of these pages we have already collected the queries we will promote on them. Next, we write technical specifications for copywriters to create the text for these pages.

Technical specifications for a copywriter

As with the site structure, we will describe this stage in general terms. So, technical specifications for the text:

  • number of characters without spaces;
  • page title;
  • subheadings (if any);
  • a list of words (based on our core) that should be in the text;
  • uniqueness requirement (always require 100% uniqueness);
  • desired text style;
  • other requirements and wishes in the text.

Remember: don't try to promote an enormous number of queries on one page. Limit yourself to 5-10 plus their tails, otherwise you will be penalized for over-optimization and knocked out of the running for top positions for a long time.

Conclusion

Compiling a site's semantic core is painstaking work that deserves especially close attention, because all further promotion of the site rests on it. Follow the simple instructions given in this article and take action.

  1. Choose the direction of promotion.
  2. Collect all possible queries from Yandex and Google (use special programs and services).
  3. Check the frequency of queries and get rid of dummies (those with a frequency of 0).
  4. Remove non-target requests - services and goods that you do not sell, requests mentioning competitors.
  5. Form query clusters and distribute them across pages.
  6. Create an ideal site structure and draw up technical specifications for the content of the site.

In our article, we explained what a semantic core is and gave general recommendations on how to compose it.

It's time to look at this process in detail, creating a semantic core for your site step by step. Stock up on pencils and paper and, most importantly, time. And join...

We create a semantic core for the site

As an example, let's take the site http://promo.economsklad.ru/.

The company's field of activity: warehouse services in Moscow.

The site was developed by our service's specialists, and its semantic core was built up in six steps:

Step 1. Compile a primary list of keywords.

After surveying several potential clients, studying three sites close to our topic and using our own heads, we compiled a simple list of keywords that, in our opinion, reflect the content of our site: warehouse complex, warehouse rental, storage services, logistics, storage space rental, warm and cold warehouses.

Task 1: Browse competitors' websites, consult with colleagues, hold a brainstorming session and write down all the words that you think describe YOUR site.

Step 2. Expanding the list.

Let's use the service http://wordstat.yandex.ru/. In the search bar, enter each of the words from the primary list one by one:


We copy the refined queries from the left column into an Excel table, look through the associated queries in the right column, select among them those relevant to our site, and enter them into the table as well.

After analyzing the phrase "warehouse rental", we received a list of 474 refined and 2 associated queries.

Having carried out a similar analysis of the remaining words from the primary list, we collected a total of 4,698 refined and associated queries entered by real users over the past month.

Task 2: Collect a complete list of queries on your site by running each of the words in your primary list through Yandex.Wordstat query statistics.

Step 3. Cleaning up

First, we remove all phrases with fewer than 50 impressions: "how much does it cost to rent a warehouse" (45 impressions), "warehouse rental 200 m" (35 impressions), and so on.

Second, we remove phrases not related to our site, for example "warehouse rental in St. Petersburg" or "warehouse rental in Yekaterinburg", since our warehouse is located in Moscow.

We also remove the phrase "warehouse lease agreement download": such a sample document may well be present on our site, but there is no point in actively promoting this query, since a person looking for a contract template is unlikely to become a client. Most likely, they have already found a warehouse or own one themselves.

Once all the unnecessary queries are removed, the list shrinks significantly. In our "warehouse rental" case, of the 474 refined queries only 46 relevant to the site remained.

And after cleaning the full list of refined queries (4,698 phrases), we obtained the site's semantic core, consisting of 174 key queries.

Task 3: Clean up the previously created list of refined queries, excluding low-frequency keywords with fewer than 50 impressions and phrases that are not related to your site.
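The same two cleaning rules are easy to express in code; a sketch using pandas (assumed installed), with illustrative data and stop words:

```python
# A sketch of the Step 3 cleanup: drop phrases with fewer than 50
# impressions and phrases mentioning regions we do not serve.
import pandas as pd

df = pd.DataFrame({
    "phrase": ["warehouse rental", "warehouse rental 200 m",
               "warehouse rental in yekaterinburg"],
    "impressions": [12000, 35, 900],
})

banned = ["petersburg", "yekaterinburg", "download"]
mask = (df["impressions"] >= 50) & ~df["phrase"].str.contains("|".join(banned))
print(df[mask])  # only "warehouse rental" survives
```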

Step 4. Revision

Since each page can use 3-5 different keywords, we won't need all 174 queries.

Considering that the site itself is small (four pages at most), from the full list we select the 20 queries that, in our opinion, most accurately describe the company's services.

Here they are: warehouse rental in Moscow, warehouse space rental, warehouse and logistics, customs services, safekeeping warehouse, warehouse logistics, logistics services, office and warehouse rental, safekeeping of goods, and so on.

These keywords include low-frequency, mid-frequency, and high-frequency queries.

Note that this list differs significantly from the primary one we produced off the tops of our heads, and it is definitely more accurate and effective.

Task 4: Reduce the list of remaining words to 50, leaving only those that, in your experience and opinion, are most optimal for your site. Don't forget that the final list should contain queries of varying frequencies.

Conclusion

Your semantic core is ready, now is the time to put it into practice:

  • review the texts on your site, maybe they should be rewritten.
  • write several articles on your topic using selected key phrases, post the articles on the site, and after search engines index them, register in article directories. Read “One unusual approach to article promotion.”
  • pay attention to search advertising. Now that you have a semantic core, the effect of advertising will be much higher.

Step 2. We continue to expand the semantics using the "Similar phrases" report. By implementing the keywords from this report, you maximize your coverage of the topic. The "Strength of connection" parameter tells you whether your competitors from the top 20 use this phrase in their semantic cores: the higher the number, the more sites use both the researched phrase and the suggested synonym.

The effect is most pronounced for products that people search for in different ways, for example, back pillows.

Step 3. The final step in expanding the semantics is collecting search engine suggestions. The advantage is that the services gather this information in real time and immediately pull out all the suggestions that Yandex/Google can offer, while the search engines themselves show only up to 12 suggestions per phrase.

To unload all the suggestions, go to the "Search suggestions" report and export the list.

Pay attention to the cloud of popular phrases: these are the words people most often search for together with the phrase "orthopedic mattresses". If specific sizes, brands or product types appear among the phrases, it is worth adding them to the online store's assortment.

Also, for informational keywords such as "the best mattresses for spinal problems", you can prepare an article for your blog, which will become an additional source of traffic and sales.

Step 4. We bring all the reports into a single table and clear out duplicates using the Remove Duplicates add-on.
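Merging and deduplicating can also be scripted in a few lines; a sketch with illustrative report data:

```python
# A sketch of merging several reports and removing duplicate phrases,
# the same job the Remove Duplicates add-on does in a spreadsheet.
reports = [
    ["back pillow", "orthopedic pillow", "pillow for back"],
    ["orthopedic pillow", "pillow for back pain"],
]

seen = set()
merged = []
for report in reports:
    for phrase in report:
        key = phrase.strip().lower()
        if key not in seen:   # keep the first occurrence, skip repeats
            seen.add(key)
            merged.append(phrase)
print(merged)
```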

Time spent: up to 5 minutes, depending on the number of key queries.

By using services, you already gain time over those who collect, clean and cluster semantics manually. To feel the difference, try all the steps described by pulling key phrases and search suggestions out of Wordstat by hand, and then repeat the instructions.

Clustering

Automatic clustering also saves up to 8 hours. Clustering is the breakdown of all key phrases into semantic groups, from which the site structure, filters, product categories and so on are created.

To do this, upload your file with all the key phrases to the clustering tool, and within 10-30 minutes, depending on the number of keywords, you will receive a report.

If the quality of the grouping is unsatisfactory, click the "settings" icon without leaving the project and set the connection strength higher or lower. Changing settings within one project is free, and regrouping the semantics takes no more than a minute.

Stage 3. Pro-level automation

If you are already collecting semantics through the web interface, it's time to introduce you to the API: a set of functions that gives users access to a service's data or components, in our case Serpstat's. Advantages of working via the API:


Now let’s repeat all the steps to collect semantics from the second stage using the API.

Step 1. Copy this table with the script to your Google Drive.

Step 2. Copy the token from your Serpstat personal profile and paste it into the appropriate field in the table. Also select the desired search engine database, fill in the parameters for selecting key phrases, and add the list of key phrases for which you want to download reports.

Run the script, parsing the reports on phrase selection, search suggestions and similar/slang phrases one by one (see screenshot):

The program will ask you to log in with your Gmail account and request permission to run. Confirm running the script, bypassing the security warning.

Step 3. After 30-60 seconds, the script will finish its work and collect the keywords matching the specified parameters.

You can also set up a filter in this script for negative keywords and any other criteria.

In total, we saved several more hours of an SEO specialist's work by consolidating all the reports into one and collecting the data for each keyword in a single interface.

Scripts for working with the API can be written by your SEO specialists, or you can find official ones in the public domain.
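For those writing their own, here is a hedged sketch of a keyword request to the Serpstat API. The JSON-RPC request shape and the method name are assumptions based on Serpstat's public documentation; verify them against the current API reference before relying on them:

```python
# A hedged sketch of requesting keyword reports from the Serpstat API.
# The request shape and method name are assumptions; check the docs.
import requests  # assumes the requests package is installed

TOKEN = "your-serpstat-token"  # taken from your Serpstat personal profile
API_URL = f"https://api.serpstat.com/v4/?token={TOKEN}"

payload = {
    "id": 1,
    "method": "SerpstatKeywordProcedure.getKeywords",  # assumed method name
    "params": {
        "keyword": "car rental",  # seed phrase
        "se": "g_us",             # search engine database, e.g. Google US
        "size": 100,              # number of rows to return
    },
}

response = requests.post(API_URL, json=payload, timeout=30)
response.raise_for_status()
for item in response.json().get("result", {}).get("data", []):
    print(item)
```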

Conclusions

The following actions speed up the collection of the semantic core as much as possible without loss of quality:

  1. Clustering using special services.
  2. Parsing keywords, tips and slang expressions using SEO platform APIs.

If you know the pain of search engines' "dislike" for your online store's pages, read this article. I will talk about the path to increasing a site's visibility, or more precisely about its first stage: collecting keywords and compiling a semantic core, the algorithm for creating it and the tools used.


Why create a semantic core?

To increase the visibility of the site's pages, so that Yandex and Google search robots start finding them in response to user queries. Of course, collecting keywords (compiling the semantics) is only the first step towards this goal. Next, a rough "skeleton" is sketched out to distribute the keywords among different landing pages, and then articles and meta tags are written and implemented.

By the way, on the Internet you can find many definitions of the semantic core.

1. “The semantic core is an ordered set of search words, their morphological forms and phrases that most accurately characterize the type of activity, product or service offered by the site.” Wikipedia.

To collect competitor semantics in Serpstat, enter one of the key queries, select a region, click "Search" and go to the "Key phrase analysis" category. Then select "SEO Analysis" and click "Phrase selection". Export the results:

2.3. We use Key Collector/Slovoeb to create a semantic core

If you need to create a semantic core for a large online store, you cannot do without Key Collector. But if you are a beginner, it is more convenient to use the free tool Slovoeb (don't let the crude name scare you). Download the program, and in the Yandex.Direct settings specify the login and password of your Yandex mail:

Create a new project. In the "Data" tab, select the "Add phrases" function, select your region and enter the queries you collected earlier.

Advice: create a separate project for each new domain, and a separate group for each category/landing page.

Now collect semantics from Yandex.Wordstat: open the "Data collection" tab and choose "Batch collection of words from the left column of Yandex.Wordstat". In the window that opens, tick the checkbox "Do not add phrases if they are already in any other groups", enter a few of the most popular (high-frequency) user phrases and click "Start collecting":

By the way, for large projects Key Collector can also collect statistics from competitor analysis services such as SEMrush, SpyWords, Serpstat (ex-Prodvigator) and other additional sources.

These days, factors such as content and structure play an extremely important role in search engine promotion. But how do you decide what to write texts about and which sections and pages to create on the site? On top of that, you need to find out exactly what the target visitor of your resource is interested in. To answer all these questions, you need to collect a semantic core.

Semantic core— a list of words or phrases that fully reflect the theme of your site.

In this article I will tell you how to select it, clean it and break it down into a structure. The result will be a complete structure with queries clustered across pages.

Here is an example of a query core broken down into a structure:


By clustering I mean breaking the search queries down into separate pages. This approach is relevant for promotion in both Yandex and Google. In this article I will describe a completely free way of creating a semantic core, though I will also show options involving various paid services.

After reading the article, you will learn

  • Choose the right queries for your topic
  • Collect the most complete core of phrases
  • Clean out uninteresting queries
  • Group the queries and create a structure

Having collected the semantic core you can

  • Create a meaningful structure on the site
  • Create a multi-level menu
  • Fill pages with texts and write meta descriptions and titles on them
  • Track your website's positions in search engines for these queries

Collection and clustering of the semantic core

Correct compilation for Google and Yandex begins with identifying the main key phrases of your topic. As an example, I will demonstrate the process on a fictitious online clothing store. There are three ways to collect a semantic core:

  1. Manual. Using the Yandex Wordstat service, you enter your keywords and select the phrases you need by hand. The method is quite fast if you need to collect keys for a single page, but it has two disadvantages.
    • Accuracy suffers: with this method you can always miss some important words.
    • You will not be able to assemble the semantic core of a large online store this way; the Yandex Wordstat Assistant plugin simplifies the work, but it does not solve the problem.
  2. Semi-automatic. Here I mean using a program to collect the core and then breaking it down into sections, subsections, pages and so on by hand. In my opinion, this method of compiling and clustering the semantic core is the most effective, because it gives:
    • maximum coverage of the topic;
    • a high-quality breakdown.
  3. Automatic. There are now several services offering fully automatic core collection or clustering of your queries. I do not recommend the fully automatic option, because the quality of automatic collection and clustering is still quite low. Automatic query clustering is gaining popularity and has its place, but some pages still need to be merged manually, since the system does not produce an ideal ready-made solution; in my opinion, you will simply get confused and fail to immerse yourself in the project.

To compile and cluster a proper, full-fledged semantic core for any project, I use the semi-automatic method in 90% of cases.

So, in order we need to follow these steps:

  1. Selection of queries for topics
  2. Collecting the kernel based on requests
  3. Cleaning up non-target requests
  4. Clustering (breaking phrases into structure)

I showed an example of a selected semantic core grouped into a structure above. Let me remind you that ours is an online clothing store, so let's start with point 1.

1. Selection of phrases for your topic

At this stage we will need the Yandex Wordstat tool, your competitors and logic. The aim of this step is to collect a list of high-frequency phrases that define the topic.

How to select queries to collect semantics from Yandex Wordstat

Go to the service, select the city or region you need, enter the "fattest" (highest-frequency) queries you can think of, and look at the right column. There you will find the thematic words you need, both for other sections and as frequency synonyms of the entered phrase.

How to select queries for the semantic core using competitors

Enter the most popular queries into the search engine and pick out some of the most popular sites, many of which you most likely already know.

Pay attention to the main sections and save the phrases you need.

At this stage it is important to do one thing right: cover all the possible words in your topic as fully as possible, missing nothing; then your semantic core will be as complete as possible.

For our example, we need to compile a list of phrases/keywords like the following:

  • Clothes
  • Shoes
  • Boots
  • Dresses
  • T-shirts
  • Underwear
  • Shorts

  Which phrases are pointless to enter: "women's clothing", "buy shoes", "prom dress" and so on. Why? These phrases are the "tails" of the queries "clothes", "shoes" and "dresses", and they will be added to the semantic core automatically at the second stage of collection. You can add them, but it would be pointless double work.

Which keys do you need to enter? "Low boots" and "high boots" are not the same thing as "boots": it is the word form that matters, not whether the words share a root or not.

For some topics the list of key phrases will be long, while for others it consists of a single word; don't be alarmed. For an online door store, for example, the word "door" may well be enough to compile the semantic core.

So, by the end of this step we should have a list like this.

2. Collecting queries for the semantic core

For a proper, full collection we need a program. I will show the example using two programs in parallel:

  • Paid: Key Collector, for those who already have it or want to buy it.
  • Free: Slovoeb, a free program for those who are not ready to spend money.

Open the program

Create a new project and name it, for example, Mysite.

Now to further collect the semantic core, we need to do several things:

Create a new account on Yandex mail (using your main one is not recommended, since it can be banned for making too many requests). Suppose you created an account, for example ivan.ivanov@yandex.ru, with the password super2018. You now need to specify this account in the settings as ivan.ivanov:super2018 and click the "save changes" button below. More details in the screenshots.

Select the region for compiling the semantic core. Choose only the regions in which you plan to promote, and click save. The query frequencies, and whether queries make it into the collection at all, depend on this.

All the settings are done; all that remains is to add the list of key phrases prepared in the first step and click the "start collecting" button.

The process is fully automatic and quite long. You can make coffee in the meantime, but if the topic is broad, like the one we are collecting, it can run for several hours 😉

Once all the phrases are collected, you will see something like this:

And this stage is over - let's move on to the next step.

3. Cleaning the semantic core

First, we need to remove the queries that don't interest us (non-target ones):

  • those related to other brands, for example Gloria Jeans or Ecco;
  • informational queries, for example "what to wear with boots", "jeans sizes";
  • those similar in topic but unrelated to your business, for example "used clothing", "wholesale clothing";
  • queries with no relation to the topic at all, for example "Sims dresses", "puss in boots" (quite a lot of these turn up after collecting the semantic core);
  • queries mentioning other regions, metro stations, districts or streets (no matter which region you collected queries for, other regions still slip in).

Cleaning is done manually as follows:

Enter a word and press "Enter". If the semantic core we created contains matching phrases, select what was found and press delete.

I recommend entering not the whole word but a stem, without prepositions or endings. For example, entering the stem "glori" finds both "buy jeans at gloria" and "buy jeans at gloria's", that is, all the inflected forms; if you entered the full form "gloria", the other forms would not be found.

Go through all the categories above in this way and remove the unnecessary queries from the semantic core. It can take a significant amount of time, and you may end up deleting most of the collected queries, but the result will be a complete, clean and correct list of all the possible queries your site can be promoted on.

Now upload all your queries to Excel.

You can also remove non-target queries from the semantics en masse, provided you have a list of stop words. This is easy to do for the typical groups of words with cities, metro stations and streets. You can download the list of words I use at the bottom of the page.
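Here is a minimal sketch of such mass cleanup; file names are illustrative:

```python
# A sketch of mass cleanup with a stop-word file (cities, metro stations,
# streets). Substring matching is used, so stems without endings, as
# recommended above, also work.
with open("stop_words.txt", encoding="utf-8") as f:
    stop_words = {line.strip().lower() for line in f if line.strip()}

with open("queries.txt", encoding="utf-8") as f:
    queries = [line.strip() for line in f if line.strip()]

kept = [q for q in queries
        if not any(s in q.lower() for s in stop_words)]

with open("queries_clean.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(kept))
```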

4. Clustering of the semantic core

This is the most important and interesting part: we need to divide the queries into pages and sections, which together will form the structure of the site. A little theory first, on what to follow when separating queries:

  • Competitors. Look at how the semantic cores of your competitors from the TOP are clustered and do the same, at least for the main sections. Also see which pages rank for low-frequency queries. For example, if you are not sure whether to create a separate section for the query "red leather skirts", enter the phrase into a search engine and look at the results. If the results contain resources with such sections, it makes sense to create a separate page.
  • Logic. Group the whole semantic core using logic: the structure should be clear and form, in your head, a structured tree of pages with categories and subcategories.

And a couple more tips:

  • It is not recommended to place fewer than 3 queries on a page.
  • Don't make too many levels of nesting; try to keep it to 3-4 (site.ru/category/subcategory/sub-subcategory).
  • Do not make long URLs. If clustering the semantic core leaves you with many nesting levels, shorten the URLs of categories high in the hierarchy, i.e. instead of "your-site.ru/zhenskaya-odezhda/palto-dlya-zhenshin/krasnoe-palto" use "your-site.ru/zhenshinam/palto/krasnoe".

Now to practice

Clustering the core: an example

First, let's break all the queries into main categories. Judging by the logic of competitors, the main categories for a clothing store will be men's clothing, women's clothing and children's clothing, plus a number of categories not tied to gender or age, such as simply "shoes" or "outerwear".

We group the semantic core in Excel. Open our file and proceed:

  1. We break it down into main sections
  2. Take one section and break it into subsections

I will show the process on one section, men's clothing, and its subsections. To separate some keys from the others, select the entire sheet and click conditional formatting -> cell selection rules -> text contains.

Now, in the window that opens, type "men" (a stem that also matches "men's") and press Enter.

Now all our keys for men's clothing are highlighted. It is enough to apply a filter to separate the selected keys from the rest of the collected semantic core.

So let's turn on the filter: select the column with the queries and click sort and filter -> filter.

And now let's sort

Create a separate sheet, then cut the highlighted rows and paste them there. You will keep splitting the core with this method.
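The same routing can be scripted; a sketch using pandas (with openpyxl assumed installed) that mirrors Excel's "text contains" plus filter, with illustrative file and column names:

```python
# A sketch: split queries into sheets by a substring, mirroring the
# Excel "text contains" plus filter routine described above.
import pandas as pd

df = pd.read_excel("semantic_core.xlsx")  # assumes a "phrase" column
# Check the more specific stem first: "women" also contains "men".
sections = {"Womens clothing": "women", "Mens clothing": "men"}

with pd.ExcelWriter("structure.xlsx") as writer:
    rest = df
    for sheet, stem in sections.items():
        mask = rest["phrase"].str.contains(stem, case=False)
        rest[mask].to_excel(writer, sheet_name=sheet, index=False)
        rest = rest[~mask]  # keep only the rows not yet routed
    rest.to_excel(writer, sheet_name="All queries", index=False)
```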

Rename this sheet "Men's clothing", and call the sheet holding the rest of the semantic core "All queries". Then create another sheet, call it "Structure" and place it first. On the structure page, build a tree. You should get something like this:

Now we need to divide the large men's clothing section into subsections and sub-subsections.

For ease of use and navigation through your clustered semantic core, add links from the structure to the corresponding sheets. To do this, right-click the desired item in the structure and do as shown in the screenshot.

Now methodically separate the queries by hand, deleting along the way anything you failed to notice and delete at the core-cleaning stage. Ultimately, thanks to the clustering of the semantic core, you should end up with a structure similar to this:

So. What we learned to do:

  • Select the queries we need to collect the semantic core
  • Collect all possible phrases for these queries
  • Clean out "garbage"
  • Cluster and create structure

Here is what creating such a clustered semantic core lets you do next:

  • Create a structure on the site
  • Create a menu
  • Write texts, meta descriptions, titles
  • Collect positions to track dynamics of requests

Now a little about programs and services

Programs for collecting the semantic core

Here I will describe not only programs, but also the plugins and online services that I use.

  • Yandex Wordstat Assistant: a plugin that makes selecting queries from Wordstat convenient. Great for quickly compiling the core of a small site or a single page.
  • Key Collector (Slovoeb is the free version): a full-fledged program for clustering and creating a semantic core. It is very popular and offers a huge amount of functionality beyond its main purpose: key selection from many other systems, auto-clustering, position tracking in Yandex and Google, and much more.
  • Just-magic: a multifunctional online service for compiling a core, automatic grouping, checking text quality and other functions. The service is shareware; full operation requires a subscription fee.

Thank you for reading the article. With this step-by-step manual you will be able to compose your site's semantic core for promotion in Yandex and Google. If you have any questions, ask in the comments. Below are the bonuses.


