AI for your catalog: not hype, but KPIs. How we implement ChatGPT and what we measure

By: WebGoodPeople

The key question when introducing AI into e-commerce is not "does ChatGPT work" but "what exactly changes in revenue and operational metrics." Without KPIs, AI becomes an expensive experiment that is hard to justify to owners.

We have been implementing AI modules in 1C-Bitrix and Next.js projects for over a year. Here is how we define metrics before the work begins — and what we actually get at the end.

Three tasks AI solves in a catalog

In most e-commerce projects, AI delivers practical value in three areas:

  1. Generating and updating product descriptions. Especially relevant for catalogs with 5,000+ SKUs where content managers simply cannot keep up.
  2. SEO texts and meta tags. Bulk generation of titles and descriptions targeting keyword clusters.
  3. Moderation and normalization. Detecting duplicates, conflicting attributes, and broken data.
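As an illustration of the third task, here is a minimal sketch of duplicate detection via name normalization, the kind of cheap pre-pass worth running before any AI call. Field names (`name`, `sku`) are hypothetical, not a real catalog schema:

```python
import re

def normalize_name(name: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace so that
    near-identical product names compare equal."""
    name = re.sub(r"[^\w\s]", " ", name.lower())
    return re.sub(r"\s+", " ", name).strip()

def find_duplicates(products: list[dict]) -> dict[str, list[str]]:
    """Group SKU codes that share the same normalized name."""
    groups: dict[str, list[str]] = {}
    for p in products:
        groups.setdefault(normalize_name(p["name"]), []).append(p["sku"])
    return {name: skus for name, skus in groups.items() if len(skus) > 1}
```

In practice the same normalized key is also useful for detecting conflicting attributes: two SKUs with one key but different weights is a data error, not a duplicate.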

Metrics before we start: what we measure

Before starting, we always capture a baseline across four parameters:

  • share of SKUs with descriptions under 100 characters (typically 40-70%; target under 10%);
  • time to fill 100 product cards, in person-hours (typically 8-12 h; target 0.5-1 h, revision only);
  • organic traffic from long-tail queries (target +25-40%);
  • CTR in search results, via Search Console (target +0.3-0.8 percentage points).

These numbers come from the client's actual data for the past 90 days; otherwise there is nothing to compare against.
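The first baseline number can be computed directly from a catalog export. A minimal sketch, assuming each product is a dict with an optional `description` field (a hypothetical schema, not the article's actual export format):

```python
def baseline_short_description_share(products: list[dict], min_len: int = 100) -> float:
    """Percentage of SKUs whose description is missing or shorter
    than min_len characters (the baseline metric before AI rollout)."""
    short = sum(1 for p in products if len(p.get("description") or "") < min_len)
    return round(100 * short / len(products), 1)
```

Re-running the same function after rollout gives the before/after pair the KPI is judged on.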

How the system works: background tasks without blocking

We do not call AI in the request path: each OpenAI API call takes 1-5 seconds, and embedding that in a synchronous user request is unacceptable. Instead, we use a queue of background tasks: a new or updated product goes into the queue, an AI worker generates a draft, a moderator reviews it (or it is auto-approved), and only then is it published.
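The pipeline can be sketched as a minimal in-memory version. This is an illustration only: `Queue` stands in for a real task queue, and `generate_description` is a placeholder for the slow OpenAI call:

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class Card:
    sku: str
    status: str = "queued"  # queued -> needs_review / published
    draft: str = ""

def generate_description(sku: str) -> str:
    """Placeholder for the 1-5 s OpenAI API call."""
    return f"Draft description for {sku}"

def run_worker(q: "Queue[Card]", auto_approve: bool) -> list[Card]:
    """Drain the queue: generate a draft for each card and either
    publish it or route it to a moderator."""
    done = []
    while not q.empty():
        card = q.get()
        card.draft = generate_description(card.sku)
        card.status = "published" if auto_approve else "needs_review"
        done.append(card)
    return done
```

The point of the structure is that the user-facing request only enqueues; everything slow happens in the worker.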

On the Bitrix side this is handled by the OnAfterIBlockElementAdd/OnAfterIBlockElementUpdate event handlers. On the Next.js side, a scheduled job periodically fetches products that are missing descriptions.

The key parameter is the auto-approve threshold. For technical specifications (weight, dimensions, material), manual review is mandatory. For marketing descriptions, auto-approve with periodic spot-check audits.
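That routing rule is small enough to show directly. A sketch with an assumed field taxonomy (the field names are illustrative):

```python
# Fields where an AI mistake is a factual error, not a style issue.
TECHNICAL_FIELDS = {"weight", "dimensions", "material"}

def route_draft(field: str) -> str:
    """Technical specs always go to a human; marketing copy is
    auto-approved (spot-check audits happen outside this function)."""
    return "manual_review" if field in TECHNICAL_FIELDS else "auto_approve"
```

Keeping the rule explicit and centralized makes it easy to tighten later, for example when an audit reveals drift in a previously safe field.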

What clients actually get

On a project with 12,000 SKUs (steel distribution) after three months: product description coverage went from 34% to 91%; time to process 100 cards dropped from 10 hours to 45 minutes (revision only); organic traffic from long-tail queries increased by 31% per Search Console.

On a project with 3,500 SKUs (watches, multi-market): CTR in search results increased by 0.4 percentage points over the first 6 weeks; unique keyword phrases in index grew by 18%.

What AI does not do (and should not)

A common mistake is expecting full autonomy from AI. We are explicit with clients: AI does not replace an expert in technical niches (tolerances, standards, certifications); generation without review can create SEO duplicates; without logging requests and responses, it is impossible to track quality degradation.

The last point is critical: we log every OpenAI request with a hash of the input and output — this lets us audit quality and catch model drift when the underlying model is updated.
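A minimal sketch of such an audit record (the exact log schema here is an assumption, not the production format): hashing the prompt and response means drift shows up as the same prompt hash producing a different response hash after a model update, without storing full texts in the log index.

```python
import hashlib
import time

def log_ai_call(prompt: str, response: str, model: str) -> dict:
    """Build a compact audit record for one OpenAI call."""
    return {
        "ts": time.time(),
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        "response_len": len(response),
    }
```

In production the record would also carry the SKU and field being generated, so a quality regression can be traced back to specific cards.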

Bottom line: AI as an operational tool, not R&D

AI integration pays off when it solves a specific bottleneck with a measurable KPI. The most common case — catalogs with incomplete content where the team is fully occupied. If you have 5,000+ SKUs and your content manager cannot keep up — that is exactly the task AI solves predictably and cheaply.

Want to find out whether this approach fits your catalog? Tell us about your project and we will assess it within 48 hours.

