
How I Used AI to Add Search Exclusions Across a Multi-Site CMS in Just a Few Hours

A developer's honest account of collaborating with Claude Code on a cross-cutting feature that touched 5 indexing strategies, 3 projects, and 8 content types — bumps and all.

Last week, a content editor on our team pinged me with a question that sounded simple: "Can we exclude certain pages from showing up in search results?"

We run four websites off a single Kentico Xperience 31.3.0 instance. Our Lucene.Net search spans five separate indexes across three .NET projects. The "simple" ask meant touching every one of them.

I decided to try something different. Instead of spending my morning reading through unfamiliar indexing strategy code and working out the implementation plan on a whiteboard, I opened Claude Code and started a conversation with it about the codebase.

What happened next was the most productive few hours of development I've had in months. It was also messy, imperfect, and occasionally wrong — which is exactly why I think the story is worth telling.

The Setup: Five Indexes, Three Projects, Zero Documentation

Our Lucene search implementation is spread across the solution like this:

  • CON_SEARCH — indexes Video, NewsArticle, PressRelease, and MagazineArticle content types on the main site
  • NM_ARTICLE_SEARCH — indexes Article content
  • NS_SESSION_SEARCH — indexes Workshop content for in-person sessions
  • NS_SPEAKER_SEARCH — indexes Speaker content for the sessions
  • NS_SITE_SEARCH — indexes Workshop, Page, and Speaker content

Each index has its own indexing strategy class that inherits from DefaultLuceneIndexingStrategy and implements MapToLuceneDocumentOrNull. That method is the key: it receives a content item event, queries Kentico for the full content, maps it to a Lucene document, and returns it. If it returns null, the item gets skipped.
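For readers unfamiliar with the integration, the shared pattern looks roughly like this — a sketch only, with the signature approximated from the Kentico.Xperience.Lucene integration and the helper and field names invented for illustration:

```csharp
// Sketch of one indexing strategy (names and signature approximate —
// check your version of Kentico.Xperience.Lucene for the exact contract).
public class ArticleSearchIndexingStrategy : DefaultLuceneIndexingStrategy
{
    public override async Task<Document?> MapToLuceneDocumentOrNull(IIndexEventItemModel item)
    {
        // Query Kentico for the full content item behind the event.
        var article = await GetArticle(item); // hypothetical helper

        if (article is null)
            return null; // returning null skips indexing for this item

        // Map the content item onto a Lucene document.
        var document = new Document();
        document.Add(new TextField("title", article.Title, Field.Store.YES));
        document.Add(new TextField("content", article.Text, Field.Store.YES));
        return document;
    }
}
```

That `return null` escape hatch is what makes the whole feature possible.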

There was no existing mechanism for editors to flag individual pages for exclusion. The strategies just indexed everything of their registered content types, no questions asked.

Starting the Conversation

My first message to Claude Code was deliberately open-ended: "Is there a way with the current Lucene search to exclude just certain pages from search?"

I wasn't asking it to write code yet. I wanted to see if it could understand our search architecture well enough to propose something sensible.

It did something I wouldn't have expected from an autocomplete tool: it analyzed all five indexing strategy files, identified the MapToLuceneDocumentOrNull pattern as the natural extension point, and came back with three distinct options:

  1. Exclude by URL path — hardcode a list of paths to skip. Simplest, but requires a developer for every change.
  2. Exclude by content field — add a boolean field to content types that editors control from the CMS admin. The AI's recommendation.
  3. Exclude by ItemGuid — maintain a list of GUIDs for known one-offs.
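To make the tradeoffs concrete, here's roughly how each option would look inside MapToLuceneDocumentOrNull. These are illustrative fragments, not our actual code — the paths, GUIDs, and property names are placeholders:

```csharp
// Option 1: hardcoded URL paths — a developer edits this list for every change.
private static readonly string[] ExcludedPaths = { "/legal/archive" }; // placeholder
// ...inside MapToLuceneDocumentOrNull:
//   if (ExcludedPaths.Contains(urlPath)) return null;

// Option 2: editor-controlled boolean field — the one we chose.
//   if (page is ISearchSettings settings && settings.ExcludedFromSearch) return null;

// Option 3: GUID list for known one-offs.
private static readonly HashSet<Guid> ExcludedGuids = new(); // placeholder
//   if (ExcludedGuids.Contains(item.ItemGuid)) return null;
```

Options 1 and 3 both put a developer in the loop for every exclusion; only Option 2 hands the lever to editors.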

This wasn't a parlor trick. The AI had to understand that returning null from MapToLuceneDocumentOrNull already skips indexing, that content types in Kentico Xperience can be extended with reusable field schemas, and that our indexing strategies query content items where those fields would be accessible.

I chose Option 2. An editor-controlled boolean is the right call for a CMS — the whole point is to give content people control without filing Jira tickets.

The Implementation Plan

The AI laid out a step-by-step plan:

  1. Create a Reusable Field Schema in Kentico Admin called SearchSettings with a single boolean field: ExcludedFromSearch
  2. Assign that schema to all 8 indexed content types (Video, NewsArticle, PressRelease, MagazineArticle, Article, Workshop, Page, Speaker)
  3. Run Kentico's code generation to produce the ISearchSettings interface
  4. Update all 5 indexing strategies with an exclusion check
  5. Use Kentico CI to export the schema changes, commit everything, and let other environments restore
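Steps 3 and 5 of the plan map onto the Xperience CLI. The commands below reflect my understanding of the Xperience by Kentico tooling — verify the exact flags against your version's documentation before running:

```shell
# Step 3: regenerate strongly-typed code for reusable field schemas
dotnet run -- --kxp-codegen --type "ReusableFieldSchemas"

# Step 5: serialize object changes into the CI repository, then commit;
# other environments pick them up with --kxp-ci-restore
dotnet run -- --kxp-ci-store
```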

Where Things Got Messy

Here's the part that most AI success stories leave out: it wasn't a smooth ride.

Wrong directory. Our development machines have the project in multiple locations for different purposes. The AI initially searched the wrong path when the active working copy was elsewhere. I caught it, corrected it, and we moved on. A small thing, but a reminder that the human in the loop isn't optional.

The .csproj glob problem. Our main project file had a pattern I'd forgotten about: a glob exclusion for generated schema files (<Compile Remove="Models\Generated\ReusableFieldSchemas\**" />) with explicit includes for specific files. The new ISearchSettings interface file wasn't in the include list, so the build silently ignored it. The AI caught the pattern in the .csproj, understood the glob/include interaction, and added the new file to the include list. This was genuinely helpful — I might have spent 20 minutes wondering why the interface wasn't resolving.
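For anyone who hasn't hit this before, the .csproj shape looks something like the following (reconstructed; the existing file names are illustrative). The Remove glob strips every generated schema file from compilation, and only files explicitly re-included get built:

```xml
<!-- Reconstructed sketch of the pattern — file names are placeholders -->
<ItemGroup>
  <Compile Remove="Models\Generated\ReusableFieldSchemas\**" />
  <!-- Only explicitly listed generated files are compiled: -->
  <Compile Include="Models\Generated\ReusableFieldSchemas\ExistingSchema.generated.cs" />
  <!-- The fix: the new interface had to be added to the include list -->
  <Compile Include="Models\Generated\ReusableFieldSchemas\ISearchSettings.generated.cs" />
</ItemGroup>
```

Because MSBuild raises no warning for a file that is simply never included, the symptom is just an interface that refuses to resolve.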

Speaker strategy refactoring. The SpeakerSearchIndexingStrategy had a structural quirk: the main code path was inside an if (speaker is not null) { ... } else { return null; } block, which made inserting the exclusion check awkward. The AI recognized this and suggested refactoring to an early return pattern — if (speaker is null) return null; — before adding the search exclusion check. Clean, and the kind of thing you want a second pair of eyes for.
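The before/after shape of that refactoring, simplified down to the control flow (the surrounding Kentico types are elided):

```csharp
// Before (simplified): the main path nested inside the non-null branch
if (speaker is not null)
{
    // ... build and return the Lucene document ...
}
else
{
    return null;
}

// After: early returns flatten the method, so the new check slots in cleanly
if (speaker is null)
    return null;

if (speaker is ISearchSettings settings && settings.ExcludedFromSearch)
    return null;

// ... build and return the Lucene document ...
```

Same behavior, one less level of nesting, and an obvious place for every future guard clause.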

The Actual Code Change

After all the setup, the code change itself was almost anticlimactic. In each of the five indexing strategies, after the null check for the queried page, we added:

if (page is ISearchSettings s && s.ExcludedFromSearch)
    return null;

That's it. One line per content type branch, leveraging C# pattern matching to safely check whether the content type implements the interface and whether the field is set. If the content type doesn't implement ISearchSettings (say, someone adds a new type later and forgets the schema), the check silently passes — it doesn't break indexing, it just doesn't exclude.
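The fail-safe behavior is worth seeing in isolation. Here's a minimal stand-alone demo of that property — the types are stand-ins, not our real Kentico content types:

```csharp
using System;

public interface ISearchSettings
{
    bool ExcludedFromSearch { get; set; }
}

// A content type that picked up the schema
public class ExcludableArticle : ISearchSettings
{
    public bool ExcludedFromSearch { get; set; }
}

// A content type someone forgot to assign the schema to
public class LegacyPage { }

public static class Demo
{
    // The same one-line check used in the indexing strategies
    public static bool ShouldSkip(object page) =>
        page is ISearchSettings s && s.ExcludedFromSearch;

    public static void Main()
    {
        Console.WriteLine(ShouldSkip(new ExcludableArticle { ExcludedFromSearch = true }));  // True
        Console.WriteLine(ShouldSkip(new ExcludableArticle()));                              // False
        Console.WriteLine(ShouldSkip(new LegacyPage())); // False — silently passes, indexing unaffected
    }
}
```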

The ISearchSettings interface itself was equally minimal:

public interface ISearchSettings
{
    bool ExcludedFromSearch { get; set; }
}

Placed in Portal.Core so it's accessible from all projects without circular references.

Build Verification

The final dotnet build XBK.sln came back with zero compilation errors.

Five files changed across three projects. One new interface. One .csproj update. Eight content types gained editor-controlled search exclusion. The code changes themselves took under an hour.

What the AI Did Well

Architecture analysis. Before writing a single line of code, the AI mapped out the search implementation across the entire solution — five strategies, three projects, the MapToLuceneDocumentOrNull extension point, the content types each one indexes. This is the kind of survey work that takes a human 30–45 minutes of file hopping.

Option generation. Presenting three approaches with tradeoffs, rather than jumping straight to an implementation, let me make an informed architectural decision. The AI recommended the right option (editor-controlled) but gave me the information to disagree.

Cross-project awareness. It understood that the interface needed to live in Portal.Core because that's the shared dependency, and it knew to qualify the type with the global namespace to avoid import conflicts between projects.

Build system knowledge. Catching the .csproj glob exclusion pattern was a genuine save. That's the kind of thing that burns an hour of debugging if you don't know to look for it.

What I Still Had to Do Myself

All the Kentico Admin work. Creating the Reusable Field Schema, configuring the boolean field, assigning it to eight content types — that's point-and-click CMS administration. The AI can't drive the Kentico admin UI.

Correcting the AI's assumptions. The wrong directory, the missing interface file — these required domain knowledge about our specific setup and the Kentico platform's code generation behavior.

Architectural decision-making. Choosing Option 2 over Options 1 or 3 was a judgment call based on how our content team works and what kind of maintenance burden we're willing to accept.

Testing. The AI can verify a build succeeds. It can't verify that an editor toggling a boolean in the CMS actually removes a page from search results on all four sites. That's manual QA against a running instance.

Takeaways for Developers Considering AI-Assisted Development

Start with analysis, not generation. My most productive AI interactions begin with "help me understand this" rather than "write me code for this." The codebase analysis set up every subsequent decision.

Keep your hands on the wheel. The AI was wrong about the directory and wrong about code generation behavior. Both were easy corrections because I was paying attention. If I'd walked away and come back to a PR, I'd have been debugging for longer than the feature took to build.

AI shines at cross-cutting changes. Modifying five files across three projects in a consistent way is exactly the kind of work where humans make mistakes — you update four strategies and forget the fifth, or you use slightly different logic in one. The AI applied the same pattern uniformly.

The boring stuff matters most. The headline feature was one line of C#. The real work was the interface placement, the .csproj fix, the Speaker strategy refactoring. The AI handled all of that competently, which is where the time savings actually came from.

It's a collaboration, not a delegation. I didn't hand off a task and receive a result. I had a working session with an unusually knowledgeable colleague who could read every file in the solution simultaneously but occasionally needed correcting on platform-specific details. That mental model — collaborative session, not service request — is the one that actually works.


A few hours of focused work. Five indexing strategies updated. Eight content types extended. Zero compilation errors. One content editor who can now exclude pages from search without filing a ticket.

Not bad for a Wednesday morning.