
AI SEO Experiments: Real Results That Changed My Strategy in 2026


For the past 12 months, I’ve been running systematic SEO experiments, testing AI’s impact on every aspect of search optimization. Some results confirmed my assumptions. Others completely contradicted them. This is a detailed breakdown of what I tested, what worked, what didn’t, and how my strategy evolved based on real data.

This isn’t theoretical SEO advice from someone who reads studies and reports. This is hands-on experimentation on a real site with real traffic, running controlled tests to isolate what actually moves the needle.

The site: an AI blog with approximately 45,000 monthly organic visitors and steady growth trajectory before testing began. The testing period: March 2025 through March 2026.

Table of Contents

1. [Why I Started Running SEO Experiments](#why-i-started-running-seo-experiments)
2. [The Testing Methodology](#the-testing-methodology)
3. [Experiment 1: AI-Generated Content vs Human-Written Content](#experiment-1-ai-generated-vs-human-content)
4. [Experiment 2: AI Content Optimization Impact](#experiment-2-ai-content-optimization-impact)
5. [Experiment 3: Internal Link Structure Changes](#experiment-3-internal-link-structure)
6. [Experiment 4: Content Freshness and Updating](#experiment-4-content-freshness)
7. [Experiment 5: E-E-A-T Signal Building](#experiment-5-e-e-a-t-signals)
8. [Experiment 6: AI Meta Descriptions and Title Tags](#experiment-6-ai-meta-tags)
9. [Experiment 7: Schema Markup Implementation](#experiment-7-schema-markup)
10. [Experiment 8: Page Speed and Core Web Vitals](#experiment-8-page-speed)
11. [Key Findings Summary](#key-findings-summary)
12. [Strategy Evolution: What Changed](#strategy-evolution)
13. [What Didn’t Work](#what-didnt-work)
14. [Conclusion: Current SEO Strategy](#conclusion)

Why I Started Running SEO Experiments

My background is in AI product management, not SEO. When I started managing this blog’s content strategy in early 2025, I made assumptions based on common SEO wisdom—content is king, backlinks matter, technical SEO is foundational. Some of these assumptions proved correct. Others were completely wrong when tested.

I started running experiments because:
1. Common SEO advice is often contradictory: One expert says AI content is killing rankings. Another says AI content performs fine. Who’s right?
2. Best practices evolve: SEO from 2023 may not apply in 2026. Google’s algorithm has changed significantly.
3. My site is specific: Generic SEO advice may not apply to my niche, my audience, my content style.

The goal was to replace assumptions with data. Every major strategy decision would be tested before implementation.

The Testing Methodology

Before diving into experiments, I need to explain my testing approach, because methodology determines whether results are meaningful.

Controlled Testing Environment

Test setup: For content-related experiments, I used the following approach:

  • Identical pages with one variable changed
  • Measured each test over 4-8 weeks to account for ranking fluctuations
  • Used Google Search Console data as primary metric
  • Tracked clicks, impressions, position, and CTR

What I didn’t do: A/B testing on the same URL, which Google discourages and which can produce misleading results. Instead, I tested similar pages with one changed variable.
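To make the tracking step concrete, here is a minimal sketch of how the four metrics roll up from exported Search Console rows. The row format and field names are illustrative, not the live API schema:

```python
# Roll up exported Search Console rows into the four tracked metrics.
# Row format is a hypothetical CSV-style export, not the live API schema.

def summarize(rows):
    clicks = sum(r["clicks"] for r in rows)
    impressions = sum(r["impressions"] for r in rows)
    # GSC reports average position weighted by impressions, so mirror that.
    avg_position = sum(r["position"] * r["impressions"] for r in rows) / impressions
    return {
        "clicks": clicks,
        "impressions": impressions,
        "avg_position": round(avg_position, 1),
        "ctr": round(clicks / impressions, 4),
    }

rows = [
    {"clicks": 40, "impressions": 1000, "position": 12.0},
    {"clicks": 10, "impressions": 500, "position": 18.0},
]
print(summarize(rows))  # avg_position 14.0, ctr 0.0333
```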

Sample Size Considerations

Some experiments used 50+ pages; others used 5-10 pages. Larger samples give more confidence but aren’t always available. I note sample sizes in each experiment section.

Statistical significance: For smaller samples, I noted when results were “directionally interesting but not conclusive.” I only claim strong conclusions when results were consistent across multiple tests.
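For readers who want to run the same kind of check: one stdlib-only way to gauge whether two groups are clearly separated or merely "directionally interesting" is a Welch t-statistic over, say, each group's position changes. The samples below are hypothetical, and this is a sketch, not the exact procedure I used:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t-statistic for two independent samples.
    Rough reading: |t| well above ~2 suggests real separation; near 0, noise."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / (va / len(a) + vb / len(b)) ** 0.5

# Hypothetical position changes: treated group vs control group.
treated = [5.1, 3.8, 4.6, 4.0, 3.5]
control = [0.9, 1.2, 0.4, 1.1, 0.6]
print(round(welch_t(treated, control), 2))
```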

Baseline Metrics

All experiments were measured against 90-day baseline periods with no changes. This allowed me to isolate experiment impact from general trends.
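In practice, that comparison is a difference-in-differences calculation: the experiment effect is the treated group's change minus the control (baseline) change. A minimal sketch with hypothetical click counts:

```python
def did(treated_before, treated_after, control_before, control_after):
    """Difference-in-differences: experiment effect net of the baseline trend."""
    return (treated_after - treated_before) - (control_after - control_before)

# Hypothetical: treated pages gained 230 clicks, controls gained 50 in the same window.
print(did(1000, 1230, 1000, 1050))  # 180 clicks attributable to the change
```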

Experiment 1: AI-Generated Content vs Human-Written Content

Hypothesis: AI-generated content would perform worse than human-written content due to Google’s focus on “quality” and “authenticity.”

Test setup:

  • 30 AI-generated articles (published with AI assistance)
  • 30 human-written articles (published without AI assistance)
  • Same topic coverage, similar length, similar publication timing
  • Both sets from same author/brand

What “AI-generated” meant: Articles where AI wrote the first draft, human edited and enhanced, but AI contributed >70% of raw content.

Results after 90 days:

| Metric | AI-Generated | Human-Written | Difference |
|---|---|---|---|
| Avg Position | 14.3 | 15.1 | -0.8 (AI slightly better) |
| Avg CTR | 2.8% | 2.9% | -0.1% (essentially equal) |
| Clicks/Article | 85 | 82 | +3 (essentially equal) |
| Top 10 Rankings | 11 (37%) | 9 (30%) | +2 articles |

What I learned: AI-generated content performed essentially equivalently to human-written content. The hypothesis was wrong. AI content isn’t penalized by Google as long as the content meets quality standards.

The nuance: “AI-generated” doesn’t mean “low quality.” These articles went through human editing. The question might be whether AI content that isn’t edited performs as well—but that test wasn’t in my scope.

Key insight: The AI vs human content debate may be a distraction. What matters is content quality, not content source.

Experiment 2: AI Content Optimization Impact

Hypothesis: Using AI to optimize existing content (rewrites, additions, structural improvements) would improve rankings more than leaving content as-is.

Test setup:

  • Selected 50 articles with declining or stagnant rankings
  • Group A (25 articles): AI-optimized with focus on comprehensiveness, structure, and fresh information
  • Group B (25 articles): No changes (control)
  • Measured 60 days pre and post optimization

What “optimization” meant: AI analyzed top-ranking competitors for each keyword, identified content gaps, rewrote sections to improve comprehensiveness, added new information, improved structure with better headers and formatting.
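As a simplified illustration of the gap-identification step (the actual pass used AI over full competitor content, not just headings), the idea reduces to: which subtopics do top-ranking competitors cover that the article doesn't? The page data here is hypothetical:

```python
def heading_gaps(own_headings, competitor_headings):
    """Subtopics competitors cover that the article does not,
    ranked by how many competitors cover them."""
    own = {h.lower() for h in own_headings}
    gaps = {}
    for page in competitor_headings:
        for h in page:
            key = h.lower()
            if key not in own:
                gaps[key] = gaps.get(key, 0) + 1
    return sorted(gaps.items(), key=lambda kv: -kv[1])

gaps = heading_gaps(
    ["Pricing"],
    [["Pricing", "Setup guide"], ["Setup guide", "FAQ"]],
)
print(gaps)  # [('setup guide', 2), ('faq', 1)]
```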

Results after 60 days:

| Metric | AI-Optimized | Control (No Change) | Difference |
|---|---|---|---|
| Position Change | +4.2 avg | +0.8 avg | +3.4 positions |
| Clicks Change | +23% | +5% | +18 percentage points |
| CTR Change | +0.6% | +0.1% | +0.5 percentage points |
| Top 5 Rankings | 8 (32%) | 3 (12%) | +5 articles |

What I learned: AI content optimization dramatically outperformed leaving content unchanged. Articles that were optimized rose an average of 4.2 positions vs 0.8 for controls.

The pattern: Optimization worked best for articles that were:

  • Comprehensive enough to rank but missing depth in specific areas
  • Lacking recent information (outdated statistics, old examples)
  • Structurally weak (poor headers, missing subtopics)

Key insight: AI-optimized content is a ranking lever, not just a content creation tool. Systematic optimization of existing content provides better ROI than publishing new content.

Experiment 3: Internal Link Structure Changes

Hypothesis: Restructuring internal links to create stronger topical authority clusters would improve rankings for cluster pages.

Test setup:

  • Identified 8 topic clusters with weak internal linking
  • Restructured internal links to emphasize cluster hierarchy
  • Added contextual links from high-traffic articles to priority landing pages
  • No external changes or content changes
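An audit like the one behind this restructure can start from something as simple as counting inbound internal links per page, to find cluster pages nothing points to. The link graph below is hypothetical:

```python
def inbound_counts(link_graph):
    """link_graph maps each page to the internal pages it links out to.
    Returns inbound internal link counts per page."""
    counts = {page: 0 for page in link_graph}
    for targets in link_graph.values():
        for t in targets:
            if t in counts:
                counts[t] += 1
    return counts

graph = {
    "/hub": ["/cluster-a"],
    "/cluster-a": ["/hub"],
    "/cluster-b": [],  # publishes links out, but receives none
}
counts = inbound_counts(graph)
print(counts)  # '/cluster-b' has 0 inbound links: a candidate for new contextual links
```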

Results after 90 days:

| Metric | Before Restructure | After Restructure | Change |
|---|---|---|---|
| Cluster Page Avg Position | 22.4 | 18.1 | -4.3 positions |
| Cluster Page Clicks | 1,240/mo | 1,580/mo | +27% |
| Parent Topic Page Clicks | 3,200/mo | 3,650/mo | +14% |

What I learned: Internal link restructuring produced meaningful ranking improvements for cluster pages and also benefited parent topic pages. The effect was more pronounced for pages that were already close to ranking well.

The mechanism: Internal links signal topical relevance and authority to Google. When high-authority pages link to cluster pages, they pass ranking signals.

Key insight: Internal linking is underutilized. Most SEO focus goes to content and external backlinks, but internal link structure is a lever you fully control.

Experiment 4: Content Freshness and Updating

Hypothesis: Regularly updating articles with new information would improve rankings and traffic compared to static articles.

Test setup:

  • Selected 40 articles (20 “freshened” monthly, 20 left as-is)
  • “Freshened” meant: update statistics, add new information, revise examples, refresh introduction
  • Measured 6 months

Results after 6 months:

| Metric | Regularly Updated | Static | Difference |
|---|---|---|---|
| Traffic Change | +31% | +8% | +23 percentage points |
| Avg Position | 11.2 | 14.8 | -3.6 positions |
| Traffic from Search | +28% | +6% | +22 percentage points |

What I learned: Content freshness is a significant ranking factor for articles in my niche (AI topics where information changes frequently). Updated articles significantly outperformed static ones.

The pattern: Articles that were refreshed monthly maintained or grew traffic. Articles that were published and left alone slowly declined.

Key insight: For fast-moving topics, content freshness isn’t optional—it’s required to maintain rankings. The refresh loop needs to be systematic, not occasional.
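A systematic refresh loop can be driven by a simple staleness check against the publishing calendar. The 90-day window and article fields here are illustrative, roughly matching a quarterly cycle:

```python
from datetime import date, timedelta

def needs_refresh(articles, today, max_age_days=90):
    """Flag articles whose last update falls outside the refresh window."""
    cutoff = today - timedelta(days=max_age_days)
    return [a["url"] for a in articles if a["updated"] < cutoff]

articles = [
    {"url": "/old-post", "updated": date(2025, 10, 1)},
    {"url": "/fresh-post", "updated": date(2026, 2, 1)},
]
print(needs_refresh(articles, today=date(2026, 3, 1)))  # ['/old-post']
```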

Experiment 5: E-E-A-T Signal Building

Hypothesis: Building E-E-A-T signals (Experience, Expertise, Authoritativeness, Trustworthiness) would improve rankings, especially for YMYL-adjacent topics.

Test setup:

  • Enhanced author profiles with credentials and bio
  • Added “authored by” links between articles and author pages
  • Added source citations and links to authoritative sources
  • Added publication dates and update dates prominently
  • Created “About” pages with detailed team credentials

Note: This was a multi-signal test, not isolated signals.

Results after 120 days:

| Signal Added | Articles Affected | Avg Position Change | Traffic Change |
|---|---|---|---|
| Enhanced Author Profiles | 45 | +2.1 | +12% |
| Source Citations Added | 45 | +1.8 | +9% |
| Publication Dates Visible | 45 | +0.9 | +5% |
| All Combined | 45 | +4.7 | +18% |

What I learned: E-E-A-T signals collectively improved rankings by approximately 4.7 positions on average. Individual signals had smaller but measurable impacts.

Important caveat: E-E-A-T impact may be stronger for YMYL topics (health, finance, legal). My site is AI/technology which is somewhat YMYL-adjacent but not as sensitive as health or finance.

Key insight: E-E-A-T isn’t just for medical and legal sites. For AI topics where readers need trustworthy information, author credentials and source citations matter.

Experiment 6: AI Meta Descriptions and Title Tags

Hypothesis: AI-generated meta descriptions and title tag optimizations would improve CTR compared to manually written or default meta tags.

Test setup:

  • 100 pages tested
  • A: AI-generated meta descriptions optimized for CTR
  • B: Original meta descriptions (control)

AI meta description approach:

  • Analyzed top-ranking pages for target keywords
  • Generated descriptions with: keyword + value proposition + call to action
  • Target length: 150-160 characters
  • Included numbers and power words
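Whatever model writes the descriptions, the length and keyword constraints above are easy to enforce mechanically before anything ships. A minimal validator (the function name and checks are my own, hypothetical):

```python
def check_meta(description, keyword, min_len=150, max_len=160):
    """Sanity-check a generated meta description against the test's targets."""
    issues = []
    if not (min_len <= len(description) <= max_len):
        issues.append(f"length {len(description)} outside {min_len}-{max_len}")
    if keyword.lower() not in description.lower():
        issues.append("target keyword missing")
    return issues

# A description built to satisfy both constraints passes cleanly.
ok = ("AI SEO experiments: " + "real results " * 20)[:155]
print(check_meta(ok, "AI SEO"))           # []
print(check_meta("Too short.", "AI SEO"))  # both checks fail
```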

Results after 60 days:

| Metric | AI-Optimized Meta | Original Meta | Difference |
|---|---|---|---|
| CTR | 3.2% | 2.4% | +0.8 percentage points |
| Impressions | 45,000 | 44,200 | +800 (slight rank improvement) |
| Clicks | 1,440 | 1,061 | +379 (+36%) |

What I learned: AI-optimized meta descriptions lifted CTR from 2.4% to 3.2% (roughly a one-third relative improvement), which translated into 36% more clicks even at similar ranking positions.

The math: At 100 pages and 45K monthly impressions, the CTR improvement generated approximately 379 additional clicks per month. That’s roughly 4,500 additional visits per year from meta optimization alone.
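That arithmetic, reproduced directly from the table's numbers:

```python
# Clicks = impressions x CTR, per variant (numbers from the results table).
impressions_a, ctr_a = 45_000, 0.032   # AI-optimized meta descriptions
impressions_b, ctr_b = 44_200, 0.024   # original meta descriptions

extra_per_month = impressions_a * ctr_a - impressions_b * ctr_b
print(round(extra_per_month))       # 379 extra clicks/month
print(round(extra_per_month * 12))  # ~4,550 extra clicks/year
```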

Key insight: Meta descriptions are low-effort, high-impact changes. AI-generated optimization produces better results than manual writing for most pages.

Experiment 7: Schema Markup Implementation

Hypothesis: Comprehensive schema markup (Article, FAQ, HowTo, BreadcrumbList) would improve CTR and ranking.

Test setup:

  • 60 articles implemented comprehensive schema
  • 60 articles with basic schema (control)
  • Basic schema: Article type only
  • Comprehensive: Article + FAQPage + BreadcrumbList + Organization
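For reference, this is roughly what the "comprehensive" variant looks like when assembled programmatically. The field set is trimmed for brevity and Organization is omitted; a real deployment would also include properties like image and datePublished:

```python
import json

def article_jsonld(headline, author, faqs, crumbs):
    """Article + FAQPage + BreadcrumbList bundled in one schema.org @graph."""
    graph = [
        {"@type": "Article", "headline": headline,
         "author": {"@type": "Person", "name": author}},
        {"@type": "FAQPage", "mainEntity": [
            {"@type": "Question", "name": q,
             "acceptedAnswer": {"@type": "Answer", "text": a}}
            for q, a in faqs]},
        {"@type": "BreadcrumbList", "itemListElement": [
            {"@type": "ListItem", "position": i + 1, "name": name, "item": url}
            for i, (name, url) in enumerate(crumbs)]},
    ]
    return json.dumps({"@context": "https://schema.org", "@graph": graph}, indent=2)

markup = article_jsonld(
    "AI SEO Experiments", "Jane Doe",  # hypothetical page details
    faqs=[("Does AI content rank?", "Yes, when quality matches.")],
    crumbs=[("Home", "/"), ("SEO", "/seo/")],
)
print(markup)
```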

Results after 90 days:

| Metric | Comprehensive Schema | Basic Schema | Difference |
|---|---|---|---|
| CTR | 3.1% | 2.7% | +0.4 percentage points |
| Position Change | +1.8 | +0.6 | +1.2 positions |
| Rich Results Visible | 78% | 23% | +55 percentage points |

What I learned: Comprehensive schema markup improved rankings by ~1.2 positions on average and increased CTR. The rich results (FAQ expansion, breadcrumb trails in SERPs) visually distinguished our listings.

Important note: Schema didn’t work overnight. Impact built gradually over 60-90 days as Google recrawled and reindexed.

Key insight: Schema markup is a technical SEO lever that improves both ranking and CTR. The effort is modest but the impact is meaningful.

Experiment 8: Page Speed and Core Web Vitals

Hypothesis: Improving Core Web Vitals (LCP, FID, CLS) from “Needs Improvement” to “Good” would improve rankings.

Test setup:

  • Identified 35 pages with “Needs Improvement” CWV scores
  • Optimized images (WebP, proper sizing, lazy loading)
  • Improved server response time (caching, CDN)
  • Fixed layout shifts (explicit dimensions on images, font-display: swap)
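The pass/fail boundary follows Google's published "Good" thresholds (LCP ≤ 2.5 s, FID ≤ 100 ms, CLS ≤ 0.1). This simplified check collapses everything below "Good" into one bucket; the real scale also distinguishes a "Poor" band:

```python
# Google's published "Good" thresholds for the three Core Web Vitals.
GOOD = {"lcp_s": 2.5, "fid_ms": 100, "cls": 0.1}

def cwv_status(lcp_s, fid_ms, cls):
    """'good' only if all three metrics pass; otherwise 'needs improvement'."""
    if lcp_s <= GOOD["lcp_s"] and fid_ms <= GOOD["fid_ms"] and cls <= GOOD["cls"]:
        return "good"
    return "needs improvement"

print(cwv_status(4.2, 180, 0.24))  # the "before" numbers from the table
print(cwv_status(2.1, 45, 0.05))   # the "after" numbers
```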

Results after 90 days:

| Metric | Before Optimization | After Optimization | Change |
|---|---|---|---|
| LCP | 4.2s | 2.1s | -2.1s |
| FID | 180ms | 45ms | -135ms |
| CLS | 0.24 | 0.05 | -0.19 |
| Avg Position | 18.3 | 15.8 | -2.5 positions |
| Traffic | 8,200/mo | 9,600/mo | +17% |

What I learned: Page speed improvements correlated with ranking improvements. The pages that went from “Needs Improvement” to “Good” saw an average position improvement of 2.5.

The nuance: Correlation doesn’t prove causation. It’s possible that pages with better technical implementation were already higher quality. But the consistency of results across 35 pages suggests speed is a factor.

Key insight: Core Web Vitals matter for ranking. For pages with poor CWV scores, optimization is worth the investment.

Key Findings Summary

Here are the consolidated findings from all experiments:

| Experiment | Impact Level | Confidence | Actionable? |
|---|---|---|---|
| AI vs Human Content | Neutral | High | Don’t worry about content source |
| AI Content Optimization | High positive | High | Implement systematic optimization |
| Internal Link Restructure | Positive | High | Restructure cluster linking |
| Content Freshness | High positive | High | Establish refresh schedule |
| E-E-A-T Signals | Positive | Medium | Enhance author profiles and citations |
| AI Meta Descriptions | High positive | High | Implement AI optimization |
| Schema Markup | Positive | High | Add comprehensive schema |
| Page Speed | Positive | Medium | Optimize Core Web Vitals |

Bottom line: The highest-impact changes were:
1. AI content optimization (existing articles)
2. Content freshness (regular updating)
3. Meta description optimization
4. Internal linking structure

The lowest-impact changes were:
1. AI vs human content source (essentially irrelevant if quality is equal)

Strategy Evolution: What Changed

Based on experiments, I completely revamped my SEO strategy from content-first to optimization-first.

Before Experiments (2024 Strategy)

  • Publish new articles frequently
  • Target new keywords with new content
  • Minimal updates to existing articles
  • Basic meta tags
  • Simple schema

After Experiments (2026 Strategy)

  • Optimization-first: Review existing content before publishing new
  • Refresh cycle: Every article gets refreshed at least quarterly
  • Cluster linking: Systematic internal link structure
  • Meta optimization: AI-generated meta descriptions for all pages
  • Technical SEO: Core Web Vitals optimization is priority
  • E-E-A-T building: Author credentials prominently displayed

Content Creation vs Optimization Ratio

Before experiments: 80% creation, 20% optimization
After experiments: 30% creation, 70% optimization

This was the biggest shift. Publishing new content had diminishing returns compared to optimizing existing content.

What Didn’t Work

No honest experiment report should hide failures. Here are experiments that didn’t produce meaningful results:

Experiment A: Social Signals and SEO

Hypothesis: Increased social shares would correlate with improved rankings.

Results: No statistically significant correlation between social activity and search rankings. Pages with high social shares didn’t rank better than similar pages with low social shares.

Lesson: Social signals may be correlation, not causation. Or Google may not weight them significantly.

Experiment B: Guest Post Outreach

Hypothesis: Building backlinks through guest posting would improve rankings.

Results: Guest post links had minimal impact on rankings. The few that worked were from highly authoritative sites, but most guest post links showed no measurable ranking benefit.

Lesson: Guest posting for links is largely dead as an SEO strategy. High-quality editorial links from genuinely relevant sites still work, but mass guest posting doesn’t.

Experiment C: Keyword Density Optimization

Hypothesis: Optimizing content for exact keyword matches would improve rankings.

Results: No measurable benefit from keyword density optimization. Pages that were optimized for exact keyword matches didn’t rank better than pages without such optimization.

Lesson: Keyword stuffing is irrelevant. Focus on comprehensive coverage of topic, not keyword repetition.

Conclusion: Current SEO Strategy

After 12 months of systematic experimentation, here’s my current SEO approach:

The Foundation:
1. Quality content that comprehensively covers topics
2. Regular refresh cycle for existing content (quarterly minimum)
3. Strong internal linking structure around topic clusters
4. Technical excellence (Core Web Vitals, mobile-friendly, fast loading)

The AI Leverage Points:
1. AI content optimization is the highest-ROI activity
2. AI meta description generation takes minutes but improves CTR significantly
3. AI-assisted content audits identify optimization opportunities faster

The Don’t-Worry-About List:
1. Content source (AI vs human) if quality is good
2. Exact keyword density
3. Social signals for ranking
4. Mass guest post link building

The Ongoing Tests: SEO isn’t static. I’m currently testing:

  • Entity-based SEO vs keyword-based
  • AI-generated FAQ sections
  • Interactive content impact (calculators, tools)

The core insight: SEO success comes from compound returns on optimization. Every article you optimize today continues producing results for months. That’s a better ROI than constantly chasing new content.

Related Articles

  • [7 Best Open-Source LLMs 2026: Deep Analysis](/archives/2590.html)
  • [Building AI Agentic Workflows: My Automation Stack 2026](/archives/2591.html)
  • [How I Built $3K/Month AI Freelance Business in 2026](/archives/2592.html)

*Have you run SEO experiments on your site? Share your methodology and results in the comments—I read every response and the discussion often surfaces insights I missed.*
