
Testing Strategy - CROP Frontend Platform

Last Updated: 2025-11-08 | Status: Active | Coverage Target: >90% for critical paths


Overview

This document outlines the comprehensive testing strategy for the CROP frontend platform, with special focus on the Search Service API integration. Our testing approach combines unit tests, integration tests, E2E tests, and performance benchmarks to ensure reliability and performance.

Testing Philosophy

Core Principles:

  • Write tests alongside implementation (not as an afterthought)
  • Focus on critical user paths and business logic
  • Maintain >90% coverage for core functionality
  • Use appropriate testing tools for each layer
  • Keep tests fast, reliable, and maintainable

Test Pyramid:

            /\
           /  \            E2E Tests (Playwright)
          /    \           - Critical user flows
         /------\          - Cross-browser testing
        /        \
       /          \
      / Integration\       Integration Tests (Vitest)
     /    Tests     \      - API client integration
    /----------------\     - Component interactions
   /                  \
  /     Unit Tests     \   Unit Tests (Vitest)
 /----------------------\  - Pure functions
                           - Adapters, type guards
                           - Business logic

Test Coverage Status

Current Status: 93/97 tests passing (95% pass rate, >90% coverage)

Breakdown by Category

| Category          | Tests | Passing | Coverage | Status                       |
|-------------------|-------|---------|----------|------------------------------|
| Unit Tests        | 75    | 72      | 92%      | ⚠️ 3 failures (pre-existing) |
| Integration Tests | 15    | 15      | 100%     | ✅ All passing               |
| E2E Tests         | 7     | 6       | 85%      | ⚠️ 1 failure (pre-existing)  |
| Total             | 97    | 93      | 91%      | ⚠️ 95% passing               |

Note: All 4 test failures are pre-existing and NOT related to search-service integration work.


Unit Testing Strategy

Philosophy

Unit tests focus on isolated, pure functions with no external dependencies. We test:

  • Adapters (backend-to-frontend transformations)
  • Type guards (payload type validation)
  • Utility functions (formatting, parsing, validation)
  • Business logic (calculations, derivations)

Tools & Configuration

Test Runner: Vitest
Location: Co-located with source files (*.test.ts)
Coverage: Enabled with the --coverage flag

Run Commands:

# Run all unit tests
bun test

# Run with coverage
bunx vitest run --coverage

# Watch mode (development)
bunx vitest watch

# Run specific test file
bunx vitest run path/to/file.test.ts

Test Examples

1. Adapter Functions

File: lib/search-service/adapters/part-adapter.test.ts

What We Test:

  • Backend-provided fields are preserved
  • Missing fields are derived correctly
  • Fallback logic works when backend doesn't provide fields
  • Edge cases (null, undefined, empty arrays)

Example:

import { describe, it, expect } from "vitest";
import { enhanceManufacturer, adaptBackendPart } from "./part-adapter";

describe("enhanceManufacturer", () => {
  it("preserves backend-provided navigation fields", () => {
    const input = {
      name: "New Holland",
      code: "NH",
      slug: "new-holland",
      catalogHref: "/manufacturers/new-holland",
      partsHref: "/parts?manufacturer=new-holland"
    };

    const result = enhanceManufacturer(input);
    expect(result).toEqual(input);
  });

  it("derives navigation fields when missing", () => {
    const input = { name: "New Holland", code: "NH" };

    const result = enhanceManufacturer(input);

    expect(result.slug).toBe("new-holland");
    expect(result.catalogHref).toBe("/manufacturers/new-holland");
    expect(result.partsHref).toContain("manufacturer=new-holland");
  });

  it("handles null manufacturer gracefully", () => {
    const input = null;
    const result = enhanceManufacturer(input);
    expect(result).toBeNull();
  });
});
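
For reference, the fallback derivation exercised by these tests can be sketched as a pure function. This is an illustrative reimplementation based on the test fixtures above, not the actual code in part-adapter.ts:

```typescript
// Illustrative sketch of the derivation logic tested above; the real
// implementation lives in lib/search-service/adapters/part-adapter.ts.
interface Manufacturer {
  name: string;
  code: string;
  slug?: string;
  catalogHref?: string;
  partsHref?: string;
}

// Lower-case the name and collapse non-alphanumeric runs into hyphens
function slugify(name: string): string {
  return name
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}

function enhanceManufacturer(input: Manufacturer | null): Manufacturer | null {
  if (input === null) return null; // pass null through untouched
  const slug = input.slug ?? slugify(input.name);
  return {
    ...input,
    slug,
    // Backend-provided hrefs win; otherwise derive them from the slug
    catalogHref: input.catalogHref ?? `/manufacturers/${slug}`,
    partsHref: input.partsHref ?? `/parts?manufacturer=${slug}`
  };
}
```

Because every branch is a pure expression, the three test cases above (preserve, derive, null) cover the whole function.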

2. Type Guards

File: lib/search-service/types.test.ts

What We Test:

  • Type guards correctly identify payload types
  • Type guards reject invalid payloads
  • Edge cases (missing optional fields, extra fields)

Example:

import { describe, it, expect } from "vitest";
import {
  isPartSuggestionPayload,
  isPartMatchPayload,
  isEquipmentSuggestionPayload,
  isQuerySuggestionPayload
} from "./types";

describe("Type Guards", () => {
  describe("isPartSuggestionPayload", () => {
    it("returns true for basic part payload", () => {
      const payload = { id: "abc123", slug: "test-part" };
      expect(isPartSuggestionPayload(payload)).toBe(true);
    });

    it("returns false for part match payload", () => {
      const payload = {
        partNumber: "100715",
        manufacturer: "New Holland",
        thumbnail: "https://...",
        price: 99.99,
        inStock: true
      };
      expect(isPartSuggestionPayload(payload)).toBe(false);
    });

    it("returns false for null/undefined", () => {
      expect(isPartSuggestionPayload(null)).toBe(false);
      expect(isPartSuggestionPayload(undefined)).toBe(false);
    });
  });
});
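
A guard like isPartSuggestionPayload can be written with plain structural checks. The payload shape below is inferred from the fixtures above and is only a sketch; the real definitions live in lib/search-service/types.ts:

```typescript
// Hypothetical payload shape inferred from the test fixtures above; the
// real definitions live in lib/search-service/types.ts.
interface PartSuggestionPayload {
  id: string;
  slug: string;
}

function isPartSuggestionPayload(value: unknown): value is PartSuggestionPayload {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  // A basic suggestion carries id/slug but none of the richer
  // part-match fields such as partNumber.
  return (
    typeof v.id === "string" &&
    typeof v.slug === "string" &&
    !("partNumber" in v)
  );
}
```

The `value is PartSuggestionPayload` return type lets callers narrow a union of payload types without casting.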

3. Utility Functions

File: features/search-bar/model/suggestion-helpers.test.ts

What We Test:

  • Formatting functions produce correct output
  • Edge cases (empty strings, null values)
  • HTML escaping and sanitization

Example:

import { describe, it, expect } from "vitest";
import { formatEquipmentSubtitle, parseHighlightSegments } from "./suggestion-helpers";

describe("suggestion helpers", () => {
  it("formats equipment subtitle correctly", () => {
    const payload = {
      manufacturer: "New Holland",
      category: "Tractors",
      intent: "model"
    };

    const result = formatEquipmentSubtitle(payload);
    expect(result).toBe("New Holland | Tractors");
  });

  it("parses highlight segments into fragments", () => {
    const input = "Filter <em>ABC</em> kit";

    const result = parseHighlightSegments(input);

    expect(result).toEqual([
      { text: "Filter ", highlighted: false, start: 0 },
      { text: "ABC", highlighted: true, start: 7 },
      { text: " kit", highlighted: false, start: 10 }
    ]);
  });
});
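
One way to implement parseHighlightSegments is a single pass over the `<em>` markers, tracking offsets in the plain (tag-free) text. This is a hypothetical sketch consistent with the expected output above, not the actual helper:

```typescript
// Hypothetical sketch of parseHighlightSegments; the real helper lives in
// features/search-bar/model/suggestion-helpers.ts.
interface Segment {
  text: string;
  highlighted: boolean;
  start: number; // offset in the plain text, with tags stripped
}

function parseHighlightSegments(input: string): Segment[] {
  const segments: Segment[] = [];
  const re = /<em>(.*?)<\/em>/g;
  let cursor = 0; // index into the raw input string
  let start = 0;  // offset in the tag-free output text
  let m: RegExpExecArray | null;
  while ((m = re.exec(input)) !== null) {
    if (m.index > cursor) {
      const text = input.slice(cursor, m.index);
      segments.push({ text, highlighted: false, start });
      start += text.length;
    }
    segments.push({ text: m[1], highlighted: true, start });
    start += m[1].length;
    cursor = m.index + m[0].length;
  }
  if (cursor < input.length) {
    segments.push({ text: input.slice(cursor), highlighted: false, start });
  }
  return segments;
}
```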

Coverage Goals

Minimum Coverage: 90% for critical paths
Target Coverage: 95%+ for adapters and type guards

Priority Areas:

  1. Adapters - 95%+ (business-critical transformations)
  2. Type guards - 100% (safety-critical)
  3. API client - 90%+ (integration layer)
  4. Utilities - 85%+ (helper functions)
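
These targets can also be enforced mechanically so CI fails when coverage regresses. A sketch of what that might look like in vitest.config.ts; the threshold numbers match the list above, but the globs are illustrative, not the project's actual configuration:

```typescript
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    coverage: {
      provider: "v8",
      thresholds: {
        // Global floor for critical paths
        lines: 90,
        functions: 90,
        branches: 90,
        statements: 90,
        // Per-glob overrides for the priority areas (paths illustrative)
        "lib/search-service/adapters/**": { lines: 95 },
        "lib/search-service/types.ts": { lines: 100 }
      }
    }
  }
});
```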

Integration Testing Strategy

Philosophy

Integration tests verify that multiple modules work together correctly. We test:

  • API client with adapters
  • Search flows with filters
  • State management interactions
  • Error handling across layers

Tools & Configuration

Test Runner: Vitest (same as unit tests)
Approach: Mock external APIs, test real integration between modules

Test Examples

API Client Integration

File: lib/search-service/client.test.ts

What We Test:

  • Adapters are applied to API responses
  • Error handling works correctly
  • Pagination state is managed
  • Filters are encoded properly

Example:

import { describe, it, expect, vi } from "vitest";
import { createSearchServiceClient } from "./client";

describe("searchParts integration", () => {
  it("applies adapters to search results", async () => {
    const client = createSearchServiceClient();
    const response = await client.searchParts({ q: "pump" });

    // Verify adapters were applied
    response.parts.forEach(part => {
      expect(part.vendorCode).toBeDefined();
      expect(part.manufacturer.slug).toBeDefined();
      expect(part.manufacturer.catalogHref).toBeDefined();
    });
  });

  it("handles empty search results", async () => {
    const client = createSearchServiceClient();
    const response = await client.searchParts({ q: "nonexistent12345" });

    expect(response.parts).toEqual([]);
    expect(response.totalParts).toBe(0);
  });
});
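
The filter-encoding behavior listed under "What We Test" can be isolated into a pure helper, which keeps it testable without any network. A hypothetical sketch whose parameter names mirror the examples in this document, not the actual client API:

```typescript
// Hypothetical filter shape mirroring the search examples in this document;
// the real client may encode filters differently.
interface SearchFilters {
  q: string;
  manufacturer?: string;
  inStock?: boolean;
  hasImage?: boolean;
}

// Serialize filter state into a query string, skipping unset filters
function encodeFilters(filters: SearchFilters): string {
  const params = new URLSearchParams();
  params.set("q", filters.q);
  if (filters.manufacturer !== undefined) params.set("manufacturer", filters.manufacturer);
  if (filters.inStock !== undefined) params.set("inStock", String(filters.inStock));
  if (filters.hasImage !== undefined) params.set("hasImage", String(filters.hasImage));
  return params.toString();
}
```

Because the helper is pure, a test can assert the exact query string sent to the backend without issuing a request.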

E2E Testing Strategy

Philosophy

E2E tests verify complete user journeys from the browser perspective. We test:

  • Critical user flows (search, filter, navigate)
  • Cross-browser compatibility
  • Accessibility compliance
  • Performance metrics

Tools & Configuration

Test Runner: Playwright
Browsers: Chromium, Firefox, WebKit
Location: e2e/ directory

Run Commands:

# Run all E2E tests
bunx playwright test

# Run specific test file
bunx playwright test e2e/search-flow.spec.ts

# Run in headed mode (see browser)
bunx playwright test --headed

# Run with UI mode (debug)
bunx playwright test --ui

# Generate test report
bunx playwright show-report

Test Examples

Search Flow

File: e2e/search-flow.spec.ts

What We Test:

  • User can search for parts
  • Filters can be applied and cleared
  • Results update correctly
  • Pagination works

Example:

import { test, expect } from "@playwright/test";

test.describe("Parts Search Flow", () => {
  test("user can search and filter parts", async ({ page }) => {
    await page.goto("/parts");

    // Enter search query
    await page.fill('[placeholder="Search for parts..."]', "pump");
    await page.press('[placeholder="Search for parts..."]', "Enter");

    // Wait for results
    await page.waitForSelector('[data-testid="part-card"]');

    // Apply manufacturer filter
    await page.click('[data-testid="filter-manufacturer-new-holland"]');

    // Verify results update
    await page.waitForSelector('[data-testid="part-card"]');
    const count = await page.locator('[data-testid="part-card"]').count();
    expect(count).toBeGreaterThan(0);

    // Verify filter is applied
    const manufacturer = await page.textContent('[data-testid="active-filter-manufacturer"]');
    expect(manufacturer).toContain("New Holland");
  });
});

Autocomplete Direct Navigation

File: e2e/autocomplete.spec.ts

What We Test:

  • Autocomplete suggestions appear
  • User can click suggestion to navigate
  • Direct navigation works for part numbers
  • Analytics events fire

Example:

import { test, expect } from "@playwright/test";

test.describe("Autocomplete Direct Navigation", () => {
  test("navigates directly to part detail from part number suggestion", async ({ page }) => {
    await page.goto("/parts");

    // Type part number
    await page.fill('[placeholder="Search for parts..."]', "100715");

    // Wait for autocomplete
    await page.waitForSelector('[data-testid="autocomplete-suggestion"]');

    // Click first part number suggestion
    await page.click('[data-testid="suggestion-part-number"]');

    // Should navigate to part detail page
    await expect(page).toHaveURL(/\/parts\/[a-z0-9-]+/);

    // Verify part details loaded
    await expect(page.locator('h1')).toContainText("100715");
  });
});

Coverage Goals

Critical Flows (Must have E2E tests):

  • ✅ Search with query
  • ✅ Apply filters (manufacturer, category, price)
  • ✅ Autocomplete direct navigation
  • ✅ Part detail page load
  • ✅ 360° viewer interaction
  • 🔄 Add to cart (future)

Performance Benchmarking Strategy

Philosophy

Performance benchmarks measure API response times and page load metrics. We track:

  • Search API response times
  • Autocomplete latency
  • Page load performance
  • Bundle size

Tools & Configuration

Benchmark Runner: Vitest (bench mode)
File: scripts/benchmark-search-api.bench.ts

Run Commands:

# Run benchmarks
bun benchmark

# Run with detailed output
bunx vitest bench --reporter=verbose

Benchmark Examples

File: scripts/benchmark-search-api.bench.ts

What We Measure:

  • Simple search (no filters): Target <200ms
  • Complex search (3+ filters): Target <300ms
  • Autocomplete: Target <100ms
  • Facets loading: Target <150ms
  • Part detail fetch: Target <100ms

Example:

import { bench, describe } from "vitest";
import { createSearchServiceClient } from "@/lib/search-service/client";

const client = createSearchServiceClient({ timeoutMs: 5000 });

describe("Search API Performance", () => {
  bench("simple search (no filters)", async () => {
    await client.searchParts({ q: "pump" });
  }, { iterations: 100 });

  bench("complex search (3+ filters)", async () => {
    await client.searchParts({
      q: "pump",
      manufacturer: "new-holland",
      inStock: true,
      hasImage: true
    });
  }, { iterations: 100 });

  bench("autocomplete latency", async () => {
    await client.fetchAutocomplete({ q: "100715" });
  }, { iterations: 100 });

  bench("facets loading", async () => {
    await client.fetchFilters({ q: "pump" });
  }, { iterations: 50 });

  bench("part detail fetch", async () => {
    await client.getPartById("nh-100715-68f943e0");
  }, { iterations: 100 });
});

Performance Targets

| Operation      | Target | Current | Status        |
|----------------|--------|---------|---------------|
| Simple search  | <200ms | TBD     | 🔄 To measure |
| Complex search | <300ms | TBD     | 🔄 To measure |
| Autocomplete   | <100ms | TBD     | 🔄 To measure |
| Facets loading | <150ms | TBD     | 🔄 To measure |
| Part detail    | <100ms | TBD     | 🔄 To measure |

Note: The benchmark suite exists but is currently skipped; it will be enabled and executed in Phase 2.


CI/CD Integration

GitHub Actions Workflow

File: .github/workflows/ci.yml

What Runs on CI:

  1. Biome lint check
  2. TypeScript type check (tsc --noEmit)
  3. Production build (bun run build)
  4. Security scan (gitleaks)
  5. Unit tests (bun test)
  6. E2E tests (bunx playwright test) - on main/dev branches only

Triggers:

  • Push to main or dev branches
  • Pull requests to main

Pre-commit Hooks

Tool: Husky + lint-staged

What Runs on Commit:

  1. Biome lint + format on staged *.{ts,tsx,js,jsx,json,md} files
  2. TypeScript type check on staged *.{ts,tsx} files with tsc-files

Setup:

# Hooks installed automatically on bun install
# Configured in .husky/pre-commit

QA Process

Development Workflow

  1. Write tests first (TDD approach recommended)
  2. Implement feature alongside tests
  3. Run local tests: bun test and bunx playwright test
  4. Check coverage: bunx vitest run --coverage
  5. Commit changes: Pre-commit hooks run automatically
  6. Create PR: CI pipeline runs full test suite
  7. Code review: Tests must pass before merge
  8. Merge to main: Deploy to staging
  9. QA verification: Manual testing on staging
  10. Deploy to production: After QA approval

Manual QA Checklist

Before Production Deployment:

  • All critical flows tested manually
  • Cross-browser testing (Chrome, Firefox, Safari)
  • Mobile responsiveness verified
  • Accessibility audit passed
  • Performance metrics acceptable
  • Analytics events firing correctly
  • Error handling working as expected
  • SEO tags correct

Production Verification

After Deployment:

  • Smoke test critical paths in production
  • Verify analytics data flowing
  • Check error monitoring (Sentry/etc)
  • Monitor performance metrics
  • Verify all facets working
  • Test autocomplete suggestions
  • Verify 360° viewers loading

Test Maintenance

When to Update Tests

Required Updates:

  • Backend API changes (update mocks and assertions)
  • New features added (add corresponding tests)
  • Bug fixes (add regression test)
  • Refactoring (ensure tests still pass)

Flaky Test Prevention

Strategies:

  1. Avoid time-dependent tests (use fixed timestamps)
  2. Mock external dependencies (API calls, timers)
  3. Use proper waits in E2E tests (avoid fixed delays)
  4. Isolate tests (no shared state between tests)
  5. Clean up after each test (reset mocks, clear cache)
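
Strategy 1 can be implemented by injecting the clock instead of reading Date.now() inside the function under test. A hypothetical sketch (isSuggestionStale and its TTL are invented for illustration):

```typescript
// Inject the clock so tests can freeze time; Date.now remains the default
// in production code.
type Clock = () => number;

function isSuggestionStale(
  fetchedAt: number,
  now: Clock = Date.now,
  ttlMs = 60_000
): boolean {
  return now() - fetchedAt > ttlMs;
}

// In a test, a frozen clock makes the result deterministic:
const frozenNow: Clock = () => 1_700_000_000_000;
const stale = isSuggestionStale(frozenNow() - 120_000, frozenNow); // true: 120s > 60s TTL
```

The same effect can be achieved with vi.useFakeTimers(), but explicit injection keeps the function pure and the dependency visible.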

Test Cleanup Checklist

Monthly Review:

  • Remove obsolete tests
  • Update snapshots if needed
  • Fix flaky tests
  • Update documentation
  • Review coverage gaps
  • Optimize slow tests

Resources

Documentation

  • vitest.config.ts - Vitest configuration
  • playwright.config.ts - Playwright configuration
  • .github/workflows/ci.yml - CI pipeline
  • .husky/pre-commit - Pre-commit hooks
  • scripts/benchmark-search-api.bench.ts - Performance benchmarks

Commands Reference

# Unit Tests
bun test                          # Run all tests
bunx vitest run --coverage        # With coverage
bunx vitest watch                 # Watch mode

# E2E Tests
bunx playwright test              # Run all E2E tests
bunx playwright test --headed     # See browser
bunx playwright test --ui         # Debug mode

# Benchmarks
bun benchmark                     # Run performance benchmarks

# Type Checking
bun run type-check                # TypeScript check

# Linting
bun run lint                      # Biome lint
bun run lint:fix                  # Auto-fix

# Full Check
bun run check:all                 # All checks before PR

Recommendations for Future

Phase 2 Priorities

  1. Fix Pre-existing Test Failures

    • Update test assertions for v1.3.0 backend format
    • Fix mock data in client tests
    • Update 360° viewer test expectations
  2. Enable Performance Benchmarks

    • Remove .skip() from benchmark suite
    • Execute benchmarks and document results
    • Create docs/performance-benchmarks.md
  3. Expand E2E Coverage

    • Add tests for equipment mode
    • Add tests for quality filters
    • Add tests for PIT pagination
    • Add accessibility tests

Long-term Goals

  1. Visual Regression Testing

    • Add Playwright visual comparisons
    • Create baseline screenshots
    • Automate visual diffs in CI
  2. Load Testing

    • Add load tests for high traffic scenarios
    • Test concurrent user behavior
    • Identify bottlenecks
  3. Accessibility Testing

    • Automated a11y checks with axe-core
    • Screen reader testing
    • Keyboard navigation tests
  4. Security Testing

    • Add OWASP ZAP scanning
    • XSS/CSRF prevention tests
    • Dependency vulnerability scanning

Last Updated: 2025-11-08 | Next Review: Phase 2 (after benchmark execution) | Maintained By: Frontend Team
