
Testing

Test Coverage

The API Gateway has 40 passing tests covering all components:

  • 2 health endpoint tests
  • 9 AI endpoint tests (mocked AI service)
  • 13 AI client tests
  • 7 database model tests
  • 8 schema validation tests
  • 1 integration test

Running Tests

Run All Tests

uv run pytest

Run with Coverage

uv run pytest --cov=src --cov-report=html

View coverage report:

open htmlcov/index.html
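
Alternatively, pytest-cov can print a per-line summary directly in the terminal:

uv run pytest --cov=src --cov-report=term-missing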

Run Specific Test File

uv run pytest src/api/health_test.py
uv run pytest src/api/ai_test.py
uv run pytest src/core/ai_client_test.py
uv run pytest src/database/models_test.py
uv run pytest src/schemas/anamnesis_test.py

Run Specific Test

uv run pytest src/api/health_test.py::test_health_endpoint
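
pytest can also select tests by keyword expression, e.g. every test whose name contains "health":

uv run pytest -k health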

Testing Strategy

Unit Tests

External Dependencies Mocked

  • AI Service responses are mocked
  • Azure Storage client is mocked
  • Focus on testing business logic in isolation

Example:

@pytest.mark.asyncio
async def test_predict_diagnosis(client, mock_ai_service):
    mock_ai_service.return_value = {
        "diagnoses": [{"disease": "Psoriasis", "probability": 0.85}]
    }
    response = client.post("/api/ai/predict_diagnosis", json={...})
    assert response.status_code == 200

Integration Tests

Real Database via Testcontainers

  • Tests run against real PostgreSQL
  • Database spins up in Docker
  • Migrations applied automatically
  • Clean slate for each test

Example:

@pytest.mark.asyncio
async def test_database_integration(db_session):
    case = RnRequest(uuid="test-123", status=0)
    db_session.add(case)
    await db_session.commit()

    result = await db_session.get(RnRequest, "test-123")
    assert result.uuid == "test-123"

Test Fixtures

Location: src/conftest.py

Key fixtures available in all tests:

client

FastAPI test client with app context:

def test_endpoint(client):
    response = client.get("/healthy")
    assert response.status_code == 200
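
A minimal sketch of how this fixture might be defined in src/conftest.py (the src.main import path for the app is an assumption):

import pytest
from fastapi.testclient import TestClient

from src.main import app  # assumed location of the FastAPI app


@pytest.fixture
def client():
    # TestClient exercises the app in-process; no server is started
    return TestClient(app)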

db_session

Async database session with testcontainers:

@pytest.mark.asyncio
async def test_with_db(db_session):
    # Use db_session for database operations
    pass
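
A sketch of how a testcontainers-backed fixture might look (the container image, driver swap, and session wiring are assumptions; per the strategy above, the real fixture also applies migrations before yielding):

import pytest_asyncio
from testcontainers.postgres import PostgresContainer
from sqlalchemy.ext.asyncio import async_sessionmaker, create_async_engine


@pytest_asyncio.fixture
async def db_session():
    # Throwaway PostgreSQL container, torn down when the test finishes
    with PostgresContainer("postgres:16") as pg:
        url = pg.get_connection_url().replace("psycopg2", "asyncpg")
        engine = create_async_engine(url)
        # Migrations would run here before handing out the session
        async with async_sessionmaker(engine)() as session:
            yield session
        await engine.dispose()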

mock_ai_service

Mocked AI service client:

def test_ai_call(client, mock_ai_service):
    mock_ai_service.predict_diagnosis.return_value = {...}
    # Test code
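
A sketch of how this fixture might be wired (the patch target src.core.ai_client.client is hypothetical; the real target depends on how the client is constructed):

import pytest
from unittest.mock import AsyncMock


@pytest.fixture
def mock_ai_service(monkeypatch):
    # Async stand-in for the real AI service client
    service = AsyncMock()
    monkeypatch.setattr("src.core.ai_client.client", service)  # hypothetical path
    return service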

Test Organization

src/
├── api/
│   ├── ai.py              # Route handlers
│   ├── ai_test.py         # Tests for ai.py
│   ├── health.py
│   └── health_test.py
├── core/
│   ├── ai_client.py
│   └── ai_client_test.py
├── database/
│   ├── models.py
│   └── models_test.py
└── schemas/
    ├── anamnesis.py
    └── anamnesis_test.py

Tests are co-located with the code they test for easy navigation.

Performance

Tests run in approximately 15 seconds:

  • Unit tests (mocked): ~5 seconds
  • Integration tests (PostgreSQL): ~10 seconds

Fast feedback loop for development.

Continuous Integration

Tests run automatically on:

  • Every push to GitHub
  • Every pull request
  • Before merge to main

CI configuration ensures:

  • All tests pass
  • Code coverage meets threshold
  • No linting errors
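
A sketch of the commands such a CI job might run (the 80% threshold and the choice of ruff as linter are assumptions, not confirmed project settings):

uv run pytest --cov=src --cov-fail-under=80
uv run ruff check src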