Testing
Test Coverage
The API Gateway has 40 passing tests covering all components:
- 2 health endpoint tests
- 9 AI endpoint tests (mocked AI service)
- 13 AI client tests
- 7 database model tests
- 8 schema validation tests
- 1 integration test
Running Tests
Run All Tests
```bash
uv run pytest
```
Run with Coverage
```bash
uv run pytest --cov=src --cov-report=html
```
View coverage report:
```bash
open htmlcov/index.html
```
Run Specific Test File
```bash
uv run pytest src/api/health_test.py
uv run pytest src/api/ai_test.py
uv run pytest src/core/ai_client_test.py
uv run pytest src/database/models_test.py
uv run pytest src/schemas/anamnesis_test.py
```
Run Specific Test
```bash
uv run pytest src/api/health_test.py::test_health_endpoint
```
Testing Strategy
Unit Tests
External Dependencies Mocked
- AI Service responses are mocked
- Azure Storage client is mocked
- Focus on testing business logic in isolation
Example:
```python
@pytest.mark.asyncio
async def test_predict_diagnosis(client, mock_ai_service):
    mock_ai_service.return_value = {
        "diagnoses": [{"disease": "Psoriasis", "probability": 0.85}]
    }
    response = client.post("/api/ai/predict_diagnosis", json={...})
    assert response.status_code == 200
```
Integration Tests
Real Database via Testcontainers
- Tests run against real PostgreSQL
- Database spins up in Docker
- Migrations applied automatically
- Clean slate for each test
Example:
```python
@pytest.mark.asyncio
async def test_database_integration(db_session):
    case = RnRequest(uuid="test-123", status=0)
    db_session.add(case)
    await db_session.commit()
    result = await db_session.get(RnRequest, "test-123")
    assert result.uuid == "test-123"
```
Test Fixtures
Location: src/conftest.py
Key fixtures available in all tests:
client
FastAPI test client with app context:
```python
def test_endpoint(client):
    response = client.get("/healthy")
    assert response.status_code == 200
```
db_session
Async database session with testcontainers:
```python
@pytest.mark.asyncio
async def test_with_db(db_session):
    # Use db_session for database operations
    pass
```
mock_ai_service
Mocked AI service client:
```python
def test_ai_call(client, mock_ai_service):
    mock_ai_service.predict_diagnosis.return_value = {...}
    # Test code
```
Test Organization
```
src/
├── api/
│   ├── ai.py              # Route handlers
│   ├── ai_test.py         # Tests for ai.py
│   ├── health.py
│   └── health_test.py
├── core/
│   ├── ai_client.py
│   └── ai_client_test.py
├── database/
│   ├── models.py
│   └── models_test.py
└── schemas/
    ├── anamnesis.py
    └── anamnesis_test.py
```
Tests are co-located with the code they test for easy navigation.
Performance
Tests run in approximately 15 seconds:
- Unit tests (mocked): ~5 seconds
- Integration tests (PostgreSQL): ~10 seconds
Fast feedback loop for development.
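To keep that feedback loop tight, the slower container-backed tests can be separated from the mocked unit tests with pytest markers. This is a hypothetical configuration, not the repository's actual setup:

```toml
# pyproject.toml (assumed location of the pytest configuration)
[tool.pytest.ini_options]
markers = [
    "integration: tests that start the PostgreSQL testcontainer",
]
```

With integration tests marked accordingly, `uv run pytest -m "not integration"` would run only the fast mocked suite during development.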
Continuous Integration
Tests run automatically on:
- Every push to GitHub
- Every pull request
- Before merge to main
CI configuration ensures:
- All tests pass
- Code coverage meets threshold
- No linting errors
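A minimal workflow matching the triggers above might look like the following sketch; the file path, job layout, and uv setup action are assumptions, not the repository's actual CI configuration:

```yaml
# .github/workflows/tests.yml (hypothetical)
name: tests
on:
  push:
  pull_request:
jobs:
  test:
    runs-on: ubuntu-latest  # Docker is available for testcontainers
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v4
      - run: uv sync
      - run: uv run pytest --cov=src
```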