Implement apix-registry with IoT sunset/decommission lifecycle and full BDD suite
- REST API: register, patch, O-level, replacements, history, search endpoints
- IoT lifecycle validations: future sunset, lock-before-release, sunset-passed-before-decommission
- DB schema: Liquibase changesets 001–008 (services, versions, replacements, sunset-at column)
- @ColumnTransformer(write="?::jsonb") on bsm_payload fields to avoid JDBC varchar→jsonb rejection
- Jandex plugin on apix-common + quarkus.index-dependency so @NotBlank validators resolve at runtime
- quarkus-logging-json extension added; quarkus.log.console.json=false is now a recognised key
- Fix requireSunsetBeforeLockRelease: Boolean.TRUE.equals instead of !Boolean.FALSE.equals (null guard)
- BDD suite: 27 scenarios / 213 steps across 5 feature files (sunset-lock, decommission, replacement, discovery, anonymity)
- Test infrastructure: JDBC TRUNCATE in @Before for DB isolation, Arc.container() for clock control — no test endpoints in production code
- sunsetAt truncated to microseconds in BDD steps to match Postgres timestamptz precision
- Cucumber step fixes: singular/plural candidate(s), lastResponse propagation in replacementsReturnsNCandidates

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
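Two of the fixes in the commit message are subtle enough to be worth spelling out. A minimal sketch of both — the method name `requireSunsetBeforeLockRelease` comes from the commit itself; the wrapper class is illustrative:

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class CommitFixSketch {

    // Null-safe flag check. The old expression !Boolean.FALSE.equals(flag)
    // treats a null flag as true, silently enabling the validation when the
    // flag was never set. Boolean.TRUE.equals(flag) treats null as false,
    // which is the intended default.
    static boolean requireSunsetBeforeLockRelease(Boolean flag) {
        return Boolean.TRUE.equals(flag);
    }

    // Postgres timestamptz stores microsecond precision, while Instant.now()
    // on Java 21 can carry nanoseconds — so a round-tripped value differs
    // from the original unless the test truncates before comparing.
    static Instant toDbPrecision(Instant t) {
        return t.truncatedTo(ChronoUnit.MICROS);
    }
}
```

Note the asymmetry: `!Boolean.FALSE.equals(null)` evaluates to `true`, while `Boolean.TRUE.equals(null)` evaluates to `false` — the whole bug is in that difference.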
@@ -0,0 +1,37 @@
# APIX local development environment
# Copy to .env and adjust values before running setup-dev.sh.
# NEVER commit .env to version control.

# ── Database ──────────────────────────────────────────────────────────────────
APIX_DB_USER=apix
APIX_DB_PASSWORD=apix
APIX_DB_NAME=apix
APIX_DB_PORT=5432

# Quarkus datasource (consumed by apix-registry and apix-spider)
QUARKUS_DATASOURCE_JDBC_URL=jdbc:postgresql://localhost:5432/apix
QUARKUS_DATASOURCE_USERNAME=apix
QUARKUS_DATASOURCE_PASSWORD=apix

# ── API security ──────────────────────────────────────────────────────────────
# Protects write endpoints (POST /api/register, etc.)
# Change to a random string in any non-local environment.
APIX_API_KEY=dev-insecure-key-change-in-prod

# ── External APIs (optional in local dev — leave blank to disable) ─────────────
GLEIF_API_URL=https://api.gleif.org/api/v1
OPENCORPORATES_API_KEY=

# Path where the spider caches downloaded sanctions lists
SANCTIONS_CACHE_PATH=./sanctions-cache

# ── Spider ────────────────────────────────────────────────────────────────────
# Check interval in minutes (2 = dev, 15 = prod)
SPIDER_INTERVAL_MINUTES=2

# ── Portal → Registry ─────────────────────────────────────────────────────────
REGISTRY_BASE_URL=http://localhost:8180

# ── Logging ───────────────────────────────────────────────────────────────────
# DEBUG in dev; INFO in prod
LOG_LEVEL=DEBUG
@@ -0,0 +1,24 @@
# Script run logs
logs/

# flatten-maven-plugin resolves ${revision} into installed POMs
.flattened-pom.xml

# Local environment — never commit real credentials
.env

# Maven build output
target/
**/target/

# Sanctions list cache (downloaded at runtime)
sanctions-cache/

# IDE
.idea/
*.iml
.vscode/

# OS
.DS_Store
Thumbs.db
@@ -0,0 +1,571 @@
# APIX MVP — Work Log

**Goal:** Deployable, publicly queryable APIX registry PoC on Hetzner by end of 2026.

**Constraint:** Solo development, LLM-assisted. Security hygiene (not hardening). Speed over completeness.

**Success criterion:** A reviewer can open a public URL, run a capability query, see a result from a registered service, and confirm the Spider has checked it.

---

## Status Legend

| Symbol | Meaning |
|---|---|
| `[ ]` | Not started |
| `[~]` | In progress |
| `[x]` | Done |
| `[!]` | Blocked / decision needed |

---

## Arc42 Documentation Deliverables

### 1. Introduction and Goals → `docs/arc42/01-introduction-goals.md`

| # | Deliverable | Status |
|---|---|---|
| D-01 | MVP goal statement (what must be provable at the end) | `[ ]` |
| D-02 | Quality goals table (top 3–5, measurable) | `[ ]` |
| D-03 | Stakeholder table (STF reviewer, agent developer, service registrant, BSF) | `[ ]` |
| D-04 | Explicit out-of-scope list for MVP (billing, full trust model, multi-region) | `[ ]` |

---

### 2. Architecture Constraints → `docs/arc42/02-constraints.md`

| # | Deliverable | Status |
|---|---|---|
| D-05 | Technical constraints (Hetzner, Docker Compose, Python, PostgreSQL, open source stack) | `[ ]` |
| D-06 | Organisational constraints (solo dev, LLM-assisted, public GitHub repo required) | `[ ]` |
| D-07 | Regulatory constraints (HTTPS mandatory, GDPR-lite: no PII stored beyond registrant email) | `[ ]` |
| D-08 | Convention constraints (HATEOAS API style, IETF Internet-Draft alignment) | `[ ]` |

---

### 3. Context and Scope → `docs/arc42/03-context-scope.md`

| # | Deliverable | Status |
|---|---|---|
| D-09 | System context diagram — PlantUML, external actors: Agent, Service Registrant, Spider, Admin | `[ ]` |
| D-10 | External interface table (what each actor sends/receives) | `[ ]` |
| D-11 | Technical context diagram — PlantUML, network boundary (Caddy → API → DB, Spider → external services) | `[ ]` |

---

### 4. Solution Strategy → `docs/arc42/04-solution-strategy.md`

| # | Deliverable | Status |
|---|---|---|
| D-12 | Tech stack decision table with rationale (FastAPI, PostgreSQL JSONB, Caddy, HTMX) | `[ ]` |
| D-13 | Architectural pattern decisions (REST + HATEOAS, async Spider, single-process portal) | `[ ]` |
| D-14 | Quality goal → architecture decision mapping | `[ ]` |
| D-15 | MVP shortcuts explicitly listed (manual O-level assignment, no billing, no rate limiting beyond Caddy) | `[ ]` |

---

### 5. Building Block View → `docs/arc42/05-building-blocks.md`

| # | Deliverable | Status |
|---|---|---|
| D-16 | Level 1 diagram — PlantUML: API service, Spider service, Portal, PostgreSQL, Caddy | `[ ]` |
| D-17 | Level 2 diagram — API internals: router, service layer, BSM validator, DB adapter | `[ ]` |
| D-18 | Level 2 diagram — Spider internals: scheduler, fetcher, liveness evaluator, DB writer | `[ ]` |
| D-19 | Component responsibility table (one sentence per component) | `[ ]` |

---

### 6. Runtime View → `docs/arc42/06-runtime-view.md`

| # | Deliverable | Status |
|---|---|---|
| D-20 | Scenario 1: Agent queries registry by capability — PlantUML sequence diagram | `[ ]` |
| D-21 | Scenario 2: Service registrant submits BSM via portal — PlantUML sequence diagram | `[ ]` |
| D-22 | Scenario 3: Spider runs liveness check and updates service status — PlantUML sequence diagram | `[ ]` |
| D-23 | Scenario 4: Agent navigates index via HATEOAS links — PlantUML sequence diagram | `[ ]` |

---

### 7. Deployment View → `docs/arc42/07-deployment-view.md`

| # | Deliverable | Status |
|---|---|---|
| D-24 | Hetzner deployment diagram — PlantUML: VPS, Docker Compose services, Caddy, volumes | `[ ]` |
| D-25 | Environment table (dev / staging / prod differences) | `[ ]` |
| D-26 | Backup and restore strategy (pg_dump, Hetzner volume snapshot) | `[ ]` |
| D-27 | Domain and DNS setup notes | `[ ]` |

---

### 8. Crosscutting Concepts → `docs/arc42/08-crosscutting-concepts.md`

| # | Deliverable | Status |
|---|---|---|
| D-28 | Logging concept (structured JSON, levels, what is logged per component) | `[ ]` |
| D-29 | Error handling concept (HTTP error codes, error response schema) | `[ ]` |
| D-30 | Security hygiene concept (HTTPS via Caddy, API key on write endpoints, rate limiting, no PII logging) | `[ ]` |
| D-31 | BSM validation concept (schema version, required fields, validation error format) | `[ ]` |
| D-32 | Liveness check concept (what "live" means, check frequency, status transition rules) | `[ ]` |
| D-33 | Idempotency concept (re-registration behaviour, Spider re-check behaviour) | `[ ]` |
| D-33a | i18n concept (locale resolution order, @MessageBundle keys, help content localisation, language switcher) | `[ ]` |

---

### 9. Architecture Decisions → `docs/arc42/09-architecture-decisions.md`

| # | ADR | Status |
|---|---|---|
| D-34 | ADR-001: Python + FastAPI over Node/Go — rationale: AI ecosystem, speed, solo dev | `[ ]` |
| D-35 | ADR-002: PostgreSQL + JSONB over MongoDB — rationale: relational integrity for registry + flexible BSM payload | `[ ]` |
| D-36 | ADR-003: Caddy over nginx/traefik — rationale: auto-TLS, zero config, solo maintainability | `[ ]` |
| D-37 | ADR-004: HTMX + Jinja2 over React/Vue — rationale: no JS build pipeline, speed, portal is admin-grade not consumer-grade | `[ ]` |
| D-38 | ADR-005: Automated O-1/O-2/O-3 verification in MVP — DNS + GLEIF + OpenCorporates + HTTP hygiene checks; O-4/O-5 post-MVP | `[ ]` |
| D-39 | ADR-006: Two-VPS Hetzner deployment — apix-app (Swarm) + apix-gitea (Gitea + CI) | `[ ]` |
| D-47 | ADR-007: Register verification APIs (GLEIF, OpenCorporates, EU Sanctions) as reference APIX entries | `[ ]` |
| D-48 | ADR-009: Maven multi-module with separated Spider module | `[ ]` |
| D-49 | ADR-010: Self-hosted Gitea primary; GitHub push mirror | `[ ]` |
| D-50 | ADR-011: Docker Swarm single-node for zero-downtime production deployment | `[ ]` |
| D-51 | ADR-012: Three-stage CI/CD pipeline (fast / native build / deploy) | `[ ]` |
| D-52 | ADR-013: Server-side i18n via Quarkus @MessageBundle; EN + DE; cookie + Accept-Language locale resolution | `[ ]` |
| D-53 | ADR-014: Client-side help overlay engine; server-rendered locale-aware tour content; 5 tours (register, search, trust, admin, agent-setup) | `[ ]` |

---

### 10. Quality Requirements → `docs/arc42/10-quality-requirements.md`

| # | Deliverable | Status |
|---|---|---|
| D-40 | Quality tree (functionality, reliability, security hygiene, operability) | `[ ]` |
| D-41 | Quality scenarios table (stimulus → response → measurable outcome) | `[ ]` |
| D-42 | MVP acceptance criteria (what "done" looks like — the STF reviewer test) | `[ ]` |

---

### 11. Risks and Technical Debt → `docs/arc42/11-risks-technical-debt.md`

| # | Deliverable | Status |
|---|---|---|
| D-43 | Risk register (chicken-and-egg, big tech competition, single point of failure, solo bus factor) | `[ ]` |
| D-44 | Technical debt log (MVP shortcuts accepted, with explicit exit path for each) | `[ ]` |
| D-45 | Mitigation actions per risk | `[ ]` |

---

### 12. Glossary → `docs/arc42/12-glossary.md`

| # | Deliverable | Status |
|---|---|---|
| D-46 | Glossary: APIX, BSM, Spider, O-level, S-level, Liveness, AE, HATEOAS, DC-1 | `[ ]` |

---

## Code Deliverables

**Stack:** Java 21 + Quarkus 3.x, Maven multi-module. Five modules: two plain Java libraries (`apix-common`, `apix-verification`) and three independently deployable Quarkus apps (`apix-registry`, `apix-spider`, `apix-portal`). Spider runs as its own process — separate module, separate container, separate lifecycle. Build tool: Maven with parent POM as BOM. Tests: plain JUnit 5 for library modules; JUnit 5 + `@QuarkusTest` + RestAssured + WireMock for Quarkus modules.

Source root per module: `<module>/src/main/java/org/botstandards/apix/<module-suffix>/`

---

### Parent POM — `pom.xml`

| # | Deliverable | Status |
|---|---|---|
| C-00 | `pom.xml` — parent; imports Quarkus BOM; declares all five modules; manages `maven-compiler-plugin` (Java 21 release); `quarkus-maven-plugin` config inherited by Quarkus modules | `[ ]` |

---

### apix-common — Shared Library (plain Java 21)

No Quarkus dependency. Shared enums and DTOs used by all modules.

| # | Deliverable | Status |
|---|---|---|
| C-01 | `OLevel.java` — enum: UNVERIFIED, IDENTITY_VERIFIED, LEGAL_ENTITY_VERIFIED, HYGIENE_VERIFIED, OPERATIONALLY_VERIFIED, AUDITED | `[ ]` |
| C-02 | `LivenessStatus.java` — enum: PENDING, LIVE, DEGRADED, UNREACHABLE | `[ ]` |
| C-03 | `BsmPayload.java` — Java record; all BSM fields per Internet-Draft; Bean Validation annotations (`@NotBlank`, `@Valid`, `@URL`) | `[ ]` |
| C-04 | `ServiceSummaryDto.java` — Java record; fields exposed in search results (id, name, capabilities, olevel, liveness_status, endpoint, last_checked_at) | `[ ]` |
| C-05 | `VerificationResult.java` — Java record: olevel achieved, step that blocked (if any), message | `[ ]` |

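The C-01 trust ladder maps directly onto the portal's badge colouring (grey = O-0, blue = O-1/O-2/O-3, green = O-4/O-5 per the C-38 layout spec, with the CSS class names from C-65). A sketch of the enum plus that mapping — the `cssClass()` helper and the exact class-per-level grouping are inferred here, not committed API:

```java
public enum OLevel {
    UNVERIFIED,             // O-0: self-declared only
    IDENTITY_VERIFIED,      // O-1: DNS + business email
    LEGAL_ENTITY_VERIFIED,  // O-2: GLEIF / OpenCorporates
    HYGIENE_VERIFIED,       // O-3: security.txt, DMARC/SPF
    OPERATIONALLY_VERIFIED, // O-4: accredited human review
    AUDITED;                // O-5: SOC 2 / ISO 27001

    /** Badge colour class: grey for O-0, blue for O-1..O-3, green for O-4/O-5. */
    public String cssClass() {
        return switch (this) {
            case UNVERIFIED -> "trust-unverified";
            case IDENTITY_VERIFIED, LEGAL_ENTITY_VERIFIED, HYGIENE_VERIFIED -> "trust-basic";
            case OPERATIONALLY_VERIFIED, AUDITED -> "trust-strong";
        };
    }
}
```

Keeping the mapping on the enum (rather than in the template) is one way to make the C-68 assertion "O-0 produces colorClass `trust-unverified`" a one-line test.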
---

### apix-verification — Verification Library (plain Java 21)

No Quarkus dependency. Uses `java.net.http.HttpClient` and `dnsjava`. All classes are plain CDI-free POJOs — fully testable with plain JUnit. Depends on `apix-common`.

| # | Deliverable | What it does | O-level unlocked |
|---|---|---|---|
| C-06 | `O1DnsVerifier.java` | DNS TXT record lookup (dnsjava); business email MX check | O-1 |
| C-07 | `O2GleifVerifier.java` | GLEIF REST API via `java.net.http.HttpClient`; LEI lookup by company name + jurisdiction | O-2 |
| C-08 | `O2OpenCorporatesVerifier.java` | OpenCorporates REST API fallback for registrants without LEI; covers 130+ jurisdictions | O-2 (fallback) |
| C-09 | `O3HygieneVerifier.java` | HTTP fetch of `/.well-known/security.txt`; DNS DMARC + SPF check; policy URL reachability | O-3 |
| C-10 | `SanctionsScreener.java` | Screens name + jurisdiction against locally cached OFAC/EU/UN lists (CSV/XML read from file path) | Pre-condition for O-2 |
| C-11 | `VerificationPipeline.java` | Orchestrates O-1 → sanctions → O-2 → O-3 in sequence; returns `VerificationResult`; no I/O side effects (caller persists result) | All |
| C-12 | `VerificationConfig.java` | Plain POJO: gleifApiUrl, openCorporatesApiKey, sanctionsCachePath — injected by caller (Quarkus `@ConfigProperty` in registry) | — |

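C-11's sequencing rule — each stage gates the next, and a later failure never revokes a level already earned (cf. the C-47 test cases) — can be sketched in a few lines. The `BooleanSupplier` parameters stand in for the real verifier classes; the inner record mirrors C-05:

```java
import java.util.function.BooleanSupplier;

public class PipelineSketch {

    /** Mirrors C-05: highest O-level reached plus the step that stopped it, if any. */
    record VerificationResult(int oLevel, String blockedAt) {}

    /**
     * O-1 → sanctions → O-2 → O-3, stopping at the first failed gate.
     * A sanctions hit blocks O-2 but keeps O-1; an O-3 failure preserves O-2.
     */
    static VerificationResult run(BooleanSupplier o1Dns, BooleanSupplier sanctionsClear,
                                  BooleanSupplier o2LegalEntity, BooleanSupplier o3Hygiene) {
        if (!o1Dns.getAsBoolean())          return new VerificationResult(0, "O-1 DNS");
        if (!sanctionsClear.getAsBoolean()) return new VerificationResult(1, "sanctions");
        if (!o2LegalEntity.getAsBoolean())  return new VerificationResult(1, "O-2 legal entity");
        if (!o3Hygiene.getAsBoolean())      return new VerificationResult(2, "O-3 hygiene");
        return new VerificationResult(3, null);
    }
}
```

Because the pipeline itself performs no I/O (the caller persists the result), the C-47 tests can exercise every branch with lambdas — no WireMock needed for the sequencing logic itself.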
---

### apix-registry — REST API (Quarkus 3.x app)

Depends on `apix-common` + `apix-verification`. Owns the database schema (runs Liquibase at startup). Quarkus extensions: RESTEasy Reactive, Hibernate ORM + Panache, PostgreSQL JDBC, Liquibase, SmallRye Health, Quarkus Security.

#### Resource layer

| # | Deliverable | Status |
|---|---|---|
| C-13 | `resource/IndexResource.java` — `GET /` HATEOAS root; navigation links JSON | `[ ]` |
| C-14 | `resource/ServiceResource.java` — `GET /services` (capability search + filters), `GET /services/{id}` | `[ ]` |
| C-15 | `resource/RegisterResource.java` — `POST /register` (API-key protected); triggers `VerificationOrchestrator` asynchronously | `[ ]` |

#### Service layer

| # | Deliverable | Status |
|---|---|---|
| C-16 | `service/RegistryService.java` — register (UPSERT on endpoint URL), search by capability (JPQL + JSONB), dedup | `[ ]` |
| C-17 | `service/VerificationOrchestrator.java` — CDI bean; injects `VerificationConfig` from Quarkus `@ConfigProperty`; calls `VerificationPipeline`; persists `VerificationResult` to `ServiceRecord`; fires admin notification event if manual step needed | `[ ]` |

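C-16's register semantics — UPSERT keyed on the unique endpoint URL, so re-registration updates rather than duplicates (cf. the C-49 "duplicate UPSERT" test) — reduce to the following, sketched against an in-memory map instead of the real Panache repository:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class UpsertSketch {

    record ServiceRecord(String endpointUrl, String bsmPayloadJson) {}

    /** endpoint_url is the unique key (001-initial-schema); re-registering replaces the payload. */
    static final Map<String, ServiceRecord> services = new LinkedHashMap<>();

    /** Returns true on a fresh insert, false when an existing record was updated. */
    static boolean register(String endpointUrl, String bsmPayloadJson) {
        return services.put(endpointUrl, new ServiceRecord(endpointUrl, bsmPayloadJson)) == null;
    }
}
```

The insert/update distinction is what makes registration idempotent (D-33): submitting the same BSM twice leaves exactly one record.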
#### Model + repository

| # | Deliverable | Status |
|---|---|---|
| C-18 | `model/ServiceRecord.java` — Panache entity; `services` table; JSONB via `@JdbcTypeCode(SqlTypes.JSON)` for `bsm_payload` | `[ ]` |
| C-19 | `repository/ServiceRepository.java` — capability search query; upsert; O-level update | `[ ]` |

#### Liquibase migrations

| # | Deliverable | Status |
|---|---|---|
| C-20 | `resources/db/changelog/db.changelog-master.xml` | `[ ]` |
| C-21 | `changes/001-initial-schema.xml` — `services` table: id, endpoint_url (unique), bsm_payload (JSONB), olevel, slevel, liveness_status, registered_at | `[ ]` |
| C-22 | `changes/002-verification-columns.xml` — verification_status, olevel_checked_at, sanctions_cleared, gleif_lei | `[ ]` |
| C-23 | `changes/003-liveness-metrics.xml` — last_checked_at, uptime_30d_percent, avg_response_ms, consecutive_failures | `[ ]` |

#### Configuration

| # | Deliverable | Status |
|---|---|---|
| C-24 | `resources/application.properties` — datasource, Liquibase enabled, port 8180, GLEIF URL, OpenCorporates key, sanctions cache path, log level | `[ ]` |

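The commit message explains why the JSONB mapping in C-18 needs care: plain JDBC binds the payload as varchar, which Postgres rejects for a jsonb column. With Hibernate that is what `@ColumnTransformer(write = "?::jsonb")` addresses; at the raw-SQL level the same server-side cast looks like this (statement text illustrative, column names from C-21 — not taken from the migration files):

```java
public class JsonbCastSketch {

    /**
     * Parameterized insert for the services table. The ?::jsonb cast makes
     * Postgres convert the bound string parameter into jsonb server-side;
     * without it, a plain setString() bind is rejected for a jsonb column.
     */
    static String insertServiceSql() {
        return "INSERT INTO services (endpoint_url, bsm_payload) VALUES (?, ?::jsonb)";
    }
}
```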
---

### apix-spider — Liveness Scheduler (Quarkus 3.x app)

Separate module, separate container, separate lifecycle. Depends on `apix-common`. Connects to the same PostgreSQL DB; Liquibase **disabled** (`quarkus.liquibase.migrate-at-start=false`). Quarkus extensions: Quarkus Scheduler, Hibernate ORM + Panache, PostgreSQL JDBC, REST Client Reactive, SmallRye Health.

| # | Deliverable | Status |
|---|---|---|
| C-25 | `SpiderScheduler.java` — `@Scheduled(every="${spider.interval:15m}")`; loads all active services; dispatches to `LivenessFetcher` via virtual thread pool | `[ ]` |
| C-26 | `LivenessFetcher.java` — `@RestClient`; async HTTP GET with 5s timeout; `@RunOnVirtualThread` | `[ ]` |
| C-27 | `LivenessEvaluator.java` — pure logic: HTTP status code + response time ms → `LivenessStatus`; no I/O; no Quarkus dependency | `[ ]` |
| C-28 | `OpenApiParser.java` — fetch + parse OpenAPI spec; verify declared capabilities present | `[ ]` |
| C-29 | `McpParser.java` — fetch + parse MCP spec URL | `[ ]` |
| C-30 | `model/SpiderServiceView.java` — Panache entity (read/write subset of `services` table): endpoint_url, liveness_status, last_checked_at, uptime_30d_percent, avg_response_ms, consecutive_failures | `[ ]` |
| C-31 | `repository/SpiderRepository.java` — load services due for check; write liveness result | `[ ]` |
| C-32 | `resources/application.properties` — datasource, Liquibase disabled, scheduler interval, spider HTTP timeout, port 8082 (internal only) | `[ ]` |

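C-27's mapping is essentially specified by the C-48 test cases (200+fast = LIVE, 200+slow = DEGRADED, 503 = UNREACHABLE, timeout = UNREACHABLE). A sketch under those rules — the 2 000 ms slow threshold and the handling of 3xx/4xx responses are assumptions, as the log does not pin them down:

```java
import java.util.OptionalInt;

public class LivenessEvaluatorSketch {

    enum LivenessStatus { PENDING, LIVE, DEGRADED, UNREACHABLE } // per C-02

    /** Assumed threshold separating LIVE from DEGRADED; not fixed in this log. */
    static final long SLOW_MS = 2_000;

    /**
     * Pure function per C-27/C-48: no I/O, no framework types.
     * An empty status models a timeout or failed connection.
     */
    static LivenessStatus evaluate(OptionalInt httpStatus, long responseMs) {
        if (httpStatus.isEmpty()) return LivenessStatus.UNREACHABLE; // timeout
        int code = httpStatus.getAsInt();
        if (code >= 500) return LivenessStatus.UNREACHABLE;          // e.g. 503
        if (code >= 200 && code < 300) {
            return responseMs <= SLOW_MS ? LivenessStatus.LIVE : LivenessStatus.DEGRADED;
        }
        return LivenessStatus.DEGRADED; // assumed: reachable but unexpected status (3xx/4xx)
    }
}
```

Keeping this as a pure function is what lets C-48 live in the fast plain-JUnit tier while only C-52 needs a running Quarkus instance.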
---

### apix-portal — Web Portal (Quarkus 3.x app)

Depends on `apix-common`. Calls `apix-registry` via REST Client — no direct DB access. Quarkus extensions: RESTEasy Reactive, Qute, REST Client Reactive, SmallRye Health.

| # | Deliverable | Status |
|---|---|---|
| C-33 | `resource/PortalResource.java` — `GET /`, `/services/{id}`, `/search`, `/register`; delegates to `RegistryClient` | `[ ]` |
| C-34 | `resource/AdminResource.java` — `GET /admin`, `POST /admin/olevel` (API-key protected) | `[ ]` |
| C-35 | `client/RegistryClient.java` — `@RegisterRestClient`; two methods: `search(capability, page)` → `List<ServiceSummaryDto>`; `getDetail(id)` → `ServiceDetailDto` (full BSM payload + all registry metadata); typed REST client calling `apix-registry` | `[ ]` |
| C-36 | `templates/index.html` (Qute, `@CheckedTemplate`) — registry stats, search box | `[ ]` |
| C-37 | `templates/register.html` (Qute) — BSM registration form; HTMX inline validation | `[ ]` |
| C-38 | `templates/service.html` (Qute) — human-targeted service detail page; see layout spec below | `[ ]` |
| C-39 | `templates/search.html` (Qute) — capability search results, HTMX pagination | `[ ]` |
| C-40 | `templates/admin.html` (Qute) — pending verifications, O-level assignment, reference registration flags | `[ ]` |
| C-41 | `resources/META-INF/resources/style.css` — minimal CSS, no framework | `[ ]` |
| C-42 | `resources/application.properties` — registry base URL, port 8081, log level | `[ ]` |

#### service.html — human-readable service detail page (C-38 layout spec)

A human visiting `/services/{id}` is evaluating whether to use this service. The page answers four questions in order: *Who is this? Can I trust them? What exactly does it do? How do I call it?*

**Section 1 — Identity hero**
- Service `name` (h1, prominent)
- `description` (full text, not truncated)
- Two inline trust chips: O-level badge (colored pill: grey=O-0, blue=O-1/O-2/O-3, green=O-4/O-5) + liveness dot (green/amber/red) — both visible above the fold without scrolling

**Section 2 — Trust verification card**
- O-level: large badge icon + level name (e.g. "Legal Entity Verified") + 2-sentence plain-English explanation (e.g. *"The provider's legal incorporation has been confirmed against the GLEIF global legal entity database. This means APIX has independently verified that the registrant is a real legal entity — not self-declared."*) + GLEIF LEI link if present
- S-level: same pattern (badge + name + explanation)
- "Verified on" date — relative ("3 days ago") with absolute date as tooltip; "Reference entry by BSF" badge if applicable (grey label "Registered by Bot Standards Foundation — operator not yet self-registered")

**Section 3 — Liveness status card**
- Colored status row: dot + label (LIVE / DEGRADED / UNREACHABLE / PENDING)
- Uptime last 30 days: percentage bar + number (e.g. "98.4%")
- Average response time: "142 ms"
- Last checked: relative time ("8 minutes ago")
- Check frequency note: "Automatically verified every 15 minutes by the APIX Spider"

**Section 4 — Capabilities**
- Each `capabilities[]` entry as a visual chip (tag-style)
- If BSM includes a description per capability, show it beneath the chip

**Section 5 — Pricing** (conditional)
- Per-call price + currency + billing unit (e.g. "€0.02 / call")
- Billing model if declared
- "Pricing not declared — contact provider" fallback line if the BSM pricing object is absent

**Section 6 — Contact & registration**
- Registrant contact email (mailto link)
- Registered since (human date)
- BSM version

**Section 7 — Integration** (`<details>` collapsible; closed by default)
- Endpoint URL with one-click copy button
- OpenAPI spec URL link + MCP spec URL link (if declared in BSM)
- Minimal HTTP example snippet:

```
GET {endpoint}
X-APIX-Caller: your-agent-id
```

- Link to machine-readable entry: `GET /api/services/{id}` (JSON)

#### ServiceDetailViewModel — portal-internal view model

| # | Deliverable | Status |
|---|---|---|
| C-65 | `model/ServiceDetailViewModel.java` — Java record; portal-internal (not in `apix-common`); carries: all BSM payload fields; `oLevelLabel` (e.g. "Legal Entity Verified"); `oLevelDescription` (2-sentence plain-English, locale-resolved from Messages); `oLevelColorClass` (CSS class: `trust-unverified`/`trust-basic`/`trust-strong`); same pattern for `sLevel*`; `livenessLabel`, `livenessColorClass`, `livenessExplanation`; `formattedUptime` ("98.4%"), `formattedAvgResponseMs` ("142 ms"); `registeredAtRelative`, `registeredAtIso`; `lastCheckedAtRelative`, `lastCheckedAtIso`; `gleifLeiUrl` (null or `https://search.gleif.org/#/record/{lei}`); `isReferenceEntry` boolean | `[ ]` |
| C-66 | Update `PortalResource.java` — `GET /services/{id}`: call `RegistryClient.getDetail(id)`, build `ServiceDetailViewModel` via `ServiceDetailViewModelFactory`, pass to `service.html` template | `[ ]` |
| C-67 | `service/ServiceDetailViewModelFactory.java` — CDI bean; receives `ServiceDetailDto` + `Locale` + injected `Messages`; resolves O-level description string by level index (keys: `service.oLevel.0.description` … `service.oLevel.5.description`); computes relative timestamps via `java.time`; builds `ServiceDetailViewModel` | `[ ]` |

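C-67's relative timestamps ("3 days ago", "8 minutes ago") need nothing beyond `java.time`. A sketch of the rendering rule — the unit cutoffs are plausible defaults chosen here, not taken from the log:

```java
import java.time.Duration;
import java.time.Instant;

public class RelativeTimeSketch {

    /** Renders (now - then) as a coarse English phrase; cutoffs are illustrative. */
    static String relative(Instant then, Instant now) {
        Duration d = Duration.between(then, now);
        long m = d.toMinutes();
        if (m < 1)  return "just now";
        if (m < 60) return m + (m == 1 ? " minute ago" : " minutes ago");
        long h = d.toHours();
        if (h < 24) return h + (h == 1 ? " hour ago" : " hours ago");
        long days = d.toDays();
        return days + (days == 1 ? " day ago" : " days ago");
    }
}
```

Taking `now` as a parameter rather than calling `Instant.now()` inside is what makes the C-68 assertion "`registeredAtRelative` computed from fixed `Instant`" testable without clock tricks.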
**O-level description message keys** (to be added to C-54 `Messages.java` + C-55/C-56 properties files):

| Key | EN value |
|---|---|
| `service.oLevel.0.name` | Unverified |
| `service.oLevel.0.description` | Self-declared only. No identity claims have been independently confirmed by APIX. Treat output with appropriate caution. |
| `service.oLevel.1.name` | Identity Verified |
| `service.oLevel.1.description` | The provider's domain ownership has been confirmed via DNS verification and their business email is reachable. APIX has confirmed this registrant controls the declared domain. |
| `service.oLevel.2.name` | Legal Entity Verified |
| `service.oLevel.2.description` | The provider's legal incorporation has been confirmed against the GLEIF global legal entity database or an authoritative company register. This is a real, registered legal entity — not self-declared. |
| `service.oLevel.3.name` | Hygiene Verified |
| `service.oLevel.3.description` | The provider publishes a security contact, enforces email authentication (DMARC/SPF), and maintains reachable legal documents. They meet baseline operational transparency standards. |
| `service.oLevel.4.name` | Operationally Verified |
| `service.oLevel.4.description` | An accredited APIX Verifier has assessed this provider's operational practices, incident response, and SLA track record. This level requires a human review process. |
| `service.oLevel.5.name` | Audited |
| `service.oLevel.5.description` | The provider holds a third-party security audit certificate (SOC 2 Type II or ISO 27001) confirmed by an APIX Accredited Verifier. The highest trust level available. |

Same key pattern for `service.sLevel.*`. German equivalents go in `messages_de.properties`.

#### i18n — server-side message bundles (Quarkus @MessageBundle)

| # | Deliverable | Status |
|---|---|---|
| C-54 | `Messages.java` — `@MessageBundle` interface; one method per translatable string key; sections: `nav.*`, `home.*`, `register.*`, `search.*`, `service.*`, `admin.*`, `help.*`, `error.*` | `[ ]` |
| C-55 | `resources/i18n/messages.properties` — English strings (all keys defined; this is the build-time default) | `[ ]` |
| C-56 | `resources/i18n/messages_de.properties` — German strings (same key set as EN) | `[ ]` |
| C-57 | `LocaleResolver.java` — CDI bean; reads `apix-locale` cookie first, then `Accept-Language` header, then falls back to `Locale.ENGLISH`; returns `java.util.Locale` | `[ ]` |
| C-58 | `resource/LocaleResource.java` — `POST /locale`; validates `lang` param against `["en","de"]`; sets `apix-locale` cookie (HttpOnly, SameSite=Lax, path `/`); redirects to `Referer` | `[ ]` |

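C-57's resolution order (cookie, then `Accept-Language`, then English) combined with the C-58 `["en","de"]` whitelist reduces to a few lines of plain Java; `Locale.LanguageRange` handles the header's quality weights. The method shape below is a sketch, not the committed signature — null arguments model an absent cookie or header:

```java
import java.util.List;
import java.util.Locale;
import java.util.Set;

public class LocaleResolverSketch {

    static final Set<String> SUPPORTED = Set.of("en", "de"); // C-58 whitelist

    /**
     * Resolution order per C-57: apix-locale cookie wins, then the best
     * Accept-Language match (RFC 4647 lookup), then Locale.ENGLISH.
     */
    static Locale resolve(String cookieValue, String acceptLanguageHeader) {
        if (cookieValue != null && SUPPORTED.contains(cookieValue)) {
            return Locale.forLanguageTag(cookieValue);
        }
        if (acceptLanguageHeader != null) {
            List<Locale.LanguageRange> ranges = Locale.LanguageRange.parse(acceptLanguageHeader);
            String best = Locale.lookupTag(ranges, SUPPORTED);
            if (best != null) return Locale.forLanguageTag(best);
        }
        return Locale.ENGLISH;
    }
}
```

These three branches are exactly the C-63 test cases: DE header resolves German, an EN cookie overrides a DE header, and nothing at all falls back to English.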
#### Help system — overlay engine + tour content

| # | Deliverable | Status |
|---|---|---|
| C-59 | `resources/META-INF/resources/help.js` — client-side tour engine; spotlight four-wing dimming (`help-dim-top/left/right/bottom`) + highlight ring; draggable tour card (header grip); progress dots; state indicator; page-help drawer (slide-in right); context filter (shows only tours whose `pages` includes current `<body data-page-id>`); reads `window.PAGE_TOURS` + `window.PAGE_HELP`; no external dependency | `[ ]` |
| C-60 | `templates/layout.html` — Qute base layout extended by all portal pages; contains: nav bar with help button (?) and language switcher form; overlay HTML (4 wing divs + highlight ring + tour card shell + progress dots); help drawer HTML (guided tour list + page help section); `<script src="/help.js">`; all nav/chrome strings via `{inject:msg.*}` | `[ ]` |
| C-61 | `model/TourDefinition.java` + `model/TourStep.java` — Java records; `TourDefinition`: tourId, pages (list of page IDs), title, steps; `TourStep`: targetSelector (CSS selector), title, body, checkFn (optional JS function name for step validation) | `[ ]` |
| C-62 | `service/HelpContentService.java` — CDI bean; builds locale-resolved list of `TourDefinition` using injected `Messages`; serializes to JSON (Jackson); defines 5 MVP tours: `tour-agent-setup` (3 steps, home), `tour-register` (5 steps, register), `tour-search` (3 steps, search), `tour-trust` (4 steps, service detail), `tour-admin` (4 steps, admin) | `[ ]` |

---

### Tests

Tests live in each module's `src/test/`. Library module tests are plain JUnit (fast). Quarkus module tests use `@QuarkusTest` with `@QuarkusTestResource` for Testcontainers (PostgreSQL).

#### apix-verification tests (plain JUnit 5 + WireMock)

| # | Deliverable | Status |
|---|---|---|
| C-43 | `O1DnsVerifierTest` — valid domain, missing TXT record, wrong value | `[ ]` |
| C-44 | `O2GleifVerifierTest` — LEI match, unknown LEI, timeout (WireMock HTTP server) | `[ ]` |
| C-45 | `O3HygieneVerifierTest` — security.txt present/absent, DMARC/SPF checks | `[ ]` |
| C-46 | `SanctionsScreenerTest` — match, no match, false positive (local CSV fixture) | `[ ]` |
| C-47 | `VerificationPipelineTest` — O-0 → O-3 happy path; blocked at O-2 (sanctions); O-3 failure preserves O-2 | `[ ]` |
| C-48 | `LivenessEvaluatorTest` — pure logic: 200+fast=LIVE, 200+slow=DEGRADED, 503=UNREACHABLE, timeout=UNREACHABLE | `[ ]` |

#### apix-registry tests (@QuarkusTest + Testcontainers + RestAssured)
|
||||
|
||||
| # | Deliverable | Status |
|
||||
|---|---|---|
|
||||
| C-49 | `RegistryServiceTest` — register happy path, duplicate UPSERT, invalid BSM rejected | `[ ]` |
|
||||
| C-50 | `ServiceResourceTest` — capability search match/no-match/partial; HATEOAS root links present | `[ ]` |
|
||||
| C-51 | `RegisterResourceTest` — valid registration 201, missing API key 401, invalid payload 400 | `[ ]` |
|
||||
|
||||
#### apix-spider tests (@QuarkusTest + WireMock)
|
||||
|
||||
| # | Deliverable | Status |
|
||||
|---|---|---|
|
||||
| C-52 | `SpiderSchedulerTest` — trigger one check cycle; verify liveness written to DB | `[ ]` |
|
||||
|
||||
#### apix-portal tests (@QuarkusTest + WireMock for registry)
|
||||
|
||||
| # | Deliverable | Status |
|
||||
|---|---|---|
|
||||
| C-53 | `PortalResourceTest` — homepage renders; search form submits; registration form renders | `[ ]` |
|
||||
| C-63 | `LocaleResolverTest` — DE `Accept-Language` header → `Locale.GERMAN`; EN cookie overrides DE header → `Locale.ENGLISH`; absent header + absent cookie → `Locale.ENGLISH` default | `[ ]` |
|
||||
| C-64 | `HelpContentServiceTest` — register page returns exactly 5 tour steps; home page tour excludes `tour-admin`; DE locale produces DE strings in tour title/body JSON; page filter returns only tours referencing current page ID | `[ ]` |
|
||||
| C-68 | `ServiceDetailViewModelFactoryTest` — O-0 produces colorClass `trust-unverified` + EN description; O-2 with LEI → `gleifLeiUrl` non-null and correct; `registeredAtRelative` computed from fixed `Instant`; `isReferenceEntry` true when flag set; DE locale → DE description strings from properties | `[ ]` |
|
||||
|
||||
---

## Infrastructure Deliverables

### Docker / Compose — `infra/`

| # | Deliverable | Status |
|---|---|---|
| I-01 | `docker-compose.yml` — five services: registry (:8180), spider (:8082 internal), portal (:8081), db (postgres:16-alpine), caddy (:80/:443) | `[ ]` |
| I-02 | `docker-compose.override.yml` — dev overrides: JVM mode for all three Quarkus apps (`quarkus dev`); all ports exposed; no TLS; hot reload | `[ ]` |
| I-03 | `Caddyfile` — HTTPS auto-cert; `/api/*` → registry:8180; `/*` → portal:8081; spider has no public route | `[ ]` |
| I-04 | `apix-registry/Dockerfile` — multi-stage: GraalVM 21 Maven builder → UBI Minimal 8 runtime; non-root user; exposes 8180 | `[ ]` |
| I-05 | `apix-spider/Dockerfile` — multi-stage: GraalVM 21 Maven builder → UBI Minimal 8 runtime; non-root user; no exposed port (internal only) | `[ ]` |
| I-06 | `apix-portal/Dockerfile` — multi-stage: GraalVM 21 Maven builder → UBI Minimal 8 runtime; non-root user; exposes 8081 | `[ ]` |
| I-07 | `.env.example` — QUARKUS_DATASOURCE_JDBC_URL, API_KEY, GLEIF_API_URL, OPENCORPORATES_API_KEY, SANCTIONS_CACHE_PATH, SPIDER_INTERVAL, LOG_LEVEL, REGISTRY_BASE_URL (portal→registry) | `[ ]` |

### Hetzner — `infra/hetzner/apix-app/` (APIX application VPS)

| # | Deliverable | Status |
|---|---|---|
| I-08 | `provision.sh` — VPS bootstrap: Docker install + Swarm init (`docker swarm init`), firewall (80/443 only), swap, non-root user, Hetzner volume mount | `[ ]` |
| I-09 | `backup.sh` — pg_dump to Hetzner volume; retain 7 dumps; run via cron at 03:00 UTC | `[ ]` |
| I-10 | DNS setup notes — A record for `registry.botstandards.org` (or agreed domain) → apix-app VPS IP | `[ ]` |
| I-11 | `sanctions/download.sh` — download OFAC SDN, EU consolidated, UN SC sanctions lists; run weekly via cron | `[ ]` |
| I-12 | `docker-stack.yml` — production Swarm stack; all five services (registry, spider, portal, db, caddy); `deploy.update_config.order: start-first`; `deploy.rollback_config`; health check references; image tags pulled from Gitea registry | `[ ]` |

### Hetzner — `infra/hetzner/apix-gitea/` (Gitea VPS)

| # | Deliverable | Status |
|---|---|---|
| I-13 | `provision.sh` — Gitea VPS bootstrap: Docker install, firewall (22/80/443), swap, non-root user | `[ ]` |
| I-14 | `docker-compose.yml` — Gitea + Caddy on Gitea VPS; Gitea data volume; SQLite (no external DB) | `[ ]` |
| I-15 | `gitea-config/app.ini` — Gitea configuration: container registry enabled, Gitea Actions enabled, organisation `bot-standards-foundation`, domain `gitea.botstandards.org` | `[ ]` |
| I-16 | GitHub push mirror setup — configure Gitea push mirror to `github.com/bot-standards-foundation/*` for all repositories; triggered on every push to `main` | `[ ]` |
| I-17 | act_runner JVM install — Gitea Actions runner on Gitea VPS; Java 21 + Maven; handles Stage 1 (fast cycle) and Stage 3 (deploy) CI jobs | `[ ]` |
| I-18 | act_runner GraalVM install — Gitea Actions runner on Gitea VPS (or apix-app VPS off-hours); GraalVM 21; handles Stage 2 (native image build); configured to run max 1 concurrent native job | `[ ]` |
| I-19 | DNS setup notes — A record `gitea.botstandards.org` → Gitea VPS IP | `[ ]` |

### CI/CD Pipelines — `.gitea/workflows/`

| # | Deliverable | Trigger | Status |
|---|---|---|---|
| I-20 | `ci-fast.yml` — Stage 1: `mvn verify` (JVM); all module unit tests + `@QuarkusTest`; WireMock-based verification tests; ~3–5 min | Every push | `[ ]` |
| I-21 | `ci-native.yml` — Stage 2: `mvn package -Pnative` per Quarkus module; Docker multi-stage build; `@QuarkusIntegrationTest` against native container; push to Gitea registry as `<module>:main-<sha>` | Merge to `main` | `[ ]` |
| I-22 | `deploy.yml` — Stage 3: SSH to apix-app VPS; `docker service update --image <new> apix_<service>` for registry + spider + portal; poll `/q/health` until UP; fail + alert on timeout (Swarm auto-rolls back) | Git tag `v*` | `[ ]` |

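I-22's zero-downtime check reduces to a bounded retry loop: probe `/q/health`, succeed as soon as it reports UP, fail the deploy when attempts run out. A minimal sketch of that loop's logic, with the HTTP call abstracted behind a `Supplier<Boolean>`; the class and method names here are illustrative, not part of the deliverables:

```java
import java.util.function.Supplier;

public class HealthPoller {

    /** Polls the supplied check until it returns true or attempts are exhausted. */
    public static boolean awaitHealthy(Supplier<Boolean> check, int maxAttempts, long sleepMillis)
            throws InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (Boolean.TRUE.equals(check.get())) {
                return true; // service reported UP
            }
            if (attempt < maxAttempts) {
                Thread.sleep(sleepMillis); // back off before the next probe
            }
        }
        return false; // caller fails the pipeline; Swarm then rolls the service back
    }
}
```

In the real `deploy.yml` the supplier would wrap a `curl`/HTTP GET against `/q/health`; keeping the probe injectable is what makes the timeout behaviour unit-testable without a running service.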
### Local Developer Scripts — `scripts/`

| # | Deliverable | Status |
|---|---|---|
| I-23 | `setup-dev.sh` — idempotent local setup: check/install Java 21 (SDKMAN prompt), Maven, Docker; copy `.env.example` → `.env` if missing; start PostgreSQL container; run Liquibase migrations via Maven; print next steps | `[ ]` |
| I-24 | `dev.sh` — start all three Quarkus modules in dev mode concurrently; each in its own named tmux pane (or background with log files if tmux absent); prints URLs on start | `[ ]` |
| I-25 | `restart.sh [registry|spider|portal|all]` — kill and restart the specified dev-mode process; defaults to `all`; reads PID file or tmux session name | `[ ]` |
| I-26 | `stop.sh` — stop all dev-mode Quarkus processes; stop PostgreSQL container; clean PID files | `[ ]` |
| I-27 | `reset.sh` — stop everything; drop and recreate local DB; re-run migrations; restart dev mode | `[ ]` |
| I-28 | `logs.sh [registry|spider|portal]` — tail logs for a specific dev-mode service; or `all` for multiplexed output | `[ ]` |

---

## Build Sequence

Develop in JVM mode (`quarkus dev`) throughout — fast reload, continuous testing. Native build only in Block 5 for the production image.

### Block 1 — Multi-Module Foundation (Weeks 1–2)
- C-00 (parent `pom.xml` — BOM, module declarations, plugin config)
- Create five Maven modules: `apix-common`, `apix-verification`, `apix-registry`, `apix-spider`, `apix-portal`
- Generate three Quarkus modules via code.quarkus.io (or `mvn quarkus:create`):
  - `apix-registry`: RESTEasy Reactive, Hibernate ORM + Panache, PostgreSQL JDBC, Liquibase, SmallRye Health, Quarkus Security
  - `apix-spider`: Quarkus Scheduler, Hibernate ORM + Panache, PostgreSQL JDBC, REST Client Reactive, SmallRye Health
  - `apix-portal`: RESTEasy Reactive, Qute, REST Client Reactive, SmallRye Health
- C-01 to C-05 (apix-common: enums + DTOs)
- C-20 to C-24 (Liquibase changelogs + registry application.properties)
- I-01, I-07 (docker-compose skeleton + .env.example)
- D-05 to D-08, D-34 to D-47 (constraints + ADRs)

### Block 2 — Core Registry API (Weeks 3–4)
- C-06 to C-12 (apix-verification: all verifiers + pipeline + config)
- C-13 to C-19 (apix-registry: resources, services, model, repository)
- C-43 to C-48 (apix-verification tests — plain JUnit, no Quarkus context needed)
- C-49 to C-51 (apix-registry tests — @QuarkusTest + Testcontainers)
- D-09 to D-11 (context + technical diagrams)

### Block 3 — Spider Module (Weeks 5–6)
- C-25 to C-32 (apix-spider: scheduler, fetcher, evaluator, parsers, entity, repository, config)
- C-52 (SpiderSchedulerTest)
- I-05 (spider Dockerfile — multi-stage native)
- D-20 to D-23 (runtime view sequence diagrams)

### Block 4 — Portal Module + i18n + Help System + Human-Readable Service Detail (Weeks 7–9)
- C-33 to C-42 (apix-portal: resources, REST client, Qute templates, CSS, config)
- C-65 to C-67 (ServiceDetailViewModel, PortalResource update, ServiceDetailViewModelFactory)
- C-54 to C-58 (i18n: Messages interface with O/S-level description keys, EN/DE properties files, LocaleResolver, LocaleResource)
- C-59 to C-62 (help system: help.js, base layout, tour model records, HelpContentService with 5 tours)
- Update C-36 to C-40 (Qute templates: extend layout.html; replace hardcoded strings with `{inject:msg.*}`; add `<body data-page-id="...">` attribute; inject PAGE_TOURS + PAGE_HELP script block via HelpContentService)
- C-38 service.html: 7-section human-friendly layout (identity hero, trust card, liveness card, capabilities, pricing, contact, integration collapsible)
- C-53 (PortalResourceTest)
- C-63, C-64, C-68 (LocaleResolverTest, HelpContentServiceTest, ServiceDetailViewModelFactoryTest)
- I-03 (Caddyfile)
- I-06 (portal Dockerfile — multi-stage native)
- D-52 to D-53 (ADR-013 i18n, ADR-014 help system)

### Block 5 — Gitea + CI/CD Infrastructure (Weeks 9–10)
- I-13 to I-19 (provision Gitea VPS, install Gitea + Caddy, configure container registry, GitHub mirror, act_runners)
- I-20 (ci-fast.yml — Stage 1 pipeline; verify it passes on first push)
- I-21 (ci-native.yml — Stage 2; first successful native build + push to Gitea registry)
- I-23 to I-28 (local dev scripts: setup-dev, dev, restart, stop, reset, logs)
- I-02 (docker-compose.override.yml — JVM dev mode for all three Quarkus modules)

### Block 6 — Production Deployment + Zero Downtime (Weeks 10–11)
- I-08 to I-12 (provision apix-app VPS as Swarm node, backup, sanctions download, docker-stack.yml)
- I-04 to I-06 (Dockerfiles for all three Quarkus modules — multi-stage native)
- I-22 (deploy.yml — Stage 3; first tagged release deployed with zero-downtime verification)
- D-24 to D-27 (deployment view — two-VPS diagram + Swarm rolling update sequence)
- Fix any GraalVM reflection hints discovered during native integration tests

### Block 7 — Arc42 Completion + Real Services (Week 12)
- Remaining arc42 docs (D-01 to D-04, D-40 to D-46)
- Register RS-01 to RS-08 (self + Lexnexum + reference registrations)
- Manual outreach to founding member candidates (RS-09)

---

## Real Services Target List

At least 5 live registered services required for a credible PoC. Candidates:

| # | Service | Type | Source |
|---|---|---|---|
| RS-01 | APIX index itself (`/health`, `/services`) | Self-registration | Day 1 |
| RS-02 | Public OpenAPI test service (Petstore or equivalent) | Demo | Day 1 |
| RS-03 | BSF website bot endpoint | Self-registration | Week 4 |
| RS-04 | Lexnexum — legal search / legal portal bot endpoint | innoit.de / lexnexum.ai — controlled registrant | Week 4 |
| RS-05 | GLEIF API — `legal-entity.lookup` capability | Reference registration by BSF; invite GLEIF to self-upgrade | Week 5 |
| RS-06 | OpenCorporates API — `company.lookup` capability | Reference registration by BSF | Week 5 |
| RS-07 | EU Sanctions API (eu-sanctions.io or direct list endpoint) — `sanctions.screen` capability | Reference registration by BSF | Week 5 |
| RS-08 | Companies House UK API — `org.verify.uk` capability | Reference registration by BSF | Week 5 |
| RS-09 | Early adopter from founding member outreach | Real | Week 8+ |
| RS-10 | Developer community volunteer (HN / LangChain) | Community | Week 10+ |

---

## Open Questions

| # | Question | Owner | Status |
|---|---|---|---|
| OQ-MVP-01 | Domain name for public PoC — `registry.apix.dev`, `index.botstandards.org`, other? | Carsten | `[ ]` |
| OQ-MVP-02 | GitHub organisation for open source repo — `bot-standards-foundation` or personal? | Carsten | `[ ]` |
| OQ-MVP-03 | API key distribution for write access during PoC — how are early registrants onboarded? | Carsten | `[ ]` |
| OQ-MVP-04 | Spider check frequency — every 15 min is reasonable for PoC; confirm acceptable load on registrant services | Carsten | `[ ]` |
| OQ-MVP-05 | Hetzner server size — CX22 (2 vCPU, 4GB) sufficient for PoC; upgrade path if needed | Carsten | `[ ]` |
| OQ-MVP-06 | OpenCorporates API key — free tier sufficient for PoC volume? Confirm rate limits before build | Carsten | `[ ]` |
| OQ-MVP-07 | Sanctions list update cadence — weekly download of OFAC/EU/UN lists acceptable? Check if any list requires a licence for automated use | Carsten | `[ ]` |
| OQ-MVP-08 | GLEIF API — confirm no API key required for public LEI lookup (currently free/open); check terms of use for automated batch use | Carsten | `[ ]` |
| OQ-MVP-09 | Reference registrations (RS-05 to RS-08): BSF registers third-party APIs at O-0. Confirm BSF ToS allows this. Prepare outreach template inviting self-registration upgrade. | Carsten | `[ ]` |
| OQ-MVP-10 | Gitea domain — `gitea.botstandards.org` or separate domain (e.g. `code.apix.dev`)? Confirm before provisioning Caddy TLS cert | Carsten | `[ ]` |
| OQ-MVP-11 | GitHub org name — `bot-standards-foundation` exists? Create before configuring push mirror | Carsten | `[ ]` |
| OQ-MVP-12 | Native build runner placement — on Gitea VPS or apix-app VPS? apix-app VPS risks resource contention during native build; Gitea VPS is cleaner but requires GraalVM installed there | Carsten | `[ ]` |
| OQ-MVP-13 | Zero-downtime validation — how to confirm health check passes before declaring deploy successful in CI? Poll `/q/health` with retry loop or use Swarm service inspection | Carsten | `[ ]` |
@@ -0,0 +1,57 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>org.botstandards</groupId>
        <artifactId>apix-parent</artifactId>
        <version>${revision}</version>
    </parent>

    <artifactId>apix-common</artifactId>
    <name>APIX :: Common</name>
    <description>Shared enums, DTOs, and value types. No Quarkus dependency.</description>

    <build>
        <plugins>
            <plugin>
                <groupId>io.smallrye</groupId>
                <artifactId>jandex-maven-plugin</artifactId>
                <version>3.2.0</version>
                <executions>
                    <execution>
                        <id>make-index</id>
                        <goals>
                            <goal>jandex</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

    <dependencies>
        <!-- Bean Validation API + @URL from Hibernate Validator annotations -->
        <dependency>
            <groupId>jakarta.validation</groupId>
            <artifactId>jakarta.validation-api</artifactId>
        </dependency>
        <dependency>
            <groupId>org.hibernate.validator</groupId>
            <artifactId>hibernate-validator</artifactId>
        </dependency>
        <!-- Jackson annotations for JSON field naming / null-handling hints -->
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-annotations</artifactId>
        </dependency>
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>
@@ -0,0 +1,46 @@
package org.botstandards.apix.common;

import com.fasterxml.jackson.annotation.JsonInclude;
import jakarta.validation.Valid;
import jakarta.validation.constraints.Email;
import jakarta.validation.constraints.NotBlank;
import jakarta.validation.constraints.NotEmpty;
import org.hibernate.validator.constraints.URL;

import java.math.BigDecimal;
import java.time.Instant;
import java.util.List;
import java.util.UUID;

@JsonInclude(JsonInclude.Include.NON_NULL)
public record BsmPayload(
        @NotBlank String name,
        @NotBlank String description,
        @NotBlank @URL String endpoint,
        @NotEmpty List<@NotBlank String> capabilities,
        @NotBlank @Email String registrantEmail,
        @NotBlank String registrantName,
        @NotBlank String registrantJurisdiction,
        OrgType registrantOrgType,
        String registrantLei,
        @URL String openApiSpecUrl,
        @URL String mcpSpecUrl,
        @URL String policyUrl,
        @URL String securityContactUrl,
        @Valid Pricing pricing,
        @NotBlank String bsmVersion,
        ServiceStage serviceStage,
        // IoT transition fields — null for non-IoT services
        Boolean locked,
        Instant sunsetAt,
        @URL String migrationGuideUrl,
        List<UUID> replacesServiceIds
) {
    @JsonInclude(JsonInclude.Include.NON_NULL)
    public record Pricing(
            String billingModel,
            BigDecimal pricePerCall,
            String currency,
            String billingUnit
    ) {}
}
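The commit message notes that `sunsetAt` is truncated to microseconds in the BDD steps: Postgres `timestamptz` stores microsecond precision, while `Instant.now()` on modern JDKs can carry nanoseconds, so a value only compares equal after a DB round trip if truncated first. A minimal sketch of that truncation, assuming a helper like this is what the steps use (the class name is illustrative):

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class Timestamps {

    /** Truncates an Instant to microseconds, matching Postgres timestamptz precision. */
    public static Instant toDbPrecision(Instant instant) {
        return instant == null ? null : instant.truncatedTo(ChronoUnit.MICROS);
    }
}
```

Applying this before asserting equality against a persisted `sunsetAt` avoids flaky comparisons that differ only in sub-microsecond digits.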
@@ -0,0 +1,14 @@
package org.botstandards.apix.common;

public enum ChangeType {
    REGISTERED,
    BSM_UPDATED,
    ORG_TYPE_CHANGED,
    OLEVEL_CHANGED,
    STAGE_CHANGED,
    OWNERSHIP_TRANSFERRED,
    REGISTRY_STATUS_CHANGED,
    SUNSET_DECLARED,
    LOCK_RELEASED,
    REPLACEMENT_DECLARED
}
@@ -0,0 +1,8 @@
package org.botstandards.apix.common;

public enum LivenessStatus {
    PENDING,
    LIVE,
    DEGRADED,
    UNREACHABLE
}
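C-48 describes the liveness evaluator as pure logic over status code and latency (200+fast=LIVE, 200+slow=DEGRADED, 503=UNREACHABLE, timeout=UNREACHABLE). A hedged sketch of that mapping; the 2-second slow threshold is an assumption, not taken from the spec, and the enum is copied locally so the sketch compiles standalone:

```java
import java.util.OptionalInt;

public class LivenessEvaluatorSketch {

    // Local copy of the LivenessStatus enum above, so this sketch is self-contained.
    enum LivenessStatus { PENDING, LIVE, DEGRADED, UNREACHABLE }

    static final long SLOW_THRESHOLD_MS = 2000; // assumed cutoff for DEGRADED

    /** An empty statusCode means the probe timed out or could not connect. */
    static LivenessStatus evaluate(OptionalInt statusCode, long latencyMillis) {
        if (statusCode.isEmpty()) {
            return LivenessStatus.UNREACHABLE; // timeout / connection failure
        }
        if (statusCode.getAsInt() >= 500) {
            return LivenessStatus.UNREACHABLE; // 503 and other server errors
        }
        return latencyMillis <= SLOW_THRESHOLD_MS
                ? LivenessStatus.LIVE       // 2xx and fast
                : LivenessStatus.DEGRADED;  // 2xx but slow
    }
}
```

Keeping the function free of I/O is what lets `LivenessEvaluatorTest` run as plain JUnit without a Quarkus context.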
@@ -0,0 +1,10 @@
package org.botstandards.apix.common;

public enum OLevel {
    UNVERIFIED,
    IDENTITY_VERIFIED,
    LEGAL_ENTITY_VERIFIED,
    HYGIENE_VERIFIED,
    OPERATIONALLY_VERIFIED,
    AUDITED
}
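The deliverables refer to these levels as O-0 through O-5; since the enum declares them in ascending order, the display label can be derived from the ordinal. This is a sketch under the assumption that declaration order stays stable (an explicit per-constant mapping would be more robust against reordering):

```java
public class OLevelLabels {

    // Local copy of the OLevel enum above, so the sketch compiles standalone.
    enum OLevel { UNVERIFIED, IDENTITY_VERIFIED, LEGAL_ENTITY_VERIFIED,
                  HYGIENE_VERIFIED, OPERATIONALLY_VERIFIED, AUDITED }

    /** UNVERIFIED -> "O-0" ... AUDITED -> "O-5". */
    static String label(OLevel level) {
        return "O-" + level.ordinal();
    }
}
```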
@@ -0,0 +1,9 @@
package org.botstandards.apix.common;

public enum OrgType {
    INDIVIDUAL,
    COMMERCIAL,
    NON_PROFIT,
    GOVERNMENT,
    ACADEMIC
}
@@ -0,0 +1,7 @@
package org.botstandards.apix.common;

public enum RegistryStatus {
    ACTIVE,
    SUSPENDED,
    ARCHIVED
}
@@ -0,0 +1,9 @@
package org.botstandards.apix.common;

public enum ServiceStage {
    DEVELOPMENT,
    BETA,
    PRODUCTION,
    DEPRECATED,
    DECOMMISSIONED
}
@@ -0,0 +1,17 @@
package org.botstandards.apix.common;

import java.time.Instant;
import java.util.List;
import java.util.UUID;

public record ServiceSummaryDto(
        UUID id,
        String name,
        List<String> capabilities,
        OLevel oLevel,
        LivenessStatus livenessStatus,
        ServiceStage serviceStage,
        RegistryStatus registryStatus,
        String endpoint,
        Instant lastCheckedAt
) {}
@@ -0,0 +1,7 @@
package org.botstandards.apix.common;

public record VerificationResult(
        OLevel oLevelAchieved,
        String blockedAtStep,
        String message
) {}
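C-47's pipeline contract — a failing step halts verification at that step but preserves the highest level already achieved (e.g. an O-3 failure keeps O-2) — can be sketched as an ordered fold over steps. Step names, the `Supplier<Boolean>` shape, and the `Result` record are illustrative, not the project's actual pipeline API:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

public class PipelineSketch {

    /** levelAchieved counts passed steps; blockedAtStep is null when all pass. */
    public record Result(int levelAchieved, String blockedAtStep) {}

    /** Runs steps in insertion order; each passing step raises the level by one. */
    public static Result run(Map<String, Supplier<Boolean>> steps) {
        int level = 0;
        for (Map.Entry<String, Supplier<Boolean>> step : steps.entrySet()) {
            if (!step.getValue().get()) {
                return new Result(level, step.getKey()); // halt, keep earlier level
            }
            level++;
        }
        return new Result(level, null);
    }
}
```

A `LinkedHashMap` keeps the O-1..O-5 ordering deterministic; the first failing entry becomes `blockedAtStep`, mirroring `VerificationResult.blockedAtStep` above.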
@@ -0,0 +1,86 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>org.botstandards</groupId>
        <artifactId>apix-parent</artifactId>
        <version>${revision}</version>
    </parent>

    <artifactId>apix-portal</artifactId>
    <name>APIX :: Portal</name>
    <description>Web portal (Qute templates + HTMX). Calls registry via REST client — no direct DB access. Port 8081.</description>

    <dependencies>
        <dependency>
            <groupId>org.botstandards</groupId>
            <artifactId>apix-common</artifactId>
        </dependency>

        <!-- REST + templating -->
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-rest</artifactId>
        </dependency>
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-rest-qute</artifactId>
        </dependency>

        <!-- REST client to call apix-registry -->
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-rest-client</artifactId>
        </dependency>
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-rest-client-jackson</artifactId>
        </dependency>

        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-smallrye-health</artifactId>
        </dependency>

        <!-- Test -->
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-junit5</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>io.rest-assured</groupId>
            <artifactId>rest-assured</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.wiremock</groupId>
            <artifactId>wiremock</artifactId>
            <version>3.5.4</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>${quarkus.platform.group-id}</groupId>
                <artifactId>quarkus-maven-plugin</artifactId>
                <extensions>true</extensions>
                <executions>
                    <execution>
                        <goals>
                            <goal>build</goal>
                            <goal>generate-code</goal>
                            <goal>generate-code-tests</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
@@ -0,0 +1,152 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>org.botstandards</groupId>
        <artifactId>apix-parent</artifactId>
        <version>${revision}</version>
    </parent>

    <artifactId>apix-registry</artifactId>
    <name>APIX :: Registry</name>
    <description>REST API service. Owns the database schema (Liquibase). Port 8180.</description>

    <dependencies>
        <dependency>
            <groupId>org.botstandards</groupId>
            <artifactId>apix-common</artifactId>
        </dependency>
        <dependency>
            <groupId>org.botstandards</groupId>
            <artifactId>apix-verification</artifactId>
        </dependency>

        <!-- REST -->
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-rest</artifactId>
        </dependency>
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-rest-jackson</artifactId>
        </dependency>

        <!-- Persistence -->
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-hibernate-orm-panache</artifactId>
        </dependency>
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-jdbc-postgresql</artifactId>
        </dependency>
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-liquibase</artifactId>
        </dependency>

        <!-- Logging -->
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-logging-json</artifactId>
        </dependency>

        <!-- Validation / Security / Health -->
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-hibernate-validator</artifactId>
        </dependency>
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-security</artifactId>
        </dependency>
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-smallrye-health</artifactId>
        </dependency>

        <!-- Test -->
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-junit5</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>io.rest-assured</groupId>
            <artifactId>rest-assured</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.wiremock</groupId>
            <artifactId>wiremock</artifactId>
            <version>3.5.4</version>
            <scope>test</scope>
        </dependency>
        <!-- AssertJ — explicit because the Quarkus BOM does not export it as a transitive dependency -->
        <dependency>
            <groupId>org.assertj</groupId>
            <artifactId>assertj-core</artifactId>
            <scope>test</scope>
        </dependency>
        <!-- Cucumber BDD -->
        <dependency>
            <groupId>io.cucumber</groupId>
            <artifactId>cucumber-java</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>io.cucumber</groupId>
            <artifactId>cucumber-junit-platform-engine</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.junit.platform</groupId>
            <artifactId>junit-platform-suite</artifactId>
            <scope>test</scope>
        </dependency>
        <!-- Allure reporting -->
        <dependency>
            <groupId>io.qameta.allure</groupId>
            <artifactId>allure-cucumber7-jvm</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>${quarkus.platform.group-id}</groupId>
                <artifactId>quarkus-maven-plugin</artifactId>
                <extensions>true</extensions>
                <executions>
                    <execution>
                        <goals>
                            <goal>build</goal>
                            <goal>generate-code</goal>
                            <goal>generate-code-tests</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <!-- Used by setup-dev.sh: mvn liquibase:update -pl apix-registry -Dliquibase.url=... -->
            <plugin>
                <groupId>org.liquibase</groupId>
                <artifactId>liquibase-maven-plugin</artifactId>
                <configuration>
                    <changeLogFile>src/main/resources/db/changelog/db.changelog-master.xml</changeLogFile>
                </configuration>
                <dependencies>
                    <dependency>
                        <groupId>org.postgresql</groupId>
                        <artifactId>postgresql</artifactId>
                        <version>42.7.3</version>
                    </dependency>
                </dependencies>
            </plugin>
        </plugins>
    </build>
</project>
@@ -0,0 +1,25 @@
package org.botstandards.apix.registry.dto;

import com.fasterxml.jackson.annotation.JsonInclude;
import org.botstandards.apix.common.OLevel;
import org.botstandards.apix.common.ServiceStage;

import java.time.Instant;
import java.util.List;
import java.util.UUID;

@JsonInclude(JsonInclude.Include.NON_NULL)
public record ReplacementsResponse(
        Boolean locked,
        Instant sunsetAt,
        List<Candidate> candidates
) {
    @JsonInclude(JsonInclude.Include.NON_NULL)
    public record Candidate(
            UUID id,
            String name,
            String endpoint,
            OLevel oLevel,
            ServiceStage serviceStage
    ) {}
}
@@ -0,0 +1,34 @@
package org.botstandards.apix.registry.dto;

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import org.botstandards.apix.common.BsmPayload;
import org.botstandards.apix.common.OrgType;
import org.botstandards.apix.common.ServiceStage;

import java.time.Instant;
import java.util.List;
import java.util.UUID;

@JsonIgnoreProperties(ignoreUnknown = true)
public record ServicePatchRequest(
        String name,
        String description,
        String endpoint,
        List<String> capabilities,
        String registrantEmail,
        String registrantName,
        String registrantJurisdiction,
        OrgType registrantOrgType,
        String registrantLei,
        String openApiSpecUrl,
        String mcpSpecUrl,
        String policyUrl,
        String securityContactUrl,
        BsmPayload.Pricing pricing,
        String bsmVersion,
        ServiceStage serviceStage,
        Boolean locked,
        Instant sunsetAt,
        String migrationGuideUrl,
        List<UUID> replacesServiceIds
) {}
@@ -0,0 +1,71 @@
package org.botstandards.apix.registry.dto;

import com.fasterxml.jackson.annotation.JsonInclude;
import org.botstandards.apix.common.*;
import org.botstandards.apix.registry.entity.ServiceEntity;

import java.time.Instant;
import java.util.List;
import java.util.UUID;

@JsonInclude(JsonInclude.Include.NON_NULL)
public record ServiceResponse(
        UUID id,
        String name,
        String description,
        String endpoint,
        List<String> capabilities,
        String registrantEmail,
        String registrantName,
        String registrantJurisdiction,
        OrgType registrantOrgType,
        String registrantLei,
        String openApiSpecUrl,
        String mcpSpecUrl,
        String policyUrl,
        String securityContactUrl,
        BsmPayload.Pricing pricing,
        String bsmVersion,
        OLevel oLevel,
        LivenessStatus livenessStatus,
        ServiceStage serviceStage,
        RegistryStatus registryStatus,
        Boolean locked,
        Instant sunsetAt,
        String migrationGuideUrl,
        List<UUID> replacesServiceIds,
        Instant registeredAt,
        Instant lastUpdatedAt
) {
    public static ServiceResponse from(ServiceEntity e) {
        BsmPayload b = e.bsmPayload;
        return new ServiceResponse(
                e.id,
                b.name(),
                b.description(),
                b.endpoint(),
                b.capabilities(),
                b.registrantEmail(),
                b.registrantName(),
                b.registrantJurisdiction(),
                e.registrantOrgType,
                b.registrantLei(),
                b.openApiSpecUrl(),
                b.mcpSpecUrl(),
                b.policyUrl(),
                b.securityContactUrl(),
                b.pricing(),
                b.bsmVersion(),
                e.olevel,
                e.livenessStatus,
                e.serviceStage,
                e.registryStatus,
                e.locked,
                e.sunsetAt,
                e.migrationGuideUrl,
                b.replacesServiceIds(),
                e.registeredAt,
                e.lastUpdatedAt
        );
    }
}
@@ -0,0 +1,14 @@
package org.botstandards.apix.registry.dto;

import com.fasterxml.jackson.annotation.JsonInclude;

import java.util.UUID;

@JsonInclude(JsonInclude.Include.NON_NULL)
public record VersionHistoryEntry(
        UUID id,
        String type,
        String previousValue,
        String newValue,
        String createdAt
) {}
@@ -0,0 +1,64 @@
package org.botstandards.apix.registry.entity;

import jakarta.persistence.*;
import org.botstandards.apix.common.*;
import org.botstandards.apix.registry.persistence.BsmPayloadConverter;
import org.hibernate.annotations.ColumnTransformer;

import java.time.Instant;
import java.util.UUID;

@Entity
@Table(name = "services")
public class ServiceEntity {

    @Id
    @Column(columnDefinition = "uuid")
    public UUID id;

    @Column(name = "endpoint_url", nullable = false, unique = true)
    public String endpointUrl;

    @Convert(converter = BsmPayloadConverter.class)
    @Column(name = "bsm_payload", columnDefinition = "jsonb", nullable = false)
    @ColumnTransformer(write = "?::jsonb")
    public BsmPayload bsmPayload;

    @Enumerated(EnumType.STRING)
    @Column(nullable = false)
    public OLevel olevel = OLevel.UNVERIFIED;

    @Enumerated(EnumType.STRING)
    @Column(name = "liveness_status", nullable = false)
    public LivenessStatus livenessStatus = LivenessStatus.PENDING;

    @Column(name = "registered_at", nullable = false)
    public Instant registeredAt;

    @Enumerated(EnumType.STRING)
    @Column(name = "registrant_org_type", nullable = false)
    public OrgType registrantOrgType = OrgType.INDIVIDUAL;

    @Enumerated(EnumType.STRING)
    @Column(name = "service_stage", nullable = false)
    public ServiceStage serviceStage = ServiceStage.DEVELOPMENT;

    @Enumerated(EnumType.STRING)
    @Column(name = "registry_status", nullable = false)
    public RegistryStatus registryStatus = RegistryStatus.ACTIVE;

    @Column(nullable = false)
    public int version = 1;

    @Column(name = "last_updated_at")
    public Instant lastUpdatedAt;

    @Column(name = "locked")
    public Boolean locked;

    @Column(name = "sunset_at")
    public Instant sunsetAt;

    @Column(name = "migration_guide_url")
    public String migrationGuideUrl;
}
@@ -0,0 +1,27 @@
package org.botstandards.apix.registry.entity;

import jakarta.persistence.*;

import java.time.Instant;
import java.util.UUID;

@Entity
@Table(name = "service_replacements")
public class ServiceReplacementEntity {

    @Id
    @Column(columnDefinition = "uuid")
    public UUID id;

    @Column(name = "deprecated_service_id", nullable = false)
    public UUID deprecatedServiceId;

    @Column(name = "replacement_service_id", nullable = false)
    public UUID replacementServiceId;

    @Column(name = "declared_at", nullable = false)
    public Instant declaredAt;

    @Column(name = "compatibility_notes")
    public String compatibilityNotes;
}
+55
@@ -0,0 +1,55 @@
package org.botstandards.apix.registry.entity;

import jakarta.persistence.*;
import org.botstandards.apix.common.*;
import org.botstandards.apix.registry.persistence.BsmPayloadConverter;
import org.hibernate.annotations.ColumnTransformer;

import java.time.Instant;
import java.util.UUID;

@Entity
@Table(name = "service_versions")
public class ServiceVersionEntity {

    @Id
    @Column(columnDefinition = "uuid")
    public UUID id;

    @Column(name = "service_id", nullable = false)
    public UUID serviceId;

    @Column(nullable = false)
    public int version;

    @Column(name = "recorded_at", nullable = false)
    public Instant recordedAt;

    @Enumerated(EnumType.STRING)
    @Column(name = "change_type", nullable = false)
    public ChangeType changeType;

    @Convert(converter = BsmPayloadConverter.class)
    @Column(name = "bsm_payload", columnDefinition = "jsonb", nullable = false)
    @ColumnTransformer(write = "?::jsonb")
    public BsmPayload bsmPayload;

    @Enumerated(EnumType.STRING)
    @Column(name = "registrant_org_type", nullable = false)
    public OrgType registrantOrgType;

    @Enumerated(EnumType.STRING)
    @Column(nullable = false)
    public OLevel olevel;

    @Enumerated(EnumType.STRING)
    @Column(name = "service_stage", nullable = false)
    public ServiceStage serviceStage;

    @Enumerated(EnumType.STRING)
    @Column(name = "registry_status", nullable = false)
    public RegistryStatus registryStatus;

    @Column(name = "note")
    public String note;
}
+38
@@ -0,0 +1,38 @@
package org.botstandards.apix.registry.persistence;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.SerializationFeature;
import com.fasterxml.jackson.databind.json.JsonMapper;
import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule;
import jakarta.persistence.AttributeConverter;
import jakarta.persistence.Converter;
import org.botstandards.apix.common.BsmPayload;

@Converter(autoApply = true)
public class BsmPayloadConverter implements AttributeConverter<BsmPayload, String> {

    private static final JsonMapper MAPPER = JsonMapper.builder()
            .addModule(new JavaTimeModule())
            .disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS)
            .build();

    @Override
    public String convertToDatabaseColumn(BsmPayload payload) {
        if (payload == null) return null;
        try {
            return MAPPER.writeValueAsString(payload);
        } catch (JsonProcessingException e) {
            throw new IllegalStateException("Cannot serialize BsmPayload", e);
        }
    }

    @Override
    public BsmPayload convertToEntityAttribute(String json) {
        if (json == null) return null;
        try {
            return MAPPER.readValue(json, BsmPayload.class);
        } catch (JsonProcessingException e) {
            throw new IllegalStateException("Cannot deserialize BsmPayload", e);
        }
    }
}
+101
@@ -0,0 +1,101 @@
package org.botstandards.apix.registry.resource;

import jakarta.inject.Inject;
import jakarta.validation.Valid;
import jakarta.ws.rs.*;
import jakarta.ws.rs.core.*;
import org.botstandards.apix.common.BsmPayload;
import org.botstandards.apix.common.OLevel;
import org.botstandards.apix.registry.dto.ReplacementsResponse;
import org.botstandards.apix.registry.dto.ServicePatchRequest;
import org.botstandards.apix.registry.dto.ServiceResponse;
import org.botstandards.apix.registry.dto.VersionHistoryEntry;
import org.botstandards.apix.registry.service.RegistryService;
import org.eclipse.microprofile.config.inject.ConfigProperty;

import java.net.URI;
import java.util.List;
import java.util.Map;
import java.util.UUID;

@Path("/services")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class ServiceResource {

    @Inject
    RegistryService registryService;

    @ConfigProperty(name = "apix.api-key")
    String apiKey;

    @POST
    public Response register(@Valid BsmPayload payload, @HeaderParam("X-Api-Key") String key) {
        requireKey(key);
        var service = registryService.register(payload);
        return Response.created(URI.create("/services/" + service.id))
                .entity(Map.of("id", service.id.toString()))
                .build();
    }

    @GET
    @Path("/{id}")
    public ServiceResponse getById(@PathParam("id") UUID id) {
        return ServiceResponse.from(registryService.requireById(id));
    }

    @PATCH
    @Path("/{id}")
    public ServiceResponse patch(@PathParam("id") UUID id,
                                 ServicePatchRequest req,
                                 @HeaderParam("X-Api-Key") String key) {
        requireKey(key);
        return ServiceResponse.from(registryService.patch(id, req));
    }

    @GET
    public List<ServiceResponse> search(@QueryParam("capability") String capability,
                                        @QueryParam("stage") String stage) {
        if (capability == null || capability.isBlank()) {
            throw new BadRequestException("capability query parameter is required");
        }
        return registryService.search(capability, stage).stream()
                .map(ServiceResponse::from)
                .toList();
    }

    @PATCH
    @Path("/{id}/olevel")
    public ServiceResponse setOLevel(@PathParam("id") UUID id,
                                     @QueryParam("level") String level,
                                     @HeaderParam("X-Api-Key") String key) {
        requireKey(key);
        // Guard against a missing ?level= param, which would otherwise NPE in toUpperCase()
        if (level == null || level.isBlank()) {
            throw new BadRequestException("level query parameter is required");
        }
        OLevel oLevel = OLevel.valueOf(level.toUpperCase());
        return ServiceResponse.from(registryService.setOLevel(id, oLevel));
    }

    @GET
    @Path("/{id}/replacements")
    public Response getReplacements(@PathParam("id") UUID id,
                                    @QueryParam("minOLevel") String minOLevel) {
        ReplacementsResponse body = registryService.getReplacements(id, minOLevel);
        return Response.ok(body)
                .header("Cache-Control", "public, max-age=60")
                .build();
    }

    @GET
    @Path("/{id}/history")
    public List<VersionHistoryEntry> getHistory(@PathParam("id") UUID id) {
        return registryService.getHistory(id);
    }

    private void requireKey(String provided) {
        if (!apiKey.equals(provided)) {
            throw new NotAuthorizedException(
                    Response.status(401)
                            .entity(Map.of("message", "Invalid or missing API key"))
                            .build());
        }
    }
}
@@ -0,0 +1,23 @@
package org.botstandards.apix.registry.service;

import jakarta.enterprise.context.ApplicationScoped;

import java.time.Instant;

@ApplicationScoped
public class ClockService {

    // Test-controllable clock: null means "real time", non-null pins now() to a fixed instant
    private volatile Instant override = null;

    public Instant now() {
        return override != null ? override : Instant.now();
    }

    // Sets an absolute override (the instant to treat as "now"), not a relative offset
    public void advance(Instant instant) {
        this.override = instant;
    }

    public void reset() {
        this.override = null;
    }
}
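The override pattern in ClockService can be exercised standalone; a minimal sketch (the `ClockDemo` class below is illustrative, replicating the same logic outside the CDI container):

```java
import java.time.Instant;

public class ClockDemo {
    private static volatile Instant override = null;

    // Same contract as ClockService.now(): fixed instant while overridden, wall clock otherwise
    static Instant now() {
        return override != null ? override : Instant.now();
    }

    static void advance(Instant instant) {
        override = instant;
    }

    static void reset() {
        override = null;
    }

    public static void main(String[] args) {
        advance(Instant.parse("2031-01-01T00:00:00Z"));
        System.out.println(now()); // pinned to 2031-01-01T00:00:00Z while overridden
        reset();
        System.out.println(now().isAfter(Instant.parse("2031-01-01T00:00:00Z"))); // real clock again
    }
}
```

In the BDD suite this is the piece that lets a scenario jump past `sunsetAt` without sleeping: set the override, exercise the decommission path, then reset in teardown.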
+340
@@ -0,0 +1,340 @@
package org.botstandards.apix.registry.service;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import jakarta.persistence.EntityManager;
import jakarta.transaction.Transactional;
import jakarta.ws.rs.NotFoundException;
import jakarta.ws.rs.WebApplicationException;
import jakarta.ws.rs.core.Response;
import org.botstandards.apix.common.*;
import org.botstandards.apix.registry.dto.ReplacementsResponse;
import org.botstandards.apix.registry.dto.ServicePatchRequest;
import org.botstandards.apix.registry.dto.VersionHistoryEntry;
import org.botstandards.apix.registry.entity.ServiceEntity;
import org.botstandards.apix.registry.entity.ServiceReplacementEntity;
import org.botstandards.apix.registry.entity.ServiceVersionEntity;

import java.time.Instant;
import java.util.*;

@ApplicationScoped
public class RegistryService {

    @Inject
    EntityManager em;

    @Inject
    ClockService clockService;

    @Transactional
    public ServiceEntity register(BsmPayload payload) {
        long existing = ((Number) em.createNativeQuery(
                "SELECT COUNT(*) FROM services WHERE endpoint_url = :url")
                .setParameter("url", payload.endpoint())
                .getSingleResult()).longValue();
        if (existing > 0) {
            throw new WebApplicationException(
                    Response.status(409)
                            .entity(Map.of("message", "A service with this endpoint is already registered"))
                            .build());
        }

        Instant now = Instant.now();
        ServiceEntity service = new ServiceEntity();
        service.id = UUID.randomUUID();
        service.endpointUrl = payload.endpoint();
        service.bsmPayload = payload;
        service.olevel = OLevel.UNVERIFIED;
        service.livenessStatus = LivenessStatus.PENDING;
        service.registeredAt = now;
        service.registrantOrgType = payload.registrantOrgType() != null ? payload.registrantOrgType() : OrgType.INDIVIDUAL;
        service.serviceStage = payload.serviceStage() != null ? payload.serviceStage() : ServiceStage.DEVELOPMENT;
        service.registryStatus = RegistryStatus.ACTIVE;
        service.version = 1;
        service.locked = payload.locked();
        service.sunsetAt = payload.sunsetAt();
        service.migrationGuideUrl = payload.migrationGuideUrl();

        em.persist(service);
        em.persist(snapshot(service, ChangeType.REGISTERED, now));

        if (payload.replacesServiceIds() != null) {
            for (UUID deprecatedId : payload.replacesServiceIds()) {
                upsertReplacement(deprecatedId, service.id, now);
            }
        }

        return service;
    }

    public ServiceEntity requireById(UUID id) {
        ServiceEntity e = em.find(ServiceEntity.class, id);
        if (e == null) throw new NotFoundException("Service not found: " + id);
        return e;
    }

    @Transactional
    public ServiceEntity setOLevel(UUID id, OLevel level) {
        ServiceEntity e = requireById(id);
        e.olevel = level;
        e.lastUpdatedAt = Instant.now();
        return e;
    }

    @Transactional
    public ServiceEntity patch(UUID id, ServicePatchRequest req) {
        ServiceEntity service = requireById(id);

        // ── IoT validations ───────────────────────────────────────────────────
        Instant now = clockService.now();
        requireFutureSunset(req.sunsetAt(), now);
        requireSunsetBeforeLockRelease(req.locked(), service.locked,
                req.sunsetAt() != null ? req.sunsetAt() : service.sunsetAt);
        requireSunsetPassedBeforeDecommission(req.serviceStage(), service.sunsetAt, now);

        if (req.replacesServiceIds() != null) {
            for (UUID deprecatedId : req.replacesServiceIds()) {
                validateReplacementTarget(deprecatedId);
            }
        }

        ServiceStage stageBefore = service.serviceStage;
        Boolean lockedBefore = service.locked;

        applyPatch(service, req);

        ChangeType changeType = detectChangeType(stageBefore, lockedBefore, req);
        service.version++;
        service.lastUpdatedAt = Instant.now();

        em.persist(snapshot(service, changeType, service.lastUpdatedAt));

        if (req.replacesServiceIds() != null) {
            syncReplacements(service.id, req.replacesServiceIds(), service.lastUpdatedAt);
        }

        return service;
    }

    @SuppressWarnings("unchecked")
    public List<ServiceEntity> search(String capability, String stage) {
        ServiceStage targetStage = stage != null
                ? ServiceStage.valueOf(stage.toUpperCase())
                : ServiceStage.PRODUCTION;

        return em.createNativeQuery(
                "SELECT s.* FROM services s " +
                "WHERE s.bsm_payload @> jsonb_build_object('capabilities', jsonb_build_array(:cap)) " +
                "AND s.registry_status = 'ACTIVE' " +
                "AND s.service_stage = :stage",
                ServiceEntity.class)
                .setParameter("cap", capability)
                .setParameter("stage", targetStage.name())
                .getResultList();
    }

    @SuppressWarnings("unchecked")
    public ReplacementsResponse getReplacements(UUID deprecatedId, String minOLevelStr) {
        ServiceEntity deprecated = requireById(deprecatedId);

        // Locked services that are not yet DECOMMISSIONED block replacement discovery
        if (Boolean.TRUE.equals(deprecated.locked) && deprecated.serviceStage != ServiceStage.DECOMMISSIONED) {
            return new ReplacementsResponse(deprecated.locked, deprecated.sunsetAt, List.of());
        }

        List<ServiceEntity> candidates = em.createNativeQuery(
                "SELECT s.* FROM services s " +
                "INNER JOIN service_replacements sr ON sr.replacement_service_id = s.id " +
                "WHERE sr.deprecated_service_id = :depId AND s.registry_status = 'ACTIVE'",
                ServiceEntity.class)
                .setParameter("depId", deprecatedId)
                .getResultList();

        OLevel minOLevel = minOLevelStr != null ? OLevel.valueOf(minOLevelStr) : null;

        return new ReplacementsResponse(
                deprecated.locked,
                deprecated.sunsetAt,
                candidates.stream()
                        .filter(c -> minOLevel == null || c.olevel.ordinal() >= minOLevel.ordinal())
                        .sorted(Comparator.comparingInt((ServiceEntity c) -> c.olevel.ordinal()).reversed())
                        .map(c -> new ReplacementsResponse.Candidate(
                                c.id, c.bsmPayload.name(), c.endpointUrl, c.olevel, c.serviceStage))
                        .toList()
        );
    }

    public List<VersionHistoryEntry> getHistory(UUID id) {
        requireById(id);

        List<ServiceVersionEntity> versions = em.createQuery(
                "FROM ServiceVersionEntity v WHERE v.serviceId = :id ORDER BY v.version ASC",
                ServiceVersionEntity.class)
                .setParameter("id", id)
                .getResultList();

        List<VersionHistoryEntry> result = new ArrayList<>();
        for (int i = 0; i < versions.size(); i++) {
            result.add(toHistoryEntry(versions.get(i), i > 0 ? versions.get(i - 1) : null));
        }
        return result;
    }

    // ── Package-private static helpers — unit-testable without DB ─────────────

    static ChangeType detectChangeType(ServiceStage stageBefore, Boolean lockedBefore, ServicePatchRequest req) {
        if (req.locked() != null && Boolean.FALSE.equals(req.locked()) && Boolean.TRUE.equals(lockedBefore)) {
            return ChangeType.LOCK_RELEASED;
        }
        if (req.serviceStage() != null && req.serviceStage() != stageBefore) {
            if (req.sunsetAt() != null && req.serviceStage() == ServiceStage.DEPRECATED) {
                return ChangeType.SUNSET_DECLARED;
            }
            return ChangeType.STAGE_CHANGED;
        }
        if (req.replacesServiceIds() != null) {
            return ChangeType.REPLACEMENT_DECLARED;
        }
        return ChangeType.BSM_UPDATED;
    }

    // sunsetAt must be strictly after now — exclusive boundary means now==sunsetAt is already "past"
    static void requireFutureSunset(Instant sunsetAt, Instant now) {
        if (sunsetAt != null && !sunsetAt.isAfter(now)) {
            throw unprocessable("sunset_at must be a future moment");
        }
    }

    static void requireSunsetBeforeLockRelease(Boolean newLocked, Boolean existingLocked, Instant effectiveSunsetAt) {
        if (Boolean.FALSE.equals(newLocked) && Boolean.TRUE.equals(existingLocked) && effectiveSunsetAt == null) {
            throw unprocessable("sunset_at required before lock release");
        }
    }

    // Exclusive boundary: now >= sunsetAt means the sunset moment has arrived
    static void requireSunsetPassedBeforeDecommission(ServiceStage newStage, Instant sunsetAt, Instant now) {
        if (newStage == ServiceStage.DECOMMISSIONED) {
            if (sunsetAt == null || now.isBefore(sunsetAt)) {
                throw unprocessable("sunset_at has not passed");
            }
        }
    }

    // ── Private helpers ───────────────────────────────────────────────────────

    private void validateReplacementTarget(UUID deprecatedId) {
        ServiceEntity target = em.find(ServiceEntity.class, deprecatedId);
        if (target == null) throw unprocessable("target service not found");
        if (target.serviceStage != ServiceStage.DEPRECATED && target.serviceStage != ServiceStage.DECOMMISSIONED) {
            throw unprocessable("target service is not deprecated");
        }
        if (target.serviceStage == ServiceStage.DEPRECATED && Boolean.TRUE.equals(target.locked)) {
            throw unprocessable("target service lock has not been released");
        }
    }

    private void applyPatch(ServiceEntity e, ServicePatchRequest r) {
        BsmPayload old = e.bsmPayload;
        e.bsmPayload = new BsmPayload(
                r.name() != null ? r.name() : old.name(),
                r.description() != null ? r.description() : old.description(),
                r.endpoint() != null ? r.endpoint() : old.endpoint(),
                r.capabilities() != null ? r.capabilities() : old.capabilities(),
                r.registrantEmail() != null ? r.registrantEmail() : old.registrantEmail(),
                r.registrantName() != null ? r.registrantName() : old.registrantName(),
                r.registrantJurisdiction() != null ? r.registrantJurisdiction() : old.registrantJurisdiction(),
                r.registrantOrgType() != null ? r.registrantOrgType() : old.registrantOrgType(),
                r.registrantLei() != null ? r.registrantLei() : old.registrantLei(),
                r.openApiSpecUrl() != null ? r.openApiSpecUrl() : old.openApiSpecUrl(),
                r.mcpSpecUrl() != null ? r.mcpSpecUrl() : old.mcpSpecUrl(),
                r.policyUrl() != null ? r.policyUrl() : old.policyUrl(),
                r.securityContactUrl() != null ? r.securityContactUrl() : old.securityContactUrl(),
                r.pricing() != null ? r.pricing() : old.pricing(),
                r.bsmVersion() != null ? r.bsmVersion() : old.bsmVersion(),
                r.serviceStage() != null ? r.serviceStage() : old.serviceStage(),
                r.locked() != null ? r.locked() : old.locked(),
                r.sunsetAt() != null ? r.sunsetAt() : old.sunsetAt(),
                r.migrationGuideUrl() != null ? r.migrationGuideUrl() : old.migrationGuideUrl(),
                r.replacesServiceIds() != null ? r.replacesServiceIds() : old.replacesServiceIds()
        );
        if (r.endpoint() != null) e.endpointUrl = r.endpoint();
        if (r.registrantOrgType() != null) e.registrantOrgType = r.registrantOrgType();
        if (r.serviceStage() != null) e.serviceStage = r.serviceStage();
        if (r.locked() != null) e.locked = r.locked();
        if (r.sunsetAt() != null) e.sunsetAt = r.sunsetAt();
        if (r.migrationGuideUrl() != null) e.migrationGuideUrl = r.migrationGuideUrl();
    }

    private ServiceVersionEntity snapshot(ServiceEntity e, ChangeType changeType, Instant at) {
        ServiceVersionEntity v = new ServiceVersionEntity();
        v.id = UUID.randomUUID();
        v.serviceId = e.id;
        v.version = e.version;
        v.recordedAt = at;
        v.changeType = changeType;
        v.bsmPayload = e.bsmPayload;
        v.registrantOrgType = e.registrantOrgType;
        v.olevel = e.olevel;
        v.serviceStage = e.serviceStage;
        v.registryStatus = e.registryStatus;
        return v;
    }

    private void syncReplacements(UUID providerId, List<UUID> deprecatedIds, Instant now) {
        em.createNativeQuery("DELETE FROM service_replacements WHERE replacement_service_id = :id")
                .setParameter("id", providerId)
                .executeUpdate();
        for (UUID deprecatedId : deprecatedIds) {
            upsertReplacement(deprecatedId, providerId, now);
        }
    }

    private void upsertReplacement(UUID deprecatedId, UUID replacementId, Instant now) {
        long count = ((Number) em.createNativeQuery(
                "SELECT COUNT(*) FROM service_replacements " +
                "WHERE deprecated_service_id = :dep AND replacement_service_id = :rep")
                .setParameter("dep", deprecatedId)
                .setParameter("rep", replacementId)
                .getSingleResult()).longValue();
        if (count == 0) {
            // Record event in the deprecated service's timeline
            ServiceEntity deprecated = em.find(ServiceEntity.class, deprecatedId);
            if (deprecated != null) {
                deprecated.version++;
                deprecated.lastUpdatedAt = now;
                em.persist(snapshot(deprecated, ChangeType.REPLACEMENT_DECLARED, now));
            }
            ServiceReplacementEntity r = new ServiceReplacementEntity();
            r.id = UUID.randomUUID();
            r.deprecatedServiceId = deprecatedId;
            r.replacementServiceId = replacementId;
            r.declaredAt = now;
            em.persist(r);
        }
    }

    private VersionHistoryEntry toHistoryEntry(ServiceVersionEntity cur, ServiceVersionEntity prev) {
        String previousValue = null;
        String newValue = null;
        switch (cur.changeType) {
            case STAGE_CHANGED, SUNSET_DECLARED -> {
                previousValue = prev != null ? prev.serviceStage.name() : null;
                newValue = cur.serviceStage.name();
            }
            case LOCK_RELEASED -> {
                previousValue = prev != null && prev.bsmPayload.locked() != null
                        ? prev.bsmPayload.locked().toString() : null;
                newValue = cur.bsmPayload.locked() != null ? cur.bsmPayload.locked().toString() : null;
            }
            default -> {}
        }
        return new VersionHistoryEntry(cur.id, cur.changeType.name(), previousValue, newValue,
                cur.recordedAt.toString());
    }

    private static WebApplicationException unprocessable(String message) {
        return new WebApplicationException(
                Response.status(422).entity(Map.of("message", message)).build());
    }
}
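The static lifecycle validators in RegistryService encode exclusive boundaries (a sunset equal to "now" already counts as past) plus a `Boolean.TRUE.equals` null guard on the lock flag. A minimal standalone sketch of the same java.time and Boolean logic — class and method names here are illustrative, not part of the codebase:

```java
import java.time.Instant;

public class SunsetBoundaryDemo {
    // Mirrors requireFutureSunset: strictly-after check, so sunsetAt == now is rejected
    static boolean isFutureSunset(Instant sunsetAt, Instant now) {
        return sunsetAt != null && sunsetAt.isAfter(now);
    }

    // Mirrors requireSunsetPassedBeforeDecommission: now >= sunsetAt means the moment arrived
    static boolean sunsetHasPassed(Instant sunsetAt, Instant now) {
        return sunsetAt != null && !now.isBefore(sunsetAt);
    }

    // Mirrors the lock-release precondition: only a currently-locked (TRUE, not null)
    // service being unlocked without an effective sunset is blocked
    static boolean lockReleaseBlocked(Boolean newLocked, Boolean existingLocked, Instant effectiveSunsetAt) {
        return Boolean.FALSE.equals(newLocked)
                && Boolean.TRUE.equals(existingLocked)   // null-safe: a never-locked service passes
                && effectiveSunsetAt == null;
    }

    public static void main(String[] args) {
        Instant now = Instant.parse("2030-01-01T00:00:00Z");
        System.out.println(isFutureSunset(now, now));                 // false — equal is not future
        System.out.println(sunsetHasPassed(now, now));                // true — boundary moment counts
        System.out.println(lockReleaseBlocked(false, null, null));    // false — null locked is not TRUE
        System.out.println(lockReleaseBlocked(false, true, null));    // true — locked, no sunset declared
    }
}
```

The third helper shows why the commit swaps `!Boolean.FALSE.equals(existingLocked)` for `Boolean.TRUE.equals(existingLocked)`: with the old form a service whose `locked` column was never set (null) would also be treated as locked and blocked from the transition.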
@@ -0,0 +1,35 @@
# ── Jandex — include apix-common in the index so bean validation constraints work ──
quarkus.index-dependency.apix-common.group-id=org.botstandards
quarkus.index-dependency.apix-common.artifact-id=apix-common

# ── Datasource ────────────────────────────────────────────────────────────────
quarkus.datasource.db-kind=postgresql
quarkus.datasource.jdbc.url=${QUARKUS_DATASOURCE_JDBC_URL:jdbc:postgresql://localhost:5432/apix}
quarkus.datasource.username=${QUARKUS_DATASOURCE_USERNAME:apix}
quarkus.datasource.password=${QUARKUS_DATASOURCE_PASSWORD:apix}

# ── ORM ───────────────────────────────────────────────────────────────────────
# Liquibase owns schema creation; Hibernate must not touch DDL
quarkus.hibernate-orm.database.generation=none

# ── Liquibase ─────────────────────────────────────────────────────────────────
quarkus.liquibase.migrate-at-start=true
quarkus.liquibase.change-log=db/changelog/db.changelog-master.xml

# ── HTTP ──────────────────────────────────────────────────────────────────────
quarkus.http.port=8180

# ── Security — API key for write endpoints ───────────────────────────────────
apix.api-key=${APIX_API_KEY:dev-insecure-key-change-in-prod}

# ── Verification ──────────────────────────────────────────────────────────────
apix.gleif.api-url=${GLEIF_API_URL:https://api.gleif.org/api/v1}
apix.opencorporates.api-key=${OPENCORPORATES_API_KEY:}
apix.sanctions.cache-path=${SANCTIONS_CACHE_PATH:./sanctions-cache}

# ── Logging ───────────────────────────────────────────────────────────────────
quarkus.log.level=${LOG_LEVEL:DEBUG}
quarkus.log.console.json=false

# ── Health ────────────────────────────────────────────────────────────────────
quarkus.smallrye-health.root-path=/q/health
@@ -0,0 +1,36 @@
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
            http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-4.27.xsd">

    <changeSet id="001" author="apix">
        <createTable tableName="services">
            <column name="id" type="uuid" defaultValueComputed="gen_random_uuid()">
                <constraints primaryKey="true" nullable="false"/>
            </column>
            <column name="endpoint_url" type="text">
                <constraints nullable="false" unique="true" uniqueConstraintName="uq_services_endpoint_url"/>
            </column>
            <!-- Full BSM document stored as JSONB for flexible querying -->
            <column name="bsm_payload" type="jsonb">
                <constraints nullable="false"/>
            </column>
            <column name="olevel" type="varchar(50)" defaultValue="UNVERIFIED">
                <constraints nullable="false"/>
            </column>
            <column name="slevel" type="varchar(50)"/>
            <column name="liveness_status" type="varchar(50)" defaultValue="PENDING">
                <constraints nullable="false"/>
            </column>
            <column name="registered_at" type="timestamptz" defaultValueComputed="now()">
                <constraints nullable="false"/>
            </column>
        </createTable>

        <!-- GIN index on bsm_payload for capability search (@> operator) -->
        <sql>CREATE INDEX idx_services_bsm_payload_gin ON services USING gin (bsm_payload);</sql>
    </changeSet>

</databaseChangeLog>
@@ -0,0 +1,18 @@
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
            http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-4.27.xsd">

    <changeSet id="002" author="apix">
        <addColumn tableName="services">
            <column name="verification_status" type="varchar(50)"/>
            <column name="olevel_checked_at" type="timestamptz"/>
            <!-- null = not yet screened; false = hit; true = clear -->
            <column name="sanctions_cleared" type="boolean"/>
            <column name="gleif_lei" type="text"/>
        </addColumn>
    </changeSet>

</databaseChangeLog>
@@ -0,0 +1,17 @@
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
            http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-4.27.xsd">

    <changeSet id="003" author="apix">
        <addColumn tableName="services">
            <column name="last_checked_at" type="timestamptz"/>
            <column name="uptime_30d_percent" type="numeric(5,2)"/>
            <column name="avg_response_ms" type="integer"/>
            <column name="consecutive_failures" type="integer" defaultValueNumeric="0"/>
        </addColumn>
    </changeSet>

</databaseChangeLog>
@@ -0,0 +1,40 @@
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
            http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-4.27.xsd">

    <changeSet id="004" author="apix">
        <addColumn tableName="services">
            <!-- Registrant type — top-level column, not buried in JSONB, for direct filtering -->
            <column name="registrant_org_type" type="varchar(50)" defaultValue="INDIVIDUAL">
                <constraints nullable="false"/>
            </column>
            <!-- Registrant-declared lifecycle stage — controls search visibility -->
            <column name="service_stage" type="varchar(50)" defaultValue="DEVELOPMENT">
                <constraints nullable="false"/>
            </column>
            <!-- BSF-controlled administrative state — always applied before stage filter -->
            <column name="registry_status" type="varchar(50)" defaultValue="ACTIVE">
                <constraints nullable="false"/>
            </column>
            <!-- Monotonically increasing per-service counter; links to service_versions -->
            <column name="version" type="integer" defaultValueNumeric="1">
                <constraints nullable="false"/>
            </column>
            <!-- Timestamp of last non-liveness update (liveness writes go to 003 columns) -->
            <column name="last_updated_at" type="timestamptz"/>
        </addColumn>

        <!-- service_stage is the primary search filter dimension after capability -->
        <createIndex tableName="services" indexName="idx_services_service_stage">
            <column name="service_stage"/>
        </createIndex>
        <!-- registry_status is applied to every public query; index keeps it cheap -->
        <createIndex tableName="services" indexName="idx_services_registry_status">
            <column name="registry_status"/>
        </createIndex>
    </changeSet>

</databaseChangeLog>
@@ -0,0 +1,64 @@
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                            http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-4.27.xsd">

    <changeSet id="005" author="apix">
        <!-- Append-only snapshot table. Rows are never updated or deleted.
             Each row captures the complete service state at the moment of a meaningful change.
             Diffs are computed from adjacent versions at read time, not stored. -->
        <createTable tableName="service_versions">
            <column name="id" type="uuid" defaultValueComputed="gen_random_uuid()">
                <constraints primaryKey="true" nullable="false"/>
            </column>
            <column name="service_id" type="uuid">
                <constraints nullable="false"
                             foreignKeyName="fk_sv_service_id"
                             references="services(id)"/>
            </column>
            <column name="version" type="integer">
                <constraints nullable="false"/>
            </column>
            <column name="recorded_at" type="timestamptz" defaultValueComputed="now()">
                <constraints nullable="false"/>
            </column>
            <!-- REGISTERED | BSM_UPDATED | ORG_TYPE_CHANGED | OLEVEL_CHANGED |
                 STAGE_CHANGED | OWNERSHIP_TRANSFERRED | REGISTRY_STATUS_CHANGED -->
            <column name="change_type" type="varchar(50)">
                <constraints nullable="false"/>
            </column>
            <!-- Full BSM at this version — enables complete before/after comparison -->
            <column name="bsm_payload" type="jsonb">
                <constraints nullable="false"/>
            </column>
            <column name="registrant_org_type" type="varchar(50)">
                <constraints nullable="false"/>
            </column>
            <column name="olevel" type="varchar(50)">
                <constraints nullable="false"/>
            </column>
            <column name="service_stage" type="varchar(50)">
                <constraints nullable="false"/>
            </column>
            <column name="registry_status" type="varchar(50)">
                <constraints nullable="false"/>
            </column>
            <!-- Optional human-readable context for the change, e.g.
                 "Incorporated as GmbH; transitioning from individual registration" -->
            <column name="note" type="text"/>
        </createTable>

        <addUniqueConstraint
                tableName="service_versions"
                columnNames="service_id, version"
                constraintName="uq_sv_service_version"/>

        <!-- Primary access pattern: all versions for a given service, ordered by version -->
        <createIndex tableName="service_versions" indexName="idx_sv_service_id">
            <column name="service_id"/>
        </createIndex>
    </changeSet>

</databaseChangeLog>
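The table comment states that diffs are computed from adjacent versions at read time rather than stored. A minimal sketch of that read-time comparison over a hypothetical subset of the snapshot columns (the `Snapshot` record and field choice are illustrative assumptions, not the registry's actual history DTO):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class VersionDiffSketch {
    // Hypothetical per-version snapshot, mirroring a few columns stored above.
    public record Snapshot(int version, String olevel, String serviceStage, String registryStatus) {}

    // Field-by-field comparison of two adjacent snapshots; nothing is persisted,
    // the map of {field -> [old, new]} is built on demand when history is read.
    public static Map<String, String[]> diff(Snapshot older, Snapshot newer) {
        Map<String, String[]> changes = new LinkedHashMap<>();
        if (!older.olevel().equals(newer.olevel()))
            changes.put("olevel", new String[]{older.olevel(), newer.olevel()});
        if (!older.serviceStage().equals(newer.serviceStage()))
            changes.put("serviceStage", new String[]{older.serviceStage(), newer.serviceStage()});
        if (!older.registryStatus().equals(newer.registryStatus()))
            changes.put("registryStatus", new String[]{older.registryStatus(), newer.registryStatus()});
        return changes;
    }
}
```

Storing full snapshots keeps writes simple and append-only; the cost of the comparison is paid only by history readers.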
@@ -0,0 +1,33 @@
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                            http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-4.27.xsd">

    <changeSet id="006" author="apix">
        <addColumn tableName="services">
            <!-- null  = lock concept not applicable (non-IoT service)
                 true  = device locked to this template owner; migration blocked
                 false = lock released; device owner may migrate freely
                 Default null so existing records are not incorrectly treated as locked. -->
            <column name="locked" type="boolean"/>

            <!-- ISO date when the service goes permanently offline.
                 Set together with service_stage = DEPRECATED. -->
            <column name="sunset_date" type="date"/>

            <!-- Provider-hosted migration documentation URL. -->
            <column name="migration_guide_url" type="text"/>
        </addColumn>

        <!-- Targeted index: only rows with a sunset date (small, DEPRECATED subset) -->
        <sql>CREATE INDEX idx_services_sunset_date ON services (sunset_date)
             WHERE sunset_date IS NOT NULL;</sql>

        <!-- Targeted index: only rows where locked is explicitly set -->
        <sql>CREATE INDEX idx_services_locked ON services (locked)
             WHERE locked IS NOT NULL;</sql>
    </changeSet>

</databaseChangeLog>
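Because `locked` is deliberately tri-state (null / true / false), null-handling in Java matters. The commit head notes that `requireSunsetBeforeLockRelease` was fixed from `!Boolean.FALSE.equals(...)` to `Boolean.TRUE.equals(...)`; a short sketch of why (method names here are illustrative, not the registry's actual validator):

```java
public class LockGuardSketch {
    // locked: null = not applicable, true = locked, false = released (changeset 006 above).
    // Boolean.TRUE.equals is null-safe: null means "not locked".
    public static boolean isLocked(Boolean locked) {
        return Boolean.TRUE.equals(locked);
    }

    // The pre-fix form: !Boolean.FALSE.equals(null) evaluates to true,
    // so every non-IoT service (locked == null) was wrongly treated as locked.
    public static boolean buggyIsLocked(Boolean locked) {
        return !Boolean.FALSE.equals(locked);
    }
}
```

The two forms agree for explicit true/false but diverge exactly on the null case the schema comment calls out.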
@@ -0,0 +1,49 @@
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                            http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-4.27.xsd">

    <changeSet id="007" author="apix">
        <!-- Declared compatibility index.
             Replacement providers list deprecated service IDs in BSM.replacesServiceIds[].
             The registry extracts those declarations here for efficient lookup.
             These rows are derived from BSM content and re-synced on every BSM update —
             never edit directly. -->
        <createTable tableName="service_replacements">
            <column name="id" type="uuid" defaultValueComputed="gen_random_uuid()">
                <constraints primaryKey="true" nullable="false"/>
            </column>
            <!-- The deprecated service being replaced -->
            <column name="deprecated_service_id" type="uuid">
                <constraints nullable="false"
                             foreignKeyName="fk_sr_deprecated"
                             references="services(id)"/>
            </column>
            <!-- The service declaring it can replace the deprecated one -->
            <column name="replacement_service_id" type="uuid">
                <constraints nullable="false"
                             foreignKeyName="fk_sr_replacement"
                             references="services(id)"/>
            </column>
            <column name="declared_at" type="timestamptz" defaultValueComputed="now()">
                <constraints nullable="false"/>
            </column>
            <!-- Optional human-readable compatibility notes from the replacement provider -->
            <column name="compatibility_notes" type="text"/>
        </createTable>

        <!-- A replacement service can declare compatibility with each deprecated service once -->
        <addUniqueConstraint
                tableName="service_replacements"
                columnNames="deprecated_service_id, replacement_service_id"
                constraintName="uq_sr_deprecated_replacement"/>

        <!-- Primary query path: GET /services/{deprecatedId}/replacements -->
        <createIndex tableName="service_replacements" indexName="idx_sr_deprecated_service_id">
            <column name="deprecated_service_id"/>
        </createIndex>
    </changeSet>

</databaseChangeLog>
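The table comment says these rows are derived from `BSM.replacesServiceIds[]` and re-synced on every BSM update. One common shape for such a re-sync is a set difference between the stored declarations and the IDs currently in the BSM; a hedged sketch (the `SyncPlan` record and `plan` method are assumptions for illustration, not the registry's actual sync code):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

public class ReplacementSyncSketch {
    public record SyncPlan(Set<UUID> toInsert, Set<UUID> toDelete) {}

    // Reconcile stored service_replacements rows with the IDs currently declared
    // in the BSM: insert newly declared IDs, delete retracted ones, leave the
    // unchanged intersection alone (preserving its declared_at timestamps).
    public static SyncPlan plan(Set<UUID> stored, Set<UUID> declaredInBsm) {
        Set<UUID> toInsert = new HashSet<>(declaredInBsm);
        toInsert.removeAll(stored);
        Set<UUID> toDelete = new HashSet<>(stored);
        toDelete.removeAll(declaredInBsm);
        return new SyncPlan(toInsert, toDelete);
    }
}
```

Computing a delta rather than delete-all/reinsert keeps `declared_at` stable for declarations that did not change.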
@@ -0,0 +1,24 @@
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                            http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-4.27.xsd">

    <changeSet id="008" author="apix">
        <!-- Drop old date-only index before renaming the column -->
        <sql>DROP INDEX IF EXISTS idx_services_sunset_date;</sql>

        <!-- Rename and retype: date → timestamptz (exclusive boundary, UTC).
             Existing date values are cast to midnight UTC of that date. The explicit
             AT TIME ZONE 'UTC' makes the result independent of the session TimeZone;
             a bare ::timestamptz cast would use whatever zone the session happens to run in. -->
        <sql>ALTER TABLE services RENAME COLUMN sunset_date TO sunset_at;</sql>
        <sql>ALTER TABLE services
             ALTER COLUMN sunset_at TYPE timestamptz
             USING (sunset_at::timestamp AT TIME ZONE 'UTC');</sql>

        <!-- Recreate targeted index under new column name -->
        <sql>CREATE INDEX idx_services_sunset_at ON services (sunset_at)
             WHERE sunset_at IS NOT NULL;</sql>
    </changeSet>

</databaseChangeLog>
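Changeset 008's comment declares `sunset_at` an exclusive boundary, and the BDD step "the sunset_date of ... has passed" advances the test clock exactly to `sunsetAt` and expects the sunset to count as arrived. A sketch of the corresponding predicate (the method name is illustrative; the registry's validator may differ):

```java
import java.time.Instant;

public class SunsetBoundarySketch {
    // Exclusive boundary: at now == sunsetAt the sunset has already arrived,
    // so "passed" means now >= sunsetAt, i.e. !now.isBefore(sunsetAt).
    // A null sunsetAt means no sunset is scheduled.
    public static boolean sunsetHasPassed(Instant now, Instant sunsetAt) {
        return sunsetAt != null && !now.isBefore(sunsetAt);
    }
}
```

Writing the check as `!isBefore` rather than `isAfter` is what makes the boundary instant itself count as passed.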
@@ -0,0 +1,17 @@
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                            http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-4.27.xsd">

    <include file="changes/001-initial-schema.xml" relativeToChangelogFile="true"/>
    <include file="changes/002-verification-columns.xml" relativeToChangelogFile="true"/>
    <include file="changes/003-liveness-metrics.xml" relativeToChangelogFile="true"/>
    <include file="changes/004-org-stage-status.xml" relativeToChangelogFile="true"/>
    <include file="changes/005-version-history.xml" relativeToChangelogFile="true"/>
    <include file="changes/006-sunset-lock.xml" relativeToChangelogFile="true"/>
    <include file="changes/007-service-replacements.xml" relativeToChangelogFile="true"/>
    <include file="changes/008-sunset-at.xml" relativeToChangelogFile="true"/>

</databaseChangeLog>
@@ -0,0 +1,30 @@
package org.botstandards.apix.registry.bdd;

import io.cucumber.core.cli.Main;
import io.quarkus.test.junit.QuarkusTest;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

/**
 * Runs all IoT transition BDD scenarios inside the Quarkus test context.
 *
 * <p>{@code @QuarkusTest} starts the server (DevServices PostgreSQL, port 8181).
 * {@code Main.run()} invokes the Cucumber runtime directly — this bypasses the JUnit Platform
 * Cucumber engine, so junit-platform.properties tag filters do not apply here.
 */
@QuarkusTest
public class IotTransitionCucumberTest {

    @Test
    public void run() {
        byte exitCode = Main.run(
                "--glue", "org.botstandards.apix.registry.bdd",
                "--plugin", "pretty",
                "--plugin", "json:target/cucumber-report.json",
                "--plugin", "io.qameta.allure.cucumber7jvm.AllureCucumber7Jvm",
                "classpath:features/iot-transition"
        );
        assertEquals(0, exitCode, "One or more Cucumber scenarios failed — check test output for details");
    }
}
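The commit head notes that `sunsetAt` is truncated to microseconds in the BDD steps so the value round-trips through Postgres unchanged: `timestamptz` stores microsecond precision, while a Java `Instant` carries nanoseconds. A small demonstration of the truncation the step definitions below rely on (the wrapper method name is illustrative):

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class MicrosTruncationSketch {
    // Postgres timestamptz keeps at most microseconds; truncating before sending
    // means the stored value equals the in-memory value, so later equality
    // comparisons against the advanced test clock hold.
    public static Instant forPostgres(Instant t) {
        return t.truncatedTo(ChronoUnit.MICROS);
    }
}
```

Without the truncation, an `Instant` with sub-microsecond nanos would come back from the database rounded, and `clock == storedSunsetAt` comparisons would fail intermittently.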
@@ -0,0 +1,786 @@
package org.botstandards.apix.registry.bdd;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import io.quarkus.arc.Arc;
import io.restassured.response.Response;
import io.restassured.specification.RequestSpecification;
import org.botstandards.apix.registry.service.ClockService;

import java.time.Duration;
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.*;

import static io.restassured.RestAssured.given;
import static io.restassured.http.ContentType.JSON;
import static org.assertj.core.api.Assertions.assertThat;
import static org.hamcrest.Matchers.*;

/**
 * BDD step definitions for the IoT device cloud sunset transition feature group.
 *
 * <p>Cucumber creates a fresh instance per scenario, so instance fields are scenario-scoped.
 * The {@code actionPhase} flag separates Given (setup) from Then (assertion) for steps
 * that are reused in both positions with identical text.
 */
public class IotTransitionSteps {

    private static final String API_KEY_HEADER = "X-Api-Key";
    private static final String API_KEY = "test-api-key";

    // ── Per-scenario state ────────────────────────────────────────────────────

    private final Map<String, UUID> serviceIds = new HashMap<>();
    /** The "primary" service referred to as {id} or "the service" in step text. */
    private UUID currentServiceId;
    /** Most recent HTTP response from a When step. */
    private Response lastResponse;
    /** Collected responses for multi-call scenarios (cache, no-shared-headers). */
    private final List<Response> capturedResponses = new ArrayList<>();
    /**
     * Flipped to true by the first @When step. Steps with identical Given/Then text
     * switch from setup behaviour (PATCH) to assertion behaviour (GET + verify).
     */
    private boolean actionPhase = false;

    // ── Helpers ───────────────────────────────────────────────────────────────

    private RequestSpecification asTemplateOwner() {
        return given().contentType(JSON).header(API_KEY_HEADER, API_KEY);
    }

    private String defaultEndpointFor(String name) {
        return "https://" + name.toLowerCase().replace(" ", "") + ".example";
    }

    private Map<String, Object> basePayload(String name) {
        Map<String, Object> p = new LinkedHashMap<>();
        p.put("name", name);
        p.put("description", name + " test service");
        p.put("endpoint", defaultEndpointFor(name));
        p.put("capabilities", List.of("device.telemetry"));
        p.put("registrantEmail", "test@example.com");
        p.put("registrantName", "Test Org");
        p.put("registrantJurisdiction", "DE");
        p.put("registrantOrgType", "COMMERCIAL");
        p.put("bsmVersion", "1.0");
        return p;
    }

    /** POST /services; asserts 201; returns the new service ID. */
    private UUID createService(Map<String, Object> payload) {
        Response r = asTemplateOwner().body(payload).post("/services");
        r.then().statusCode(201);
        return UUID.fromString(r.jsonPath().getString("id"));
    }

    private void store(String name, UUID id) {
        serviceIds.put(name, id);
        currentServiceId = id;
    }

    private static String futureSunsetAt(int days) {
        return Instant.now().plus(Duration.ofDays(days)).toString();
    }

    private static String pastSunsetAt(int days) {
        return Instant.now().minus(Duration.ofDays(days)).toString();
    }

    // ── Given — service creation ──────────────────────────────────────────────

    @Given("a registered service {string} with endpoint {string}")
    public void aRegisteredServiceWithEndpoint(String name, String endpoint) {
        Map<String, Object> p = basePayload(name);
        p.put("endpoint", endpoint);
        p.put("serviceStage", "PRODUCTION");
        store(name, createService(p));
    }

    @Given("the service has capability {string}")
    public void theServiceHasCapability(String capability) {
        asTemplateOwner()
                .body(Map.of("capabilities", List.of(capability)))
                .patch("/services/" + currentServiceId)
                .then().statusCode(200);
    }

    @Given("the service is in stage {string}")
    public void theServiceIsInStage(String stage) {
        asTemplateOwner()
                .body(Map.of("serviceStage", stage))
                .patch("/services/" + currentServiceId)
                .then().statusCode(200);
    }

    /**
     * Dual-use: before any When step (actionPhase=false) → PATCH to set the locked value;
     * after a When step (actionPhase=true) → GET and verify the locked value.
     */
    @Given("the service has locked set to {word}")
    public void theServiceHasLockedSetTo(String locked) {
        boolean val = Boolean.parseBoolean(locked);
        if (!actionPhase) {
            asTemplateOwner()
                    .body(Map.of("locked", val))
                    .patch("/services/" + currentServiceId)
                    .then().statusCode(200);
        } else {
            given().get("/services/" + currentServiceId)
                    .then().statusCode(200)
                    .body("locked", equalTo(val));
        }
    }

    @Given("a deprecated service {string} with locked set to false")
    public void aDeprecatedServiceLockedFalse(String name) {
        Map<String, Object> p = basePayload(name);
        p.put("serviceStage", "DEPRECATED");
        p.put("locked", false);
        p.put("sunsetAt", futureSunsetAt(90));
        store(name, createService(p));
    }

    @Given("a deprecated service {string} with locked set to true")
    public void aDeprecatedServiceLockedTrue(String name) {
        Map<String, Object> p = basePayload(name);
        p.put("serviceStage", "DEPRECATED");
        p.put("locked", true);
        p.put("sunsetAt", futureSunsetAt(90));
        store(name, createService(p));
    }

    @Given("a deprecated service {string} with locked set to false and a sunset_date set")
    public void aDeprecatedServiceLockedFalseWithSunsetDate(String name) {
        aDeprecatedServiceLockedFalse(name);
    }

    @Given("a deprecated service {string} with a sunset_date {int} days from now")
    public void aDeprecatedServiceWithSunsetDateDaysFromNow(String name, int days) {
        Map<String, Object> p = basePayload(name);
        p.put("serviceStage", "DEPRECATED");
        p.put("locked", false);
        p.put("sunsetAt", futureSunsetAt(days));
        store(name, createService(p));
    }

    @Given("a registered service {string} in stage {string} with O-level {string}")
    public void aRegisteredServiceInStageWithOLevel(String name, String stage, String oLevel) {
        Map<String, Object> p = basePayload(name);
        p.put("serviceStage", stage);
        UUID id = createService(p);
        store(name, id);
        if (!"UNVERIFIED".equalsIgnoreCase(oLevel)) {
            asTemplateOwner()
                    .patch("/services/" + id + "/olevel?level=" + oLevel)
                    .then().statusCode(200);
        }
    }

    @Given("{string} in stage {string} with O-level {string} has declared compatibility")
    public void serviceInStageWithOLevelHasDeclaredCompatibility(String name, String stage, String oLevel) {
        Map<String, Object> p = basePayload(name);
        p.put("serviceStage", stage);
        UUID id = createService(p);
        store(name, id);
        if (!"UNVERIFIED".equalsIgnoreCase(oLevel)) {
            asTemplateOwner()
                    .patch("/services/" + id + "/olevel?level=" + oLevel)
                    .then().statusCode(200);
        }
        UUID deprecatedId = serviceIds.get("SmartHub Cloud");
        asTemplateOwner()
                .body(Map.of("replacesServiceIds", List.of(deprecatedId.toString())))
                .patch("/services/" + id)
                .then().statusCode(200);
    }

    @Given("a service {string} in stage {string} with O-level {string}")
    public void aServiceInStageWithOLevel(String name, String stage, String oLevel) {
        aRegisteredServiceInStageWithOLevel(name, stage, oLevel);
    }

    @Given("a service {string} in stage {string} with locked set to true")
    public void aServiceInStageLockedTrue(String name, String stage) {
        Map<String, Object> p = basePayload(name);
        p.put("serviceStage", stage);
        p.put("locked", true);
        if ("DEPRECATED".equals(stage)) {
            p.put("sunsetAt", futureSunsetAt(90));
        }
        store(name, createService(p));
    }

    @Given("{string} has declared compatibility with {string}")
    public void hasDeclaredCompatibilityWith(String provider, String deprecated) {
        asTemplateOwner()
                .body(Map.of("replacesServiceIds", List.of(serviceIds.get(deprecated).toString())))
                .patch("/services/" + serviceIds.get(provider))
                .then().statusCode(200);
    }

    @Given("the service is in stage {string} with a sunset_date set")
    public void theServiceIsInStageWithSunsetDateSet(String stage) {
        asTemplateOwner()
                .body(Map.of(
                        "serviceStage", stage,
                        "sunsetAt", futureSunsetAt(90)
                ))
                .patch("/services/" + currentServiceId)
                .then().statusCode(200);
    }

    @Given("the sunset_date of {string} has passed")
    public void theSunsetDateOfHasPassed(String name) {
        currentServiceId = serviceIds.get(name);
        // Set a valid future sunsetAt, then move the in-process clock to or past that
        // moment so the decommission validation ("sunset_at has not passed") succeeds.
        // Truncate to micros: Postgres timestamptz stores at microsecond precision and
        // may round sub-microsecond values, causing clock != stored sunsetAt.
        Instant sunsetAt = Instant.now().plus(Duration.ofDays(1)).truncatedTo(ChronoUnit.MICROS);
        asTemplateOwner()
                .body(Map.of("sunsetAt", sunsetAt.toString()))
                .patch("/services/" + currentServiceId)
                .then().statusCode(200);
        // Exclusive boundary: advance clock to sunsetAt — at that moment the sunset has arrived
        Arc.container().instance(ClockService.class).get().advance(sunsetAt);
    }

    @Given("at least one replacement candidate registered")
    public void atLeastOneReplacementCandidateRegistered() {
        Map<String, Object> p = basePayload("TestReplacement");
        p.put("serviceStage", "PRODUCTION");
        UUID id = createService(p);
        serviceIds.put("TestReplacement", id);
        asTemplateOwner()
                .body(Map.of("replacesServiceIds", List.of(serviceIds.get("SmartHub Cloud").toString())))
                .patch("/services/" + id)
                .then().statusCode(200);
    }

    @Given("a decommissioned service")
    public void aDecommissionedService() {
        Map<String, Object> p = basePayload("DecommissionedService");
        p.put("serviceStage", "DECOMMISSIONED");
        p.put("sunsetAt", pastSunsetAt(1));
        UUID id = createService(p);
        store("DecommissionedService", id);
    }

    @Given("a deprecated service {string} that reached DECOMMISSIONED without setting locked=false")
    public void aDeprecatedServiceDecommissionedWithoutLockRelease(String name) {
        Map<String, Object> p = basePayload(name);
        p.put("serviceStage", "DECOMMISSIONED");
        p.put("locked", true);
        p.put("sunsetAt", pastSunsetAt(1));
        store(name, createService(p));
    }

    @Given("{string} has completed the full lifecycle")
    public void hasCompletedTheFullLifecycle(String name) {
        currentServiceId = serviceIds.get(name);
        // Background walks through PRODUCTION→DEPRECATED→locked=false; sunset has passed.
        // Complete the lifecycle by decommissioning.
        asTemplateOwner()
                .body(Map.of("serviceStage", "DECOMMISSIONED"))
                .patch("/services/" + currentServiceId)
                .then().statusCode(200);
    }

    // ── When — template owner mutations ──────────────────────────────────────

    @When("the template owner updates the service with sunset_date 90 days from now and stage {string}")
    public void templateOwnerUpdatesSunsetDateAndStage(String stage) {
        actionPhase = true;
        lastResponse = asTemplateOwner()
                .body(Map.of(
                        "serviceStage", stage,
                        "sunsetAt", futureSunsetAt(90)
                ))
                .patch("/services/" + currentServiceId);
    }

    @When("the template owner sets locked to false")
    public void templateOwnerSetsLockedFalse() {
        actionPhase = true;
        lastResponse = asTemplateOwner()
                .body(Map.of("locked", false))
                .patch("/services/" + currentServiceId);
    }

    @When("the template owner attempts to set locked to false without a sunset_date")
    public void templateOwnerAttemptsToSetLockedFalseWithoutSunsetDate() {
        actionPhase = true;
        lastResponse = asTemplateOwner()
                .body(Map.of("locked", false))
                .patch("/services/" + currentServiceId);
    }

    @When("the template owner attempts to set sunset_date to yesterday")
    public void templateOwnerAttemptsToSetSunsetDateToYesterday() {
        actionPhase = true;
        lastResponse = asTemplateOwner()
                .body(Map.of("sunsetAt", pastSunsetAt(1)))
                .patch("/services/" + currentServiceId);
    }

    @When("the template owner sets service_stage to {string}")
    public void templateOwnerSetsServiceStage(String stage) {
        actionPhase = true;
        lastResponse = asTemplateOwner()
                .body(Map.of("serviceStage", stage))
                .patch("/services/" + currentServiceId);
    }

    @When("the template owner attempts to set service_stage to {string}")
    public void templateOwnerAttemptsToSetServiceStage(String stage) {
        actionPhase = true;
        lastResponse = asTemplateOwner()
                .body(Map.of("serviceStage", stage))
                .patch("/services/" + currentServiceId);
    }

    // ── When — replacement declaration mutations ──────────────────────────────

    @When("{string} declares replacesServiceIds containing the ID of {string}")
    public void declareReplacement(String provider, String deprecated) {
        actionPhase = true;
        lastResponse = asTemplateOwner()
                .body(Map.of("replacesServiceIds", List.of(serviceIds.get(deprecated).toString())))
                .patch("/services/" + serviceIds.get(provider));
    }

    @When("{string} declares replacesServiceIds containing the ID of {string} again")
    public void declareReplacementAgain(String provider, String deprecated) {
        actionPhase = true;
        lastResponse = asTemplateOwner()
                .body(Map.of("replacesServiceIds", List.of(serviceIds.get(deprecated).toString())))
                .patch("/services/" + serviceIds.get(provider));
    }

    @When("{string} removes {string} from its replacesServiceIds")
    public void retractReplacement(String provider, String deprecated) {
        actionPhase = true;
        lastResponse = asTemplateOwner()
                .body(Map.of("replacesServiceIds", List.of()))
                .patch("/services/" + serviceIds.get(provider));
    }

    // ── When — anonymous HTTP GET calls ──────────────────────────────────────
    // Regex annotations because step text contains URI-template placeholders like
    // {smartHubCloudId} that are not registered Cucumber parameter types.

    @When("^GET /services/\\{smartHubCloudId\\} is called with no (?:authentication|Authorization) header$")
    public void getSmartHubCloudStatusAnonymous() {
        actionPhase = true;
        lastResponse = given().get("/services/" + serviceIds.get("SmartHub Cloud"));
    }

    @When("^GET /services/\\{smartHubCloudId\\}/replacements is called with no (?:authentication|Authorization) header$")
    public void getSmartHubCloudReplacementsAnonymous() {
        actionPhase = true;
        lastResponse = given().get("/services/" + serviceIds.get("SmartHub Cloud") + "/replacements");
    }

    @When("^GET /services/\\{smartHubCloudId\\}/replacements\\?minOLevel=LEGAL_ENTITY_VERIFIED is called with no (?:authentication|Authorization) header$")
    public void getReplacementsFilteredByMinOLevel() {
        actionPhase = true;
        lastResponse = given().get(
                "/services/" + serviceIds.get("SmartHub Cloud") + "/replacements?minOLevel=LEGAL_ENTITY_VERIFIED");
    }

    @When("^GET /services/\\{lockedCloudId\\}/replacements is called with no (?:authentication|Authorization) header$")
    public void getLockedCloudReplacementsAnonymous() {
        actionPhase = true;
        lastResponse = given().get("/services/" + serviceIds.get("LockedCloud") + "/replacements");
    }

    @When("^GET /services/\\{neverReleasedId\\}/replacements is called with no (?:authentication|Authorization) header$")
    public void getNeverReleasedReplacementsAnonymous() {
        actionPhase = true;
        lastResponse = given().get("/services/" + serviceIds.get("NeverReleased") + "/replacements");
    }

    @When("^GET /services\\?capability=device\\.telemetry is called with no (?:authentication|Authorization) header$")
    public void getServicesByCapabilityAnonymous() {
        actionPhase = true;
        lastResponse = given().get("/services?capability=device.telemetry");
    }

    @When("^GET /services\\?capability=device\\.telemetry&stage=deprecated is called$")
    public void getServicesByCapabilityAndDeprecatedStage() {
        actionPhase = true;
        lastResponse = given().get("/services?capability=device.telemetry&stage=deprecated");
    }

    @When("^GET /services/\\{smartHubCloudId\\}/replacements is called twice with no shared headers$")
    public void getReplacementsTwiceNoSharedHeaders() {
        actionPhase = true;
        capturedResponses.clear();
        String url = "/services/" + serviceIds.get("SmartHub Cloud") + "/replacements";
        capturedResponses.add(given().get(url));
        capturedResponses.add(given().get(url));
        lastResponse = capturedResponses.get(1);
    }

    @When("^GET /services/\\{smartHubCloudId\\}/replacements is called twice within the cache TTL$")
    public void getReplacementsTwiceWithinCacheTtl() {
        actionPhase = true;
        capturedResponses.clear();
        String url = "/services/" + serviceIds.get("SmartHub Cloud") + "/replacements";
        capturedResponses.add(given().get(url));
        capturedResponses.add(given().get(url));
        lastResponse = capturedResponses.get(1);
    }

    @When("^GET /services/\\{smartHubCloudId\\}/replacements is called$")
    public void getSmartHubCloudReplacements() {
        actionPhase = true;
        lastResponse = given().get("/services/" + serviceIds.get("SmartHub Cloud") + "/replacements");
    }

    @When("^GET /services/\\{id\\} is called$")
    public void getServiceById() {
        actionPhase = true;
        lastResponse = given().get("/services/" + currentServiceId);
    }

    @When("^GET /services/\\{id\\}/history is called$")
    public void getServiceHistory() {
        actionPhase = true;
        lastResponse = given().get("/services/" + currentServiceId + "/history");
    }

    // ── Then — lifecycle state assertions ────────────────────────────────────

    @Then("the service stage is {string}")
    public void theServiceStageIs(String stage) {
        lastResponse.then().statusCode(200).body("serviceStage", equalTo(stage));
    }

    @Then("the service has a sunset_date set")
    public void theServiceHasSunsetDateSet() {
        lastResponse.then().statusCode(200).body("sunsetAt", notNullValue());
    }

    @Then("a version history entry of type {string} exists for the service")
    public void versionHistoryEntryExistsForService(String type) {
        given().get("/services/" + currentServiceId + "/history")
                .then().statusCode(200)
                .body("type", hasItem(type));
    }

    @Then("a version history entry of type {string} exists for {string}")
    public void versionHistoryEntryExistsForNamedService(String type, String name) {
        given().get("/services/" + serviceIds.get(name) + "/history")
                .then().statusCode(200)
                .body("type", hasItem(type));
    }

    @Then("the service does not appear in default production search results for capability {string}")
    public void serviceNotInDefaultProductionResults(String capability) {
        String name = given().get("/services/" + currentServiceId)
                .jsonPath().getString("name");
        given().get("/services?capability=" + capability)
                .then().statusCode(200)
                .body("name", not(hasItem(name)));
    }

    @Then("the service appears in search results when stage filter is {string}")
    public void serviceAppearsInResultsForStageFilter(String stage) {
        String name = given().get("/services/" + currentServiceId)
                .jsonPath().getString("name");
        given().get("/services?capability=device.telemetry&stage=" + stage.toLowerCase())
                .then().statusCode(200)
                .body("name", hasItem(name));
    }

    @Then("^GET /services/\\{id\\}/replacements returns HTTP 200 with an empty list$")
    public void replacementsEndpointReturnsHttp200EmptyList() {
        given().get("/services/" + currentServiceId + "/replacements")
                .then().statusCode(200)
                .body("candidates", empty());
    }

    @Then("the version history entry contains the previous locked value true")
    public void versionHistoryEntryContainsPreviousLockedValueTrue() {
        given().get("/services/" + currentServiceId + "/history")
                .then().statusCode(200)
                .body("find { it.type == 'LOCK_RELEASED' }.previousValue", equalTo("true"));
    }

    // ── Then — HTTP response assertions ──────────────────────────────────────

    @Then("the response is HTTP {int}")
    public void theResponseIsHttp(int statusCode) {
        lastResponse.then().statusCode(statusCode);
    }

    @Then("the error message contains {string}")
    public void theErrorMessageContains(String message) {
        lastResponse.then().body("message", containsString(message));
    }

    @Then("the response body contains service_stage {string}")
    public void responseBodyContainsServiceStage(String stage) {
        lastResponse.then().statusCode(200).body("serviceStage", equalTo(stage));
    }

    @Then("the response body contains locked {word}")
    public void responseBodyContainsLocked(String locked) {
        lastResponse.then().statusCode(200).body("locked", equalTo(Boolean.parseBoolean(locked)));
    }

    @Then("the response body contains a sunset_date")
    public void responseBodyContainsSunsetDate() {
        lastResponse.then().statusCode(200).body("sunsetAt", notNullValue());
    }

    @Then("the response body contains an empty candidates list")
    public void responseBodyContainsEmptyCandidatesList() {
        lastResponse.then().statusCode(200).body("candidates", empty());
    }

    @Then("locked and sunset_date are present in the response body")
    public void lockedAndSunsetDatePresentInResponseBody() {
        lastResponse.then().statusCode(200)
                .body("locked", notNullValue())
                .body("sunsetAt", notNullValue());
    }

    // ── Then — replacement list assertions ───────────────────────────────────

    @Then("the response contains {int} candidate(s)")
    public void responseContainsCandidates(int count) {
        lastResponse.then().statusCode(200).body("candidates.size()", equalTo(count));
    }

    @Then("the candidate is {string}")
    public void theCandidateIs(String name) {
        lastResponse.then().statusCode(200).body("candidates[0].name", equalTo(name));
    }

    @Then("{string} appears before {string} in the results")
|
||||
public void appearsBeforeInResults(String first, String second) {
|
||||
List<String> names = lastResponse.jsonPath().getList("candidates.name");
|
||||
assertThat(names.indexOf(first))
|
||||
.as("%s should appear before %s in results %s", first, second, names)
|
||||
.isLessThan(names.indexOf(second));
|
||||
}
|
||||
|
||||
@Then("the ordering is by O-level descending")
|
||||
public void orderingIsByOLevelDescending() {
|
||||
List<String> oLevels = lastResponse.jsonPath().getList("candidates.oLevel");
|
||||
List<String> expected = new ArrayList<>(oLevels);
|
||||
expected.sort(Comparator.reverseOrder());
|
||||
assertThat(oLevels)
|
||||
.as("Candidates should be ordered by O-level descending")
|
||||
.isEqualTo(expected);
|
||||
}
|
||||
|
||||
@Then("^GET /services/\\{smartHubCloudId\\}/replacements includes \"([^\"]+)\"$")
|
||||
public void replacementsIncludes(String name) {
|
||||
given().get("/services/" + serviceIds.get("SmartHub Cloud") + "/replacements")
|
||||
.then().statusCode(200)
|
||||
.body("candidates.name", hasItem(name));
|
||||
}
|
||||
|
||||
@Then("^GET /services/\\{smartHubCloudId\\}/replacements returns (\\d+) candidates?$")
|
||||
public void replacementsReturnsNCandidates(int count) {
|
||||
lastResponse = given().get("/services/" + serviceIds.get("SmartHub Cloud") + "/replacements");
|
||||
lastResponse.then().statusCode(200)
|
||||
.body("candidates.size()", equalTo(count));
|
||||
}
|
||||
|
||||
@Then("^GET /services/\\{smartHubCloudId\\}/replacements no longer includes \"([^\"]+)\"$")
|
||||
public void replacementsNoLongerIncludes(String name) {
|
||||
given().get("/services/" + serviceIds.get("SmartHub Cloud") + "/replacements")
|
||||
.then().statusCode(200)
|
||||
.body("candidates.name", not(hasItem(name)));
|
||||
}
|
||||
|
||||
@Then("^GET /services/\\{smartHubCloudId\\}/replacements still returns exactly (\\d+) candidates?$")
|
||||
public void replacementsStillReturnsExactlyCandidates(int count) {
|
||||
given().get("/services/" + serviceIds.get("SmartHub Cloud") + "/replacements")
|
||||
.then().statusCode(200)
|
||||
.body("candidates.size()", equalTo(count));
|
||||
}
|
||||
|
||||
@Then("the service_replacements table contains the declared pair")
|
||||
public void serviceReplacementsTableContainsDeclaredPair() {
|
||||
given().get("/services/" + serviceIds.get("SmartHub Cloud") + "/replacements")
|
||||
.then().statusCode(200)
|
||||
.body("candidates.size()", greaterThanOrEqualTo(1));
|
||||
}
|
||||
|
||||
@Then("the service_replacements row for this pair is deleted")
|
||||
public void serviceReplacementsRowDeleted() {
|
||||
given().get("/services/" + serviceIds.get("SmartHub Cloud") + "/replacements")
|
||||
.then().statusCode(200)
|
||||
.body("candidates", empty());
|
||||
}
|
||||
|
||||
// ── Then — capability search assertions ──────────────────────────────────
|
||||
|
||||
@Then("{string} is not in the results")
|
||||
public void isNotInResults(String name) {
|
||||
lastResponse.then().statusCode(200).body("name", not(hasItem(name)));
|
||||
}
|
||||
|
||||
@Then("{string} is in the results")
|
||||
public void isInResults(String name) {
|
||||
lastResponse.then().statusCode(200).body("name", hasItem(name));
|
||||
}
|
||||
|
||||
@Then("only services with stage {string} are returned")
|
||||
public void onlyServicesWithStageReturned(String stage) {
|
||||
lastResponse.then().statusCode(200).body("serviceStage", everyItem(equalTo(stage)));
|
||||
}
|
||||
|
||||
@Then("each result has service_stage {string}")
|
||||
public void eachResultHasServiceStage(String stage) {
|
||||
lastResponse.then().statusCode(200).body("serviceStage", everyItem(equalTo(stage)));
|
||||
}
|
||||
|
||||
// ── Then — decommissioning assertions ────────────────────────────────────
|
||||
|
||||
@Then("the service does not appear in any capability search results")
|
||||
public void serviceNotInAnyCapabilitySearchResults() {
|
||||
String name = given().get("/services/" + currentServiceId)
|
||||
.jsonPath().getString("name");
|
||||
given().get("/services?capability=device.telemetry")
|
||||
.then().statusCode(200)
|
||||
.body("name", not(hasItem(name)));
|
||||
}
|
||||
|
||||
@Then("^GET /services/\\{id\\} returns HTTP 200 with the complete historical record$")
|
||||
public void getServiceByIdReturnsHistoricalRecord() {
|
||||
given().get("/services/" + currentServiceId)
|
||||
.then().statusCode(200)
|
||||
.body("serviceStage", notNullValue())
|
||||
.body("name", notNullValue());
|
||||
}
|
||||
|
||||
@Then("the full BSM payload is present in the response")
|
||||
public void fullBsmPayloadPresentInResponse() {
|
||||
lastResponse.then().statusCode(200)
|
||||
.body("name", notNullValue())
|
||||
.body("endpoint", notNullValue())
|
||||
.body("serviceStage", notNullValue());
|
||||
}
|
||||
|
||||
@Then("^all version history entries are accessible via GET /services/\\{id\\}/history$")
|
||||
public void allVersionHistoryEntriesAccessibleViaHistory() {
|
||||
given().get("/services/" + currentServiceId + "/history")
|
||||
.then().statusCode(200)
|
||||
.body("size()", greaterThanOrEqualTo(1));
|
||||
}
|
||||
|
||||
@Then("replacement candidates are returned regardless of the stored locked value")
|
||||
public void replacementCandidatesReturnedRegardlessOfLockedValue() {
|
||||
lastResponse.then().statusCode(200).body("candidates", not(nullValue()));
|
||||
}
|
||||
|
||||
@Then("the full replacement list is returned")
|
||||
public void fullReplacementListReturned() {
|
||||
lastResponse.then().statusCode(200).body("candidates.size()", greaterThanOrEqualTo(1));
|
||||
}
|
||||
|
||||
// ── Then — version history timeline assertions ────────────────────────────
|
||||
|
||||
@Then("the history contains an entry of type {string}")
|
||||
public void historyContainsEntryOfType(String type) {
|
||||
lastResponse.then().statusCode(200).body("type", hasItem(type));
|
||||
}
|
||||
|
||||
@Then("the history contains an entry of type {string} with new value {string}")
|
||||
public void historyContainsEntryOfTypeWithNewValue(String type, String value) {
|
||||
lastResponse.then().statusCode(200)
|
||||
.body("find { it.type == '" + type + "' }.newValue", equalTo(value));
|
||||
}
|
||||
|
||||
@Then("all entries are ordered chronologically ascending")
|
||||
public void allEntriesOrderedChronologicallyAscending() {
|
||||
List<String> timestamps = lastResponse.jsonPath().getList("createdAt");
|
||||
List<String> sorted = new ArrayList<>(timestamps);
|
||||
Collections.sort(sorted);
|
||||
assertThat(timestamps)
|
||||
.as("History entries should be ordered chronologically ascending")
|
||||
.isEqualTo(sorted);
|
||||
}
|
||||
|
||||
// ── Then — anonymity / cache assertions ──────────────────────────────────
|
||||
|
||||
@Then("both responses are identical in content")
|
||||
public void bothResponsesIdenticalInContent() {
|
||||
assertThat(capturedResponses).hasSize(2);
|
||||
assertThat(capturedResponses.get(0).asString())
|
||||
.as("Both anonymous responses must return identical content")
|
||||
.isEqualTo(capturedResponses.get(1).asString());
|
||||
}
|
||||
|
||||
@Then("neither response contains a Set-Cookie header")
|
||||
public void neitherResponseContainsSetCookieHeader() {
|
||||
for (Response r : capturedResponses) {
|
||||
assertThat(r.getHeader("Set-Cookie"))
|
||||
.as("Anonymous response must not set a cookie")
|
||||
.isNull();
|
||||
}
|
||||
}
|
||||
|
||||
@Then("neither response contains a session reference")
|
||||
public void neitherResponseContainsSessionReference() {
|
||||
for (Response r : capturedResponses) {
|
||||
assertThat(r.getHeader("Set-Cookie")).isNull();
|
||||
assertThat(r.asString().toLowerCase()).doesNotContain("session");
|
||||
}
|
||||
}
|
||||
|
||||
@Then("the response headers contain no Set-Cookie")
|
||||
public void responseHeadersContainNoSetCookie() {
|
||||
assertThat(lastResponse.getHeader("Set-Cookie"))
|
||||
.as("Response must not set a cookie")
|
||||
.isNull();
|
||||
}
|
||||
|
||||
@Then("the response body contains no field that echoes client request details")
|
||||
public void responseBodyContainsNoFieldEchoingClientDetails() {
|
||||
String body = lastResponse.asString().toLowerCase();
|
||||
assertThat(body).doesNotContain("user-agent");
|
||||
assertThat(body).doesNotContain("x-forwarded-for");
|
||||
assertThat(body).doesNotContain("remote-addr");
|
||||
}
|
||||
|
||||
@Then("the response body contains no correlation ID tied to the caller")
|
||||
public void responseBodyContainsNoCorrelationId() {
|
||||
String body = lastResponse.asString().toLowerCase();
|
||||
assertThat(body).doesNotContain("requestid");
|
||||
assertThat(body).doesNotContain("correlationid");
|
||||
assertThat(body).doesNotContain("traceid");
|
||||
}
|
||||
|
||||
@Then("the second response is served from cache")
|
||||
public void secondResponseServedFromCache() {
|
||||
assertThat(capturedResponses).hasSize(2);
|
||||
Response second = capturedResponses.get(1);
|
||||
String cacheControl = second.getHeader("Cache-Control");
|
||||
assertThat(cacheControl)
|
||||
.as("Replacements response must carry Cache-Control: public to enable proxy caching")
|
||||
.isNotNull()
|
||||
.containsIgnoringCase("public");
|
||||
}
|
||||
|
||||
@Then("the cache key does not incorporate any client-identifying header")
|
||||
public void cacheKeyDoesNotIncorporateClientIdentifyingHeader() {
|
||||
String vary = lastResponse.getHeader("Vary");
|
||||
if (vary != null) {
|
||||
assertThat(vary.toLowerCase())
|
||||
.as("Vary header must not include client-identifying headers")
|
||||
.doesNotContain("authorization")
|
||||
.doesNotContain("cookie")
|
||||
.doesNotContain("user-agent");
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,32 @@
package org.botstandards.apix.registry.bdd;

import io.cucumber.java.After;
import io.cucumber.java.Before;
import io.quarkus.arc.Arc;
import io.restassured.RestAssured;
import org.botstandards.apix.registry.service.ClockService;

import java.sql.DriverManager;

public class TestSetup {

    @Before(order = 0)
    public void configureRestAssured() {
        RestAssured.port = 8181;
        RestAssured.enableLoggingOfRequestAndResponseIfValidationFails();
    }

    @Before(order = 1)
    public void truncateTables() throws Exception {
        try (var conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/apix", "apix", "apix");
             var stmt = conn.createStatement()) {
            stmt.execute("TRUNCATE TABLE service_replacements, service_versions, services CASCADE");
        }
    }

    @After
    public void resetClock() {
        Arc.container().instance(ClockService.class).get().reset();
    }
}
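The `@After` hook resolves the clock bean straight from the Arc container, so it only needs the bean to expose a `reset()` method. A minimal sketch of such a controllable clock is shown below; the shape is an assumption (the real `ClockService` is not part of this diff and would be an `@ApplicationScoped` CDI bean), but it illustrates why production code that reads time only through `now()` can be time-travelled from tests without any test endpoint:

```java
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;

// Hypothetical sketch of a controllable clock bean. In the real application
// this would be an @ApplicationScoped CDI bean resolved via Arc.container().
class ClockService {

    private volatile Clock clock = Clock.systemUTC();

    // Production code reads time only through this accessor.
    public Instant now() {
        return clock.instant();
    }

    // Tests pin the clock to a fixed instant, e.g. to make a sunset date "pass".
    public void fixAt(Instant instant) {
        this.clock = Clock.fixed(instant, ZoneOffset.UTC);
    }

    // The @After hook restores the real system clock between scenarios.
    public void reset() {
        this.clock = Clock.systemUTC();
    }
}
```

Keeping the clock an in-process concern like this is what allows the BDD suite to avoid shipping test-only endpoints in production code.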
@@ -0,0 +1,3 @@
// Superseded: clock control now goes through Arc.container() directly in TestSetup/@After
// and IotTransitionSteps, keeping the clock as a pure in-process concern.
// File kept to avoid breaking any IDE import caches; no JAX-RS registration.
@@ -0,0 +1,193 @@
package org.botstandards.apix.registry.service;

import jakarta.ws.rs.WebApplicationException;
import org.botstandards.apix.common.ChangeType;
import org.botstandards.apix.common.ServiceStage;
import org.botstandards.apix.registry.dto.ServicePatchRequest;
import org.junit.jupiter.api.Test;

import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.UUID;

import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.assertThatThrownBy;

class RegistryServiceTest {

    // Fixed reference moment so tests don't depend on wall-clock time
    private static final Instant NOW = Instant.parse("2025-06-15T00:00:00Z");

    // ── detectChangeType ──────────────────────────────────────────────────────

    @Test
    void detectChangeType_lockReleased_whenLockedTransitionsFromTrueToFalse() {
        assertThat(RegistryService.detectChangeType(ServiceStage.DEPRECATED, true, patch(null, false, null, null)))
            .isEqualTo(ChangeType.LOCK_RELEASED);
    }

    @Test
    void detectChangeType_bsmUpdated_whenLockedSetFalseButWasAlreadyFalse() {
        assertThat(RegistryService.detectChangeType(ServiceStage.DEPRECATED, false, patch(null, false, null, null)))
            .isEqualTo(ChangeType.BSM_UPDATED);
    }

    @Test
    void detectChangeType_bsmUpdated_whenLockedSetFalseButPreviouslyNull() {
        assertThat(RegistryService.detectChangeType(ServiceStage.PRODUCTION, null, patch(null, false, null, null)))
            .isEqualTo(ChangeType.BSM_UPDATED);
    }

    @Test
    void detectChangeType_sunsetDeclared_whenStageToDeprecatedAndSunsetAtInSameRequest() {
        assertThat(RegistryService.detectChangeType(ServiceStage.PRODUCTION, null,
                patch(ServiceStage.DEPRECATED, null, NOW.plus(Duration.ofDays(90)), null)))
            .isEqualTo(ChangeType.SUNSET_DECLARED);
    }

    @Test
    void detectChangeType_stageChanged_whenStageToDeprecatedWithoutSunsetAt() {
        assertThat(RegistryService.detectChangeType(ServiceStage.PRODUCTION, null,
                patch(ServiceStage.DEPRECATED, null, null, null)))
            .isEqualTo(ChangeType.STAGE_CHANGED);
    }

    @Test
    void detectChangeType_stageChanged_whenStageToDecommissioned() {
        assertThat(RegistryService.detectChangeType(ServiceStage.DEPRECATED, false,
                patch(ServiceStage.DECOMMISSIONED, null, null, null)))
            .isEqualTo(ChangeType.STAGE_CHANGED);
    }

    @Test
    void detectChangeType_replacementDeclared_whenReplacesServiceIdsProvided() {
        assertThat(RegistryService.detectChangeType(ServiceStage.PRODUCTION, null,
                patch(null, null, null, List.of(UUID.randomUUID()))))
            .isEqualTo(ChangeType.REPLACEMENT_DECLARED);
    }

    @Test
    void detectChangeType_replacementDeclared_whenReplacesServiceIdsEmpty() {
        assertThat(RegistryService.detectChangeType(ServiceStage.PRODUCTION, null,
                patch(null, null, null, List.of())))
            .isEqualTo(ChangeType.REPLACEMENT_DECLARED);
    }

    @Test
    void detectChangeType_bsmUpdated_whenOnlyCapabilitiesChanged() {
        var req = new ServicePatchRequest(
            null, null, null, List.of("new.capability"),
            null, null, null, null, null, null, null, null, null, null, null,
            null, null, null, null, null);
        assertThat(RegistryService.detectChangeType(ServiceStage.PRODUCTION, null, req))
            .isEqualTo(ChangeType.BSM_UPDATED);
    }

    @Test
    void detectChangeType_lockReleased_takesOverStageChangeWhenBothOccur() {
        assertThat(RegistryService.detectChangeType(ServiceStage.DEPRECATED, true,
                patch(ServiceStage.DECOMMISSIONED, false, null, null)))
            .isEqualTo(ChangeType.LOCK_RELEASED);
    }

    // ── requireFutureSunset ───────────────────────────────────────────────────

    @Test
    void requireFutureSunset_passes_whenNull() {
        RegistryService.requireFutureSunset(null, NOW); // must not throw
    }

    @Test
    void requireFutureSunset_passes_whenFuture() {
        RegistryService.requireFutureSunset(NOW.plus(Duration.ofDays(1)), NOW); // must not throw
    }

    @Test
    void requireFutureSunset_throws422_whenNow() {
        // sunsetAt == now is not strictly after now — exclusive boundary means it's already "past"
        assertThatThrownBy(() -> RegistryService.requireFutureSunset(NOW, NOW))
            .isInstanceOf(WebApplicationException.class)
            .satisfies(e -> assertThat(((WebApplicationException) e).getResponse().getStatus()).isEqualTo(422));
    }

    @Test
    void requireFutureSunset_throws422_whenPast() {
        assertThatThrownBy(() -> RegistryService.requireFutureSunset(NOW.minus(Duration.ofDays(1)), NOW))
            .isInstanceOf(WebApplicationException.class)
            .satisfies(e -> assertThat(((WebApplicationException) e).getResponse().getStatus()).isEqualTo(422));
    }

    // ── requireSunsetBeforeLockRelease ────────────────────────────────────────

    @Test
    void requireSunsetBeforeLockRelease_passes_whenNotReleasingLock() {
        RegistryService.requireSunsetBeforeLockRelease(true, true, null); // not releasing
    }

    @Test
    void requireSunsetBeforeLockRelease_passes_whenSunsetAtPresent() {
        RegistryService.requireSunsetBeforeLockRelease(false, true, NOW.plus(Duration.ofDays(90)));
    }

    @Test
    void requireSunsetBeforeLockRelease_throws422_whenNoSunsetAtAndLockWasTrue() {
        assertThatThrownBy(() -> RegistryService.requireSunsetBeforeLockRelease(false, true, null))
            .isInstanceOf(WebApplicationException.class)
            .satisfies(e -> assertThat(((WebApplicationException) e).getResponse().getStatus()).isEqualTo(422));
    }

    @Test
    void requireSunsetBeforeLockRelease_passes_whenPreviousLockWasAlreadyFalse() {
        RegistryService.requireSunsetBeforeLockRelease(false, false, null);
    }

    @Test
    void requireSunsetBeforeLockRelease_passes_whenPreviousLockWasNull() {
        RegistryService.requireSunsetBeforeLockRelease(false, null, null);
    }

    // ── requireSunsetPassedBeforeDecommission ─────────────────────────────────

    @Test
    void requireSunsetPassedBeforeDecommission_passes_whenNotDecommissioning() {
        RegistryService.requireSunsetPassedBeforeDecommission(ServiceStage.PRODUCTION, null, NOW);
    }

    @Test
    void requireSunsetPassedBeforeDecommission_passes_whenSunsetIsInPast() {
        RegistryService.requireSunsetPassedBeforeDecommission(ServiceStage.DECOMMISSIONED,
            NOW.minus(Duration.ofDays(1)), NOW);
    }

    @Test
    void requireSunsetPassedBeforeDecommission_passes_whenSunsetIsNow() {
        // Exclusive boundary: now >= sunsetAt means the sunset moment has arrived
        RegistryService.requireSunsetPassedBeforeDecommission(ServiceStage.DECOMMISSIONED, NOW, NOW);
    }

    @Test
    void requireSunsetPassedBeforeDecommission_throws422_whenNoSunsetAt() {
        assertThatThrownBy(() ->
            RegistryService.requireSunsetPassedBeforeDecommission(ServiceStage.DECOMMISSIONED, null, NOW))
            .isInstanceOf(WebApplicationException.class)
            .satisfies(e -> assertThat(((WebApplicationException) e).getResponse().getStatus()).isEqualTo(422));
    }

    @Test
    void requireSunsetPassedBeforeDecommission_throws422_whenSunsetIsInFuture() {
        assertThatThrownBy(() ->
            RegistryService.requireSunsetPassedBeforeDecommission(ServiceStage.DECOMMISSIONED,
                NOW.plus(Duration.ofDays(30)), NOW))
            .isInstanceOf(WebApplicationException.class)
            .satisfies(e -> assertThat(((WebApplicationException) e).getResponse().getStatus()).isEqualTo(422));
    }

    // ── Helpers ───────────────────────────────────────────────────────────────

    private static ServicePatchRequest patch(ServiceStage stage, Boolean locked, Instant sunsetAt, List<UUID> replaces) {
        return new ServicePatchRequest(
            null, null, null, null, null, null, null, null, null, null, null, null, null, null, null,
            stage, locked, sunsetAt, null, replaces);
    }
}
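The detectChangeType tests above pin down a strict precedence among change kinds. A condensed model of that precedence is sketched below; the enum names are shortened and the flattened parameter list is an assumption for illustration (the real `RegistryService.detectChangeType` takes a `ServicePatchRequest`):

```java
// Hypothetical condensed model of the change-type precedence the tests pin down.
enum Stage { PRODUCTION, DEPRECATED, DECOMMISSIONED }

enum Change { LOCK_RELEASED, SUNSET_DECLARED, STAGE_CHANGED, REPLACEMENT_DECLARED, BSM_UPDATED }

class ChangeTypeSketch {

    static Change detect(Stage currentStage, Boolean currentLocked,
                         Stage patchStage, Boolean patchLocked,
                         boolean patchHasSunsetAt, boolean patchHasReplaces) {
        // 1. Releasing a lock that was actually held wins over everything else,
        //    including a simultaneous stage change. Boolean.TRUE.equals guards
        //    against a null stored value (the bug fix noted in the commit message).
        if (Boolean.FALSE.equals(patchLocked) && Boolean.TRUE.equals(currentLocked)) {
            return Change.LOCK_RELEASED;
        }
        // 2. Moving to DEPRECATED with a sunset date in the same request.
        if (patchStage == Stage.DEPRECATED && patchHasSunsetAt) {
            return Change.SUNSET_DECLARED;
        }
        // 3. Any other stage transition.
        if (patchStage != null && patchStage != currentStage) {
            return Change.STAGE_CHANGED;
        }
        // 4. A replacesServiceIds list declares replacements, even when empty
        //    (an empty list is a retraction, still a replacement change).
        if (patchHasReplaces) {
            return Change.REPLACEMENT_DECLARED;
        }
        // 5. Everything else is a plain BSM payload update.
        return Change.BSM_UPDATED;
    }
}
```

Setting `locked=false` when the stored value is already `false` or `null` falls through to `BSM_UPDATED`, exactly as the three lock-related tests above require.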
@@ -0,0 +1,4 @@
# Write results one level up so the parent aggregates all modules in one report.
# Resolved relative to the module's working directory (Maven Surefire sets user.dir
# to the module basedir), so ../target/allure-results = apix-mvp/target/allure-results.
allure.results.directory=../target/allure-results
@@ -0,0 +1,14 @@
# ── Test datasource ───────────────────────────────────────────────────────────
# Points directly at the docker-compose db service (postgres:16-alpine on :5432).
# Start it with: docker-compose -f infra/docker-compose.yml up -d db
# Bypasses Testcontainers/DevServices so Docker socket issues don't block tests.
quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/apix
quarkus.datasource.username=apix
quarkus.datasource.password=apix

# ── Test HTTP port ────────────────────────────────────────────────────────────
# Fixed port so RestAssured.port in TestSetup matches without reading system properties.
quarkus.http.test-port=8181

# ── Test API key ──────────────────────────────────────────────────────────────
apix.api-key=test-api-key
@@ -0,0 +1,35 @@
Feature: Device owner anonymity guarantee
  As an IoT device owner
  I must be able to use the full transition discovery process
  Without APIX storing or inferring any artefact that identifies me or my device

  Background:
    Given a deprecated service "SmartHub Cloud" with locked set to false
    And at least one replacement candidate registered

  Scenario: Status polling requires no authentication
    When GET /services/{smartHubCloudId} is called with no Authorization header
    Then the response is HTTP 200
    And locked and sunset_date are present in the response body

  Scenario: Replacement discovery requires no authentication
    When GET /services/{smartHubCloudId}/replacements is called with no Authorization header
    Then the response is HTTP 200
    And the full replacement list is returned

  Scenario: No session state is created during polling
    When GET /services/{smartHubCloudId}/replacements is called twice with no shared headers
    Then both responses are identical in content
    And neither response contains a Set-Cookie header
    And neither response contains a session reference

  Scenario: Response contains no client-tracking artefacts
    When GET /services/{smartHubCloudId}/replacements is called
    Then the response headers contain no Set-Cookie
    And the response body contains no field that echoes client request details
    And the response body contains no correlation ID tied to the caller

  Scenario: Polling endpoint is covered by the public cache
    When GET /services/{smartHubCloudId}/replacements is called twice within the cache TTL
    Then the second response is served from cache
    And the cache key does not incorporate any client-identifying header
@@ -0,0 +1,50 @@
Feature: Service decommissioning and historical record preservation
  As a template owner
  I want to decommission a service after its sunset date
  And as a device owner I must be able to access the historical record indefinitely

  Background:
    Given a registered service "SmartHub Cloud" with endpoint "https://api.smarthub.example"
    And the service has capability "device.telemetry"
    And the service has locked set to true
    And the service is in stage "DEPRECATED" with a sunset_date set
    And the service has locked set to false
    And a registered service "OpenHub" with endpoint "https://api.openhub.example"
    And "OpenHub" has declared compatibility with "SmartHub Cloud"
    And the sunset_date of "SmartHub Cloud" has passed

  Scenario: Template owner decommissions the service
    When the template owner sets service_stage to "DECOMMISSIONED"
    Then the service does not appear in any capability search results
    And GET /services/{id} returns HTTP 200 with the complete historical record
    And the response body contains service_stage "DECOMMISSIONED"
    And a version history entry of type "STAGE_CHANGED" exists for the service

  Scenario: Decommissioned service with unreleased lock auto-releases for replacement discovery
    Given a deprecated service "NeverReleased" that reached DECOMMISSIONED without setting locked=false
    When GET /services/{neverReleasedId}/replacements is called with no authentication header
    Then the response is HTTP 200
    And replacement candidates are returned regardless of the stored locked value

  Scenario: Historical record survives indefinitely
    Given a decommissioned service
    When GET /services/{id} is called
    Then the response is HTTP 200
    And the full BSM payload is present in the response
    And all version history entries are accessible via GET /services/{id}/history

  Scenario: Cannot decommission a service before its sunset date
    Given a deprecated service "FutureCloud" with a sunset_date 30 days from now
    When the template owner attempts to set service_stage to "DECOMMISSIONED"
    Then the response is HTTP 422
    And the error message contains "sunset_at has not passed"

  Scenario: Full transition timeline is visible in version history
    Given "SmartHub Cloud" has completed the full lifecycle
    When GET /services/{id}/history is called
    Then the history contains an entry of type "REGISTERED"
    And the history contains an entry of type "SUNSET_DECLARED"
    And the history contains an entry of type "LOCK_RELEASED"
    And the history contains an entry of type "REPLACEMENT_DECLARED"
    And the history contains an entry of type "STAGE_CHANGED" with new value "DECOMMISSIONED"
    And all entries are ordered chronologically ascending
@@ -0,0 +1,46 @@
Feature: Replacement provider declares compatibility
  As a replacement service provider
  I want to declare that my service covers a deprecated template
  So that IoT device owners can discover me as a migration target

  Background:
    Given a deprecated service "SmartHub Cloud" with locked set to false
    And a registered service "OpenHub" in stage "PRODUCTION" with O-level "IDENTITY_VERIFIED"

  Scenario: Replacement provider declares compatibility with a deprecated service
    When "OpenHub" declares replacesServiceIds containing the ID of "SmartHub Cloud"
    Then GET /services/{smartHubCloudId}/replacements includes "OpenHub"
    And a version history entry of type "REPLACEMENT_DECLARED" exists for "SmartHub Cloud"
    And the service_replacements table contains the declared pair

  Scenario: Declaration against a non-deprecated service is rejected
    Given a service "ActiveCloud" in stage "PRODUCTION" with locked set to true
    When "OpenHub" declares replacesServiceIds containing the ID of "ActiveCloud"
    Then the response is HTTP 422
    And the error message contains "target service is not deprecated"

  Scenario: Declaration against a locked deprecated service is rejected
    Given a service "LockedCloud" in stage "DEPRECATED" with locked set to true
    When "OpenHub" declares replacesServiceIds containing the ID of "LockedCloud"
    Then the response is HTTP 422
    And the error message contains "target service lock has not been released"

  Scenario: Multiple replacement providers for the same deprecated service
    Given a service "CloudBridge" in stage "PRODUCTION" with O-level "LEGAL_ENTITY_VERIFIED"
    When "OpenHub" declares replacesServiceIds containing the ID of "SmartHub Cloud"
    And "CloudBridge" declares replacesServiceIds containing the ID of "SmartHub Cloud"
    Then GET /services/{smartHubCloudId}/replacements returns 2 candidates
    And "CloudBridge" appears before "OpenHub" in the results
    And the ordering is by O-level descending

  Scenario: Replacement provider retracts their compatibility declaration
    Given "OpenHub" has declared compatibility with "SmartHub Cloud"
    When "OpenHub" removes "SmartHub Cloud" from its replacesServiceIds
    Then GET /services/{smartHubCloudId}/replacements no longer includes "OpenHub"
    And the service_replacements row for this pair is deleted

  Scenario: Duplicate declaration is idempotent
    Given "OpenHub" has declared compatibility with "SmartHub Cloud"
    When "OpenHub" declares replacesServiceIds containing the ID of "SmartHub Cloud" again
    Then the response is HTTP 200
    And GET /services/{smartHubCloudId}/replacements still returns exactly 1 candidate
@@ -0,0 +1,45 @@
|
||||
Feature: Device owner discovers replacement services
|
||||
As an IoT device owner
|
||||
I want to discover compatible replacement services by polling APIX
|
||||
Without revealing my identity or device details
|
||||
|
||||
Background:
|
||||
Given a deprecated service "SmartHub Cloud" with locked set to false and a sunset_date set
|
||||
And "OpenHub" in stage "PRODUCTION" with O-level "IDENTITY_VERIFIED" has declared compatibility
|
||||
And "CloudBridge" in stage "PRODUCTION" with O-level "LEGAL_ENTITY_VERIFIED" has declared compatibility
|
||||
|
||||
Scenario: Device polls service status without authentication
|
||||
When GET /services/{smartHubCloudId} is called with no authentication header
|
||||
Then the response is HTTP 200
|
||||
And the response body contains service_stage "DEPRECATED"
|
||||
And the response body contains locked false
|
||||
And the response body contains a sunset_date
|
||||
|
||||
Scenario: Device discovers replacement candidates without authentication
|
||||
When GET /services/{smartHubCloudId}/replacements is called with no authentication header
|
||||
Then the response is HTTP 200
|
||||
And the response contains 2 candidates
|
||||
And "CloudBridge" appears before "OpenHub" in the results
|
||||
|
||||
Scenario: Device filters candidates by minimum O-level
|
||||
When GET /services/{smartHubCloudId}/replacements?minOLevel=LEGAL_ENTITY_VERIFIED is called with no authentication header
|
||||
Then the response is HTTP 200
|
||||
And the response contains 1 candidate
|
||||
And the candidate is "CloudBridge"
|
||||
|
||||
Scenario: Device polls a still-locked deprecated service
|
||||
Given a deprecated service "LockedCloud" with locked set to true
|
||||
When GET /services/{lockedCloudId}/replacements is called with no authentication header
|
||||
Then the response is HTTP 200
|
||||
And the response body contains an empty candidates list
|
||||
And the response body contains locked true
|
||||
|
||||
Scenario: Default production search excludes deprecated services
|
||||
When GET /services?capability=device.telemetry is called with no authentication header
|
||||
Then "SmartHub Cloud" is not in the results
|
||||
And only services with stage "PRODUCTION" are returned
|
||||
|
||||
Scenario: Explicit deprecated filter includes deprecated services
|
||||
When GET /services?capability=device.telemetry&stage=deprecated is called
|
||||
Then "SmartHub Cloud" is in the results
|
||||
And each result has service_stage "DEPRECATED"
|
||||
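The candidate ordering and `minOLevel` filtering exercised above can be sketched in plain Java. The enum constants come from the scenarios; the rule that a stronger O-level sorts first is an assumption inferred from the expected "CloudBridge before OpenHub" ordering, and the class and method names are illustrative, not the registry's actual API:

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the replacement-candidate filtering/ordering implied by the
// discovery scenarios. Assumption: a higher ordinal means stronger
// verification, so LEGAL_ENTITY_VERIFIED outranks IDENTITY_VERIFIED.
public class ReplacementOrdering {

    public enum OLevel { IDENTITY_VERIFIED, LEGAL_ENTITY_VERIFIED }

    public record Candidate(String name, OLevel oLevel) {}

    // Keep candidates at or above the minimum O-level, strongest first.
    public static List<String> candidates(List<Candidate> all, OLevel min) {
        return all.stream()
                .filter(c -> c.oLevel().ordinal() >= min.ordinal())
                .sorted((a, b) -> b.oLevel().compareTo(a.oLevel()))
                .map(Candidate::name)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Candidate> all = List.of(
                new Candidate("OpenHub", OLevel.IDENTITY_VERIFIED),
                new Candidate("CloudBridge", OLevel.LEGAL_ENTITY_VERIFIED));

        // Unfiltered: CloudBridge (stronger O-level) appears before OpenHub.
        System.out.println(candidates(all, OLevel.IDENTITY_VERIFIED));   // [CloudBridge, OpenHub]
        // minOLevel=LEGAL_ENTITY_VERIFIED keeps only CloudBridge.
        System.out.println(candidates(all, OLevel.LEGAL_ENTITY_VERIFIED)); // [CloudBridge]
    }
}
```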
@@ -0,0 +1,40 @@
Feature: Sunset declaration and lock release

  As a template owner
  I want to declare a sunset date and release the device lock
  So that IoT device owners can prepare for migration in a predictable window

  Background:
    Given a registered service "SmartHub Cloud" with endpoint "https://api.smarthub.example"
    And the service has capability "device.telemetry"
    And the service is in stage "PRODUCTION"
    And the service has locked set to true

  Scenario: Template owner declares sunset date
    When the template owner updates the service with sunset_date 90 days from now and stage "DEPRECATED"
    Then the service stage is "DEPRECATED"
    And the service has a sunset_date set
    And a version history entry of type "SUNSET_DECLARED" exists for the service
    And the service does not appear in default production search results for capability "device.telemetry"
    And the service appears in search results when stage filter is "deprecated"

  Scenario: Sunset declaration preserves the device lock
    When the template owner updates the service with sunset_date 90 days from now and stage "DEPRECATED"
    Then the service has locked set to true
    And GET /services/{id}/replacements returns HTTP 200 with an empty list

  Scenario: Template owner releases the device lock after sunset is declared
    Given the service is in stage "DEPRECATED" with a sunset_date set
    When the template owner sets locked to false
    Then the service has locked set to false
    And a version history entry of type "LOCK_RELEASED" exists for the service
    And the version history entry contains the previous locked value true

  Scenario: Lock cannot be released without a prior sunset declaration
    When the template owner attempts to set locked to false without a sunset_date
    Then the response is HTTP 422
    And the error message contains "sunset_at required before lock release"

  Scenario: Sunset date cannot be set in the past
    When the template owner attempts to set sunset_date to yesterday
    Then the response is HTTP 422
    And the error message contains "sunset_at must be a future moment"
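The two 422 guards above can be sketched in plain Java; method and parameter names here are illustrative (the real checks live in the registry's service layer). The sketch also shows the microsecond truncation the BDD steps apply so that round-tripped timestamps match Postgres `timestamptz` precision, and the null-safe `Boolean` comparison that the commit message calls out:

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.Optional;

// Illustrative sketch of the sunset/lock lifecycle guards. A non-empty
// Optional represents the message surfaced with HTTP 422.
public class SunsetGuards {

    // Guard 1: a declared sunset must lie strictly in the future.
    public static Optional<String> validateSunset(Instant sunsetAt, Instant now) {
        if (!sunsetAt.isAfter(now)) {
            return Optional.of("sunset_at must be a future moment");
        }
        return Optional.empty();
    }

    // Guard 2: the device lock may only be released once a sunset is set.
    // Boolean.FALSE.equals(newLocked) is null-safe: an absent "locked"
    // field in a PATCH is not treated as a release attempt. This is the
    // same null-guard pitfall the commit fixes by using Boolean.TRUE.equals
    // instead of !Boolean.FALSE.equals.
    public static Optional<String> validateLockRelease(Boolean newLocked, Instant sunsetAt) {
        boolean releasing = Boolean.FALSE.equals(newLocked);
        if (releasing && sunsetAt == null) {
            return Optional.of("sunset_at required before lock release");
        }
        return Optional.empty();
    }

    // The BDD steps truncate to microseconds because Postgres timestamptz
    // stores microsecond precision, while Instant.now() can carry nanos.
    public static Instant toDbPrecision(Instant t) {
        return t.truncatedTo(ChronoUnit.MICROS);
    }
}
```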
@@ -0,0 +1,8 @@
# Prevent standalone Cucumber execution. No feature file carries this tag, so the
# Cucumber engine discovers the features but filters every scenario out when it
# runs on its own (e.g. via plain JUnit Platform discovery).
# Features are executed only by IotTransitionCucumberTest, which runs Cucumber
# inside the @QuarkusTest context (Quarkus server started) and does not apply
# this JUnit Platform filter.
cucumber.filter.tags=@_disabled_standalone_run_
@@ -0,0 +1,87 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>org.botstandards</groupId>
        <artifactId>apix-parent</artifactId>
        <version>${revision}</version>
    </parent>

    <artifactId>apix-spider</artifactId>
    <name>APIX :: Spider</name>
    <description>Liveness scheduler. Separate process, no public port. Shares DB with registry.</description>

    <dependencies>
        <dependency>
            <groupId>org.botstandards</groupId>
            <artifactId>apix-common</artifactId>
        </dependency>

        <!-- Scheduling -->
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-scheduler</artifactId>
        </dependency>

        <!-- Persistence (Liquibase disabled — registry owns schema) -->
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-hibernate-orm-panache</artifactId>
        </dependency>
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-jdbc-postgresql</artifactId>
        </dependency>

        <!-- HTTP client for liveness probes -->
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-rest-client</artifactId>
        </dependency>
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-rest-client-jackson</artifactId>
        </dependency>

        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-smallrye-health</artifactId>
        </dependency>

        <!-- Test -->
        <dependency>
            <groupId>io.quarkus</groupId>
            <artifactId>quarkus-junit5</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.wiremock</groupId>
            <artifactId>wiremock</artifactId>
            <version>3.5.4</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>${quarkus.platform.group-id}</groupId>
                <artifactId>quarkus-maven-plugin</artifactId>
                <extensions>true</extensions>
                <executions>
                    <execution>
                        <goals>
                            <goal>build</goal>
                            <goal>generate-code</goal>
                            <goal>generate-code-tests</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
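The "Liquibase disabled — registry owns schema" comment above corresponds to a handful of configuration keys on the spider side. A hypothetical `application.properties` sketch (key values are assumptions; only the registry runs the changelog, so the spider validates the schema rather than generating or migrating it):

```properties
# apix-spider application.properties (sketch) — registry owns the schema.
quarkus.liquibase.migrate-at-start=false

quarkus.datasource.db-kind=postgresql
quarkus.datasource.jdbc.url=${QUARKUS_DATASOURCE_JDBC_URL}
quarkus.datasource.username=${QUARKUS_DATASOURCE_USERNAME}
quarkus.datasource.password=${QUARKUS_DATASOURCE_PASSWORD}

# Hibernate must not touch the DDL either — validate only.
quarkus.hibernate-orm.database.generation=validate
```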
@@ -0,0 +1,47 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>org.botstandards</groupId>
        <artifactId>apix-parent</artifactId>
        <version>${revision}</version>
    </parent>

    <artifactId>apix-verification</artifactId>
    <name>APIX :: Verification</name>
    <description>O-level verifiers and pipeline. No Quarkus dependency — plain CDI-free POJOs.</description>

    <dependencies>
        <dependency>
            <groupId>org.botstandards</groupId>
            <artifactId>apix-common</artifactId>
        </dependency>
        <!-- DNS lookups for O-1 domain verification -->
        <dependency>
            <groupId>dnsjava</groupId>
            <artifactId>dnsjava</artifactId>
            <version>3.6.1</version>
        </dependency>
        <!-- HTTP calls to GLEIF / OpenCorporates / sanctions APIs -->
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-databind</artifactId>
        </dependency>
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter</artifactId>
            <scope>test</scope>
        </dependency>
        <!-- HTTP mocking for verifier tests -->
        <dependency>
            <groupId>org.wiremock</groupId>
            <artifactId>wiremock</artifactId>
            <version>3.5.4</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>
@@ -0,0 +1,32 @@
---
arc42: "1 — Introduction and Goals"
status: stub
---

## 1.1 MVP Goal Statement

TODO: Define what must be provable at the end of the PoC phase.
Key question: What does a Sovereign Tech Fund reviewer need to see to confirm this is real running infrastructure?

## 1.2 Quality Goals

TODO: Top 3–5 quality goals, measurable.
Example dimensions: Queryability, Correctness of liveness status, Registration reliability, Availability.

## 1.3 Stakeholders

| Role | Expectation |
|---|---|
| STF reviewer | Running public URL, queryable, real services registered |
| Agent developer | Capability search returns structured, machine-readable results |
| Service registrant | Registration via portal or API; status visible within minutes |
| BSF (Carsten) | Deployable solo; maintainable; demonstrable to founding members |

## 1.4 Out of Scope (MVP)

- Billing and commercial tiers
- Automated O-level / S-level verification
- Multi-region redundancy
- Full CE/regulatory BSM validation
- Agent Enterprise composition layer
- IoT device template persistence (DC-1)
@@ -0,0 +1,42 @@
---
arc42: "2 — Architecture Constraints"
status: stub
---

## 2.1 Technical Constraints

| Constraint | Rationale |
|---|---|
| Hosted on Hetzner (EU) | European sovereignty narrative; cost; GDPR residency |
| Docker Compose deployment | Solo maintainability; no Kubernetes overhead for PoC |
| Java 21 + Quarkus 3.x | Compile-time safety; GraalVM native image fits a small VPS (see ADR-001 and §4.1) |
| PostgreSQL 16 | Relational integrity + JSONB flexibility for BSM payload |
| Caddy reverse proxy | Auto-TLS (Let's Encrypt); zero-config HTTPS |
| Open source (Apache 2.0) | STF requirement; community credibility |
| HTTPS mandatory | Trust infrastructure must be served over TLS — non-negotiable even for PoC |

## 2.2 Organisational Constraints

| Constraint | Rationale |
|---|---|
| Solo developer | All components must be maintainable by one person |
| LLM-assisted development | Accepted; all generated code must be reviewed before commit |
| Public GitHub repository | STF requires open-source deliverables; also community signal |
| No external team dependencies | No waiting on others; all unblocked decisions are made by Carsten |

## 2.3 Regulatory Constraints

| Constraint | Rationale |
|---|---|
| GDPR-lite | Only data stored: registrant email (for contact), service URL, BSM payload. No analytics, no tracking. |
| No PII in logs | Even at DEBUG level — email addresses must not appear in log output |
| No secrets in images or Git | API keys and DB credentials via runtime env only |

## 2.4 Convention Constraints

| Constraint | Rationale |
|---|---|
| HATEOAS API style | Core APIX Internet-Draft requirement; agents must be able to navigate from root URL |
| IETF Internet-Draft alignment | BSM field names must match draft-rehfeld-bot-service-index-00 |
| PlantUML for all diagrams | Project convention (not Mermaid) |
| arc42 documentation structure | This document set |
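The "Docker Compose deployment" constraint can be made concrete with a minimal `docker-compose.yml` sketch. Service names, images, and ports are assumptions drawn from the module and deployment views elsewhere in this document set, not a committed file:

```yaml
# docker-compose.yml (sketch) — single-host topology per the constraint table.
services:
  caddy:
    image: caddy:2
    ports: ["80:80", "443:443"]      # only Caddy is exposed to the Internet
    depends_on: [registry, portal]
  registry:
    build: ./apix-registry
    env_file: .env                   # datasource credentials, API key
    depends_on: [db]
  portal:
    build: ./apix-portal
    depends_on: [registry]
  spider:
    build: ./apix-spider             # no published port — internal only
    depends_on: [db]
  db:
    image: postgres:16
    env_file: .env
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```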
@@ -0,0 +1,58 @@
---
arc42: "3 — Context and Scope"
status: stub
---

## 3.1 Business Context

TODO: PlantUML system context diagram.

External actors:
- **Autonomous Agent** — queries the index by capability; reads BSM; consumes registered services
- **Service Registrant** — submits BSM via portal or API; receives registration confirmation
- **Spider** — automated crawler (internal); checks liveness of registered services against external endpoints
- **Admin (BSF)** — assigns O-levels; approves pending registrations; monitors registry health
- **External Service Endpoints** — the actual services being registered; queried by Spider for liveness

```plantuml
@startuml context
!include https://raw.githubusercontent.com/plantuml-stdlib/C4-PlantUML/master/C4_Context.puml

Person(agent, "Autonomous Agent", "Queries registry by capability; consumes registered services")
Person(registrant, "Service Registrant", "Submits BSM; monitors registration status")
Person(admin, "BSF Admin", "Assigns O-levels; approves registrations")

System(apix, "APIX Registry", "Global, queryable index of machine-consumable services")

System_Ext(ext_service, "External Service Endpoint", "The registered service; queried by Spider for liveness")

Rel(agent, apix, "Capability query / BSM fetch", "HTTPS/JSON")
Rel(registrant, apix, "BSM registration / status check", "HTTPS/JSON or Portal")
Rel(admin, apix, "O-level assignment / moderation", "Portal (API-key)")
Rel(apix, ext_service, "Liveness check / spec fetch", "HTTPS", "Spider")

@enduml
```

## 3.2 Technical Context

TODO: PlantUML technical context diagram showing network boundaries.

Components inside the system boundary:
- Caddy (reverse proxy, TLS termination)
- Registry API service (Quarkus, `apix-registry`)
- Portal service (Quarkus, HTMX + Qute, `apix-portal`)
- Spider service (Quarkus scheduler, `apix-spider`)
- PostgreSQL (registry database)

## 3.3 External Interface Table

| Interface | Direction | Protocol | Data |
|---|---|---|---|
| Capability query | Agent → API | HTTPS GET | Query params: `capability`, `country`, `olevel`; Response: BSM list |
| BSM registration | Registrant → API | HTTPS POST | BSM JSON payload + API key header |
| Service detail | Agent → API | HTTPS GET | BSM + liveness status |
| HATEOAS root | Agent → API | HTTPS GET | Navigation links JSON |
| Liveness check | Spider → Ext. service | HTTPS GET | HTTP status + response time |
| OpenAPI fetch | Spider → Ext. service | HTTPS GET | OpenAPI JSON spec |
| Admin portal | Admin → Portal | HTTPS | Browser; HTML form |
@@ -0,0 +1,56 @@
---
arc42: "4 — Solution Strategy"
status: stub
---

## 4.1 Technology Decisions

| Decision | Choice | Rationale |
|---|---|---|
| Language + framework | Java 21 + Quarkus 3.x | Compile-time safety; purpose-built for microservices; GraalVM native image first-class (see ADR-001) |
| Production binary | GraalVM Native Image | ~50–80MB RAM per service; ~100ms startup; fits Hetzner CX22 with headroom |
| Dev loop | `quarkus dev` (JVM mode) | Live reload + continuous testing; native build only for production image |
| Persistence | Hibernate ORM + Panache | Standard Quarkus persistence; Panache active record reduces boilerplate |
| BSM payload | PostgreSQL JSONB + `@JdbcTypeCode(SqlTypes.JSON)` | Flexible schema for optional BSM fields without a separate document store |
| Migrations | Liquibase | User's existing tool; first-class Quarkus extension; rollback + context support (see ADR-008) |
| Reverse proxy | Caddy | Auto-TLS with Let's Encrypt; minimal config (see ADR-003) |
| Portal rendering | HTMX + Qute | No JS build pipeline; type-safe templates (build-time error on missing variables); idiomatic Quarkus (see ADR-004) |
| Spider concurrency | Java 21 virtual threads (`@RunOnVirtualThread`) | Non-blocking HTTP checks without reactive programming complexity |
| HTTP client (Spider) | Quarkus REST Client Reactive | Declarative; integrates with Quarkus DI and fault tolerance extensions |
| Build tool | Maven 3.9 | Quarkus documentation is Maven-first; Quarkus Maven plugin handles native build |
| Testing | JUnit 5 + `@QuarkusTest` + RestAssured + WireMock | `@QuarkusTest` starts real application context; RestAssured for HTTP assertions; WireMock for external API mocks |

## 4.2 Architectural Patterns

| Pattern | Application |
|---|---|
| HATEOAS | `IndexResource` returns all navigation links; agents navigate from root without prior knowledge |
| Repository pattern | DB access in `ServiceRepository` (Panache); business logic in `RegistryService`; resources are thin |
| Compile-time DI | Quarkus CDI resolves all injection at build time; no runtime reflection surprises |
| Scheduler-based Spider | `@Scheduled(every="15m")` on `SpiderScheduler`; stateless per run; virtual threads for concurrent checks |
| Verification pipeline | Sequential O-level elevation (O-1 → sanctions → O-2 → O-3); each step is an independent CDI bean |
| API key on writes | Single shared key for MVP via custom Quarkus Security identity provider; per-registrant keys post-MVP |
| Fail-fast validation | BSM validated at boundary via Bean Validation (`@Valid` on JAX-RS resource); invalid BSM rejected with 400 + constraint violation details |

## 4.3 Quality Goal → Decision Mapping

| Quality Goal | Architecture Decision |
|---|---|
| Compile-time safety | Quarkus CDI + Bean Validation + Qute type-safe templates — errors at build time, not runtime |
| Queryability | HATEOAS root + capability search; JPQL + JSONB operator query in ServiceRepository |
| Liveness accuracy | SpiderScheduler every 15 min; `last_checked_at` + `uptime_30d_percent` exposed in response |
| Registration reliability | Idempotent `UPSERT` on endpoint URL; Liquibase migrations with rollback support |
| Security hygiene | HTTPS via Caddy; API key on write endpoints; no PII in logs; non-root container user |
| Solo maintainability | Docker Compose; `quarkus dev` for local loop; single JVM language across all services |

## 4.4 MVP Shortcuts (Accepted Technical Debt)

| Shortcut | Exit Path |
|---|---|
| O-4 / O-5 assigned manually | Accredited Verifier integration post-MVP |
| Single shared API key | Per-registrant key management + OAuth2 post-MVP |
| No rate limiting on read endpoints | Caddy rate_limit directive when traffic warrants |
| OpenAPI / MCP parsers validate presence only | Field-level spec comparison in Spider post-MVP |
| Single-region deployment | Hetzner multi-region + Managed Database post-funding |
| No billing | Commercial tier in Phase 2 |
| No CI/CD pipeline | GitHub Actions native build pipeline post-MVP |
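The "Verification pipeline" pattern in §4.2 (sequential O-level elevation with fail-fast steps) can be sketched framework-free. Step names follow the class names in §5.1; the check logic is stubbed with predicates, so this is a shape sketch rather than the actual verifiers:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Sketch of sequential O-level elevation: each step is an independent
// check, and elevation stops at the first failing step (fail-fast).
public class VerificationPipeline {

    public record Step(String name, Predicate<String> check) {}

    // Returns the names of the steps the service passed, in order.
    public static List<String> elevate(String serviceDomain, List<Step> steps) {
        List<String> passed = new ArrayList<>();
        for (Step s : steps) {
            if (!s.check().test(serviceDomain)) {
                break; // a failed step halts further elevation
            }
            passed.add(s.name());
        }
        return passed;
    }

    public static void main(String[] args) {
        List<Step> steps = List.of(
                new Step("O1DnsVerifier", d -> true),
                new Step("SanctionsScreener", d -> true),
                new Step("O2GleifVerifier", d -> false),   // elevation stops here
                new Step("O3HygieneVerifier", d -> true)); // never reached
        System.out.println(elevate("smarthub.example", steps));
        // [O1DnsVerifier, SanctionsScreener]
    }
}
```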
@@ -0,0 +1,135 @@
---
arc42: "5 — Building Block View"
status: stub
---

## 5.1 Level 1 — Maven Module Structure

```plantuml
@startuml modules
skinparam packageStyle rectangle

package "apix-common\n(plain Java 21 library)" as common {
  component [OLevel\nLivenessStatus\nBsmPayload\nServiceSummaryDto\nVerificationResult] as dtos
}

package "apix-verification\n(plain Java 21 library)" as verification {
  component [O1DnsVerifier\nO2GleifVerifier\nO2OpenCorporatesVerifier\nO3HygieneVerifier\nSanctionsScreener\nVerificationPipeline] as verifiers
}

package "apix-registry\n(Quarkus 3.x app)" as registry {
  component [IndexResource\nServiceResource\nRegisterResource] as res
  component [RegistryService\nVerificationOrchestrator] as svc
  component [ServiceRecord\nServiceRepository] as repo
  component [Liquibase\nmigrations] as lb
}

package "apix-spider\n(Quarkus 3.x app)" as spider {
  component [SpiderScheduler\nLivenessFetcher\nLivenessEvaluator\nOpenApiParser\nMcpParser] as spider_core
  component [SpiderServiceView\nSpiderRepository] as spider_repo
}

package "apix-portal\n(Quarkus 3.x app)" as portal {
  component [PortalResource\nAdminResource\nRegistryClient] as portal_res
  component [Qute templates] as templates
}

verification ..> common : depends on
registry ..> common : depends on
registry ..> verification : depends on
spider ..> common : depends on
portal ..> common : depends on

@enduml
```

## 5.2 Level 1 — Deployment View (Docker Compose)

```plantuml
@startuml deploy_l1
package "Docker Compose — Hetzner CX22" {
  component [Caddy\n:80 / :443] as caddy
  component [apix-registry\n:8180 (internal)] as registry
  component [apix-spider\n:8082 (internal only)] as spider
  component [apix-portal\n:8081 (internal)] as portal
  database [PostgreSQL 16\n:5432 (internal)] as db
}

cloud Internet
Internet --> caddy : HTTPS

caddy --> registry : /api/*
caddy --> portal : /*
registry --> db : Hibernate ORM (Liquibase owner)
spider --> db : Hibernate ORM (Liquibase disabled)
portal --> registry : REST Client (HTTP internal)
spider --> [External Services] : liveness checks (HTTPS)

@enduml
```

## 5.3 Level 2 — apix-registry Internals

```plantuml
@startuml level2_registry
package "apix-registry" {
  component [IndexResource] as r_index
  component [ServiceResource] as r_svc
  component [RegisterResource] as r_reg
  component [RegistryService] as svc
  component [VerificationOrchestrator] as orch
  component [ServiceRepository\n(Panache)] as repo
  component [Bean Validation\n(@Valid on JAX-RS)] as val
}

r_index --> svc
r_svc --> svc
r_reg --> val
r_reg --> svc
r_reg --> orch
svc --> repo
orch --> [VerificationPipeline\n(apix-verification)]
orch --> repo : persist VerificationResult

@enduml
```

## 5.4 Level 2 — apix-spider Internals

```plantuml
@startuml level2_spider
package "apix-spider" {
  component [SpiderScheduler\n@Scheduled(every=15m)] as sched
  component [LivenessFetcher\n@RestClient\n@RunOnVirtualThread] as fetcher
  component [LivenessEvaluator\n(pure logic)] as eval
  component [OpenApiParser] as oa
  component [McpParser] as mcp
  component [SpiderRepository\n(Panache)] as repo
}

sched --> repo : load services due for check
sched --> fetcher : dispatch per service
fetcher --> eval : HTTP status + response_ms
fetcher --> oa : spec URL
fetcher --> mcp : MCP URL
eval --> repo : write LivenessStatus
oa --> repo : write spec validation result
mcp --> repo : write spec validation result

@enduml
```

## 5.5 Component Responsibility Table

| Module / Component | Type | Responsibility |
|---|---|---|
| `apix-common` | Plain Java library | Shared enums and DTOs; no framework dependency; used by all modules |
| `apix-verification` | Plain Java library | O-level elevation pipeline; pure logic + external HTTP/DNS calls via `java.net.http`; no Quarkus context |
| `apix-registry` | Quarkus app | REST API (HATEOAS); BSM registration + validation; capability search; schema owner (Liquibase) |
| `apix-spider` | Quarkus app | Scheduled liveness checks; OpenAPI/MCP spec verification; writes liveness metrics to DB; independent lifecycle |
| `apix-portal` | Quarkus app | Human-readable web portal (HTMX + Qute); registration form; admin O-level view; calls registry via REST Client |
| `VerificationOrchestrator` | CDI bean (registry) | Bridge between Quarkus config injection and the plain-Java `VerificationPipeline`; persists results |
| `LivenessEvaluator` | Plain class (spider) | Pure function: HTTP status + response time → `LivenessStatus`; no I/O; testable without Quarkus |
| `ServiceRecord` | Panache entity (registry) | Full entity — all columns; schema owner |
| `SpiderServiceView` | Panache entity (spider) | Read/write subset of `services` table — only liveness columns; does not run migrations |
| PostgreSQL | Database | Single shared instance; registry owns schema; spider and portal are consumers |
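§5.5 describes `LivenessEvaluator` as a pure function from HTTP status and response time to a `LivenessStatus`, testable without Quarkus. A minimal sketch: the 2xx-within-5s "live" rule comes from §8.5 and the 5s fetch timeout from §6 Scenario 3; mapping the 3s WARNING threshold (§8.1) to DEGRADED is an assumption:

```java
// Pure-logic sketch of LivenessEvaluator. No I/O: the fetcher passes in
// the observed status code (null on timeout / connect failure) and the
// measured response time.
public class LivenessEvaluator {

    public enum LivenessStatus { LIVE, DEGRADED, UNREACHABLE }

    public static LivenessStatus evaluate(Integer statusCode, long responseMs) {
        if (statusCode == null || statusCode < 200 || statusCode >= 300) {
            return LivenessStatus.UNREACHABLE; // non-2xx, timeout, or no connection
        }
        if (responseMs > 5000) {
            return LivenessStatus.UNREACHABLE; // "live" means within 5 seconds (§8.5)
        }
        if (responseMs > 3000) {
            return LivenessStatus.DEGRADED;    // §8.1 WARNING threshold (assumed mapping)
        }
        return LivenessStatus.LIVE;
    }
}
```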
@@ -0,0 +1,94 @@
---
arc42: "6 — Runtime View"
status: stub
---

## Scenario 1 — Agent Queries Registry by Capability

```plantuml
@startuml sc1
actor Agent
participant "Caddy" as caddy
participant "API Service" as api
database "PostgreSQL" as db

Agent -> caddy : GET /api/services?capability=inventory.read&country=DE
caddy -> api : forward
api -> db : SELECT services WHERE capability MATCH AND country=DE AND liveness=live
db --> api : [ServiceRecord, ...]
api --> caddy : 200 OK — JSON array of BSM summaries with _links
caddy --> Agent : response

@enduml
```

## Scenario 2 — Service Registrant Submits BSM via Portal

```plantuml
@startuml sc2
actor Registrant
participant "Caddy" as caddy
participant "Portal Service" as portal
participant "API Service" as api
database "PostgreSQL" as db

Registrant -> caddy : POST /register (form submit)
caddy -> portal : forward
portal -> api : POST /api/register (BSM JSON + API key)
api -> api : validate BSM (Bean Validation)
alt BSM invalid
  api --> portal : 422 Unprocessable — validation errors
  portal --> Registrant : form with errors highlighted
else BSM valid
  api -> db : UPSERT service record
  db --> api : service_id
  api --> portal : 201 Created — service_id + status URL
  portal --> Registrant : "Registered. Status: pending O-level."
end

@enduml
```

## Scenario 3 — Spider Liveness Check

```plantuml
@startuml sc3
participant "Spider Scheduler" as sched
participant "Fetcher" as fetcher
participant "Evaluator" as eval
participant "DB Writer" as writer
database "PostgreSQL" as db
participant "External Service" as ext

sched -> db : SELECT services WHERE next_check <= NOW()
db --> sched : [ServiceRecord, ...]
loop for each service
  sched -> fetcher : check(service_url, spec_url)
  fetcher -> ext : GET service_url (timeout: 5s)
  ext --> fetcher : HTTP response
  fetcher -> eval : (status_code, response_time_ms)
  eval --> writer : liveness=live|degraded|unreachable, checked_at=NOW()
  writer -> db : UPDATE liveness_status WHERE service_id=X
end

@enduml
```

## Scenario 4 — Agent Navigates via HATEOAS

```plantuml
@startuml sc4
actor Agent
participant "API Service" as api

Agent -> api : GET /api/
api --> Agent : 200 OK — { "_links": { "search": "/api/services{?capability,country}", "register": "/api/register", "health": "/api/health" } }

Agent -> api : GET /api/services?capability=slot.book
api --> Agent : 200 OK — [{ "id": "...", "name": "...", "_links": { "self": "/api/services/{id}" } }, ...]

Agent -> api : GET /api/services/{id}
api --> Agent : 200 OK — full BSM record + liveness status

@enduml
```
@@ -0,0 +1,65 @@
---
arc42: "7 — Deployment View"
status: stub
---

## 7.1 Hetzner Deployment Diagram

```plantuml
@startuml deploy
node "Hetzner CX22\n(2 vCPU, 4GB RAM, Ubuntu 24.04)" as hetzner {
  component [Caddy\n:80, :443] as caddy
  component [apix-registry\n:8180 (internal)] as api
  component [apix-portal\n:8081 (internal)] as portal
  component [apix-spider\n(no exposed port)] as spider
  database [PostgreSQL 16\n:5432 (internal)] as db
  folder "Hetzner Volume\n(20GB)" as vol
}

cloud "Internet" {
  actor Agent
  actor Registrant
}

cloud "Let's Encrypt" as le

Agent --> caddy : HTTPS :443
Registrant --> caddy : HTTPS :443
caddy --> api
caddy --> portal
caddy <--> le : ACME cert renewal
api --> db
spider --> db
db --> vol : data persistence

@enduml
```

## 7.2 Environment Table

| Setting | Dev | Prod (Hetzner) |
|---|---|---|
| TLS | None (HTTP only) | Auto via Caddy + Let's Encrypt |
| DB | postgres:16 local container | postgres:16 container, data on Hetzner volume |
| Spider interval | 2 min (fast feedback) | 15 min |
| API key | `dev-key-insecure` | Strong random key, env var only |
| Log level | DEBUG | INFO |
| Port exposure | All ports exposed to host | Only :80, :443 via Caddy; all others internal |

## 7.3 Backup and Restore

- `backup.sh` runs via cron daily at 03:00 UTC
- Executes `pg_dump` into `/backup/apix_$(date +%Y%m%d).sql.gz`
- Backup directory mounted on Hetzner volume (separate from DB data volume)
- Retain last 7 dumps; older files are deleted by the script
- Restore: `gunzip -c apix_YYYYMMDD.sql.gz | psql` — documented in `infra/hetzner/RESTORE.md`

## 7.4 Domain and DNS

TODO: Confirm domain name (OQ-MVP-01).

Planned DNS setup:
- `registry.apix.dev` (or `index.botstandards.org`) → Hetzner VPS IP (A record)
- TTL: 300s initially for fast propagation during setup

Caddy will automatically obtain and renew the TLS certificate once the A record resolves to the server IP.
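The routing and auto-TLS described above fit in a very small Caddyfile. A sketch only: the site address is a placeholder until OQ-MVP-01 (domain) is decided, and upstream names/ports follow §5.2. The `rate_limit` directive referenced in §8.3 requires a third-party Caddy module and is omitted here:

```caddyfile
# Caddyfile (sketch) — Caddy obtains and renews the certificate
# automatically once DNS resolves to this host.
registry.apix.dev {
    handle /api/* {
        reverse_proxy apix-registry:8180
    }
    handle {
        reverse_proxy apix-portal:8081
    }
}
```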
@@ -0,0 +1,127 @@
|
||||
---
|
||||
arc42: "8 — Crosscutting Concepts"
|
||||
status: stub
|
||||
---
|
||||
|
||||
## 8.1 Logging
|
||||
|
||||
- Format: structured JSON in production (`python-json-logger`); human-readable in dev
|
||||
- Log levels: DEBUG (dev only), INFO (operational events), WARNING (recoverable anomalies), ERROR (failures needing attention)
|
||||
- What is logged per component:
|
||||
|
||||
| Component | INFO events | WARNING events | ERROR events |
|
||||
|---|---|---|---|
|
||||
| API | request method+path+status+duration | validation failure (BSM) | DB connection failure |
|
||||
| Spider | check start/end, service_id, liveness result, duration | response time > 3s | fetch timeout, DB write failure |
|
||||
| Portal | form submission received | — | API call failure |
|
||||
|
||||
- **Never log:** email addresses, API keys, DB credentials, any PII
|
||||
- Request IDs: generate UUID per request, include in all log lines for that request
|
||||
|
||||
## 8.2 Error Handling

- All API errors return structured JSON: `{ "error": "string", "detail": "string", "code": "APIX-ERR-XXX" }`
- HTTP status codes:
  - `400` — malformed request (not JSON, missing content-type)
  - `401` — missing or invalid API key on write endpoints
  - `404` — service not found
  - `422` — BSM validation failure (Bean Validation; include field-level errors)
  - `429` — rate limit exceeded
  - `500` — internal server error (never expose a stack trace to the client)
- Spider errors are logged but do not crash the scheduler; a failed service is marked `liveness=unreachable`

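A minimal sketch of the payload shape and the status mapping described above. The concrete `APIX-ERR-*` code values and the class names are hypothetical; only the JSON shape and the status codes come from this section:

```java
/** Sketch of the structured error contract from 8.2; code values are illustrative. */
public class ApiError {
    /** The JSON error body every failing API call returns. */
    public record Payload(String error, String detail, String code) {
        public String toJson() {
            return String.format("{\"error\":\"%s\",\"detail\":\"%s\",\"code\":\"%s\"}",
                    error, detail, code);
        }
    }

    /** Maps an APIX error code to its HTTP status; anything unknown is a 500. */
    public static int status(String code) {
        return switch (code) {
            case "APIX-ERR-400" -> 400; // malformed request
            case "APIX-ERR-401" -> 401; // missing/invalid API key
            case "APIX-ERR-404" -> 404; // service not found
            case "APIX-ERR-422" -> 422; // BSM validation failure
            case "APIX-ERR-429" -> 429; // rate limit exceeded
            default -> 500;             // internal error; no stack trace leaves the server
        };
    }
}
```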
## 8.3 Security Hygiene (MVP-grade)

| Control | Implementation | What this is NOT |
|---|---|---|
| HTTPS | Caddy auto-TLS; HTTP redirects to HTTPS | Not HSTS with long max-age (add post-MVP) |
| Write endpoint auth | `X-API-Key` header checked against env var | Not per-user keys (add post-MVP) |
| Rate limiting on writes | Caddy `rate_limit` directive: 10 req/min per IP on `/api/register` | Not full DDoS protection |
| No secrets in Git | `.env.example` only; `.env` in `.gitignore` | Not secret-scanning CI (add post-MVP) |
| No PII in logs | Enforced by convention; the `registrant_email` field is never logged | Not automated PII detection |
| Non-root containers | All Dockerfiles use `USER appuser` | Not a read-only filesystem (add post-MVP) |

## 8.4 BSM Validation

- Validation layer: Hibernate Validator (Bean Validation) constraints on the `BsmPayload` DTO in `apix-common` (see ADR-001/ADR-009)
- Required fields (per Internet-Draft): `name`, `version`, `description`, `capabilities[]`, `endpoint`, `contact_email`
- Optional fields validated if present: `olevel`, `slevel`, `pricing`, `regulatory`
- On validation failure: `422` with a field-level error list
- Re-registration (same `endpoint` URL) is treated as an update (UPSERT); the BSM version must be >= the existing version
- Schema version stored with each record; enables future migration

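The required-field rule that backs the `422` response can be sketched as a pure check. This is a hedged illustration, assuming the submitted BSM has already been parsed into a map; the class and error-message wording are hypothetical, the field list is the one above:

```java
import java.util.List;
import java.util.Map;

/** Sketch of the required-field check behind 8.4's 422 response. */
public class BsmRequiredFields {
    static final List<String> REQUIRED =
        List.of("name", "version", "description", "capabilities", "endpoint", "contact_email");

    /** One error entry per missing or blank required field; empty list = core BSM complete. */
    public static List<String> missing(Map<String, Object> bsm) {
        return REQUIRED.stream()
                .filter(f -> {
                    Object v = bsm.get(f);
                    return v == null || (v instanceof String s && s.isBlank());
                })
                .map(f -> f + ": required field is missing or blank")
                .toList();
    }
}
```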
## 8.5 Liveness Check

- **"Live"** = HTTP 2xx response within 3 seconds from the registered `endpoint` URL
- **"Degraded"** = HTTP 2xx but response time > 3 seconds (up to the 5-second fetch timeout)
- **"Unreachable"** = timeout (> 5 seconds), connection refused, or non-2xx response
- Status transitions: any state → any state on each check (no hysteresis in MVP)
- Check frequency: 15 min in prod, 2 min in dev
- `last_checked_at` timestamp always exposed in API response

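One consistent reading of the three definitions above, as a pure classification function. A hedged sketch (class and method names are illustrative, not the Spider's actual code): a 2xx under 3 s is live, a slower 2xx is degraded, and everything else, including the 5 s timeout, is unreachable.

```java
/** Sketch of the Spider's status classification; thresholds in ms mirror 8.5. */
public class Liveness {
    /** httpStatus is null when the fetch produced no response at all (timeout / refused). */
    public static String classify(Integer httpStatus, long elapsedMs) {
        boolean timedOut = httpStatus == null || elapsedMs > 5000;  // 5 s fetch timeout
        if (timedOut || httpStatus < 200 || httpStatus >= 300) {
            return "UNREACHABLE";                                   // timeout, refused, or non-2xx
        }
        return elapsedMs > 3000 ? "DEGRADED" : "LIVE";              // 2xx: response time decides
    }
}
```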
## 8.6 Idempotency

- `POST /api/register` with the same `endpoint` URL: UPSERT (update BSM, reset liveness to `pending`)
- Spider re-check: always overwrites the previous liveness status — idempotent by design
- DB migrations (Liquibase): each changeset is forward-only; re-running skips already-applied changesets (Liquibase tracks them in the `DATABASECHANGELOG` table)

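Combining the UPSERT rule above with the version guard from 8.4 ("BSM version must be >= existing"), the register decision can be sketched as follows. The enum, the numeric dotted-version comparison, and the helper names are illustrative assumptions:

```java
/** Sketch of the idempotent register decision: same endpoint URL = update, new URL = insert. */
public class RegisterUpsert {
    public enum Outcome { INSERT, UPDATE, REJECT_OLDER_VERSION }

    /** existingVersion is null when the endpoint URL has never been registered. */
    public static Outcome decide(String existingVersion, String submittedVersion) {
        if (existingVersion == null) return Outcome.INSERT;  // endpoint not seen before
        return compareDotted(submittedVersion, existingVersion) >= 0
                ? Outcome.UPDATE                             // UPSERT; liveness reset to pending
                : Outcome.REJECT_OLDER_VERSION;              // 8.4: version must be >= existing
    }

    /** Numeric compare of dotted versions, e.g. "1.10.0" > "1.9.2". */
    static int compareDotted(String a, String b) {
        String[] xs = a.split("\\."), ys = b.split("\\.");
        for (int i = 0; i < Math.max(xs.length, ys.length); i++) {
            int x = i < xs.length ? Integer.parseInt(xs[i]) : 0;
            int y = i < ys.length ? Integer.parseInt(ys[i]) : 0;
            if (x != y) return Integer.compare(x, y);
        }
        return 0;
    }
}
```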
## 8.7 Internationalisation (i18n)

See ADR-013 for the full decision and rationale.

**Locale resolution order (highest priority first):**

1. `apix-locale` cookie (set by the language switcher via `POST /locale`)
2. `Accept-Language` request header (browser preference)
3. Default: `en` (English)

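The three-step resolution order, plus the `POST /locale` whitelist check described under "Language switcher" below, can be sketched as pure functions. A hedged illustration (the class and helpers are assumptions; the supported-locale list and priority order come from this section), using the JDK's `Locale.LanguageRange` to parse the `Accept-Language` header:

```java
import java.util.List;
import java.util.Locale;

/** Sketch of 8.7's locale resolution: cookie → Accept-Language → "en". */
public class LocaleResolution {
    static final List<String> SUPPORTED = List.of("en", "de");

    /** cookieValue: the apix-locale cookie (null if absent); acceptLanguage: raw header (null if absent). */
    public static String resolve(String cookieValue, String acceptLanguage) {
        if (cookieValue != null && SUPPORTED.contains(cookieValue)) return cookieValue; // 1. cookie wins
        if (acceptLanguage != null) {                                                   // 2. browser preference
            for (Locale.LanguageRange r : Locale.LanguageRange.parse(acceptLanguage)) {
                String lang = r.getRange().split("-")[0];
                if (SUPPORTED.contains(lang)) return lang;
            }
        }
        return "en";                                                                    // 3. default
    }

    /** POST /locale guard: only whitelisted values are accepted. */
    public static boolean isValidLang(String lang) {
        return SUPPORTED.contains(lang);
    }
}
```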
**String externalisation:**
- All user-visible strings in Qute templates are referenced via `{inject:msg.<key>}` — not hardcoded
- `Messages.java` (`@MessageBundle`) declares all keys; the Quarkus compiler verifies usage at build time
- `messages.properties` — English; `messages_de.properties` — German; adding a locale requires only a new properties file
- Keys follow the pattern `<section>.<element>` (e.g. `nav.register`, `service.oLevel.label`, `admin.pending.title`)

**Help / tour content:**
- Tour titles, step headings, and step body text are defined in `HelpContentService` using `Messages` keys, resolved to the request locale
- The resolved tour data is serialized to JSON and embedded in each page as `window.PAGE_TOURS` + `window.PAGE_HELP` — no client-side translation lookup at runtime
- Adding a translated tour step requires only adding the key to `Messages.java` and both properties files

**What is not localised in MVP:**
- Error messages from Bean Validation (returned as-is in EN; acceptable for API-layer errors)
- Log messages (always EN)
- BSM content submitted by registrants (stored as-is; not translated)

**Language switcher:**
- `<form method="post" action="/locale">` with `<input name="lang" value="de|en">` in the base layout
- `POST /locale`: validates `lang` against `["en", "de"]`; sets the `apix-locale` cookie (path `/`, SameSite=Lax, HttpOnly); redirects to the `Referer` header
- The language switcher is rendered in the base layout and is therefore available on every portal page

## 8.8 Human-Readable Service Detail (Index Level 2 Entry)

The machine-readable service entry (`GET /api/services/{id}`, returning JSON) and the human-readable portal page (`GET /services/{id}`, returning HTML) represent the same data. The HTML version is designed for a human making a go/no-go decision about using a service — not for a machine parsing a schema.

**Design principle:** answer four questions in order, above the fold where possible:

1. **Who is this?** — name, description
2. **Can I trust them?** — O-level with plain-English explanation, liveness uptime, last-verified date
3. **What exactly does it do?** — capabilities, pricing
4. **How do I call it?** — endpoint, spec links, example snippet

**Trust level presentation:**

- O-level and S-level are never shown as bare codes (O-2, S-1) to human visitors — always rendered as `badge + level name + 2-sentence explanation`
- The explanation is locale-resolved from `Messages` (keys `service.oLevel.N.description`) — not hardcoded in the template
- O-level badge color conveys the confidence tier at a glance: grey (O-0), blue (O-1/O-2/O-3), green (O-4/O-5)
- A "Reference entry by BSF" badge is shown prominently when `isReferenceEntry=true` — it prevents a human from mistaking a BSF-registered third-party service for one that has self-registered

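The badge-color rule above is a one-liner worth pinning down. A hedged sketch (the CSS class names are illustrative assumptions; the tier boundaries are the ones stated above):

```java
/** Sketch of 8.8's badge-color rule: grey O-0, blue O-1..O-3, green O-4..O-5. */
public class OLevelBadge {
    public static String colorClass(int oLevel) {
        if (oLevel <= 0) return "badge-grey";  // O-0: unverified
        if (oLevel <= 3) return "badge-blue";  // O-1..O-3: automated verification tiers
        return "badge-green";                  // O-4..O-5: human-reviewed / audited
    }
}
```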
**Liveness presentation:**

- Status displayed as a colored dot + label (LIVE / DEGRADED / UNREACHABLE) — not as an enum string
- Uptime percentage and average response time are formatted as human values ("98.4%", "142 ms") computed by `ServiceDetailViewModelFactory`, not raw floats
- Last-checked timestamp shown relative ("8 minutes ago") with the absolute ISO date in a `title`-attribute tooltip — humans read relative time faster; machines read absolute

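The human-value formatting above can be sketched as three small pure functions. This is a hedged illustration: the thresholds and wording are assumptions, and the real logic lives in `ServiceDetailViewModelFactory`.

```java
import java.time.Duration;
import java.time.Instant;

/** Illustrative sketch of the view-model formatting described in 8.8. */
public class HumanFormat {
    /** "8 minutes ago"-style relative timestamp for last_checked_at. */
    public static String relative(Instant lastChecked, Instant now) {
        Duration d = Duration.between(lastChecked, now);
        if (d.toMinutes() < 1) return "just now";
        if (d.toHours() < 1)   return d.toMinutes() + " minutes ago";
        if (d.toDays() < 1)    return d.toHours() + " hours ago";
        return d.toDays() + " days ago";
    }

    /** 0.9837 → "98.4%" — one decimal, human-readable, never a raw float. */
    public static String uptime(double fraction) {
        return String.format(java.util.Locale.ROOT, "%.1f%%", fraction * 100);
    }

    /** 142.3 → "142 ms". */
    public static String responseTime(double avgMs) {
        return Math.round(avgMs) + " ms";
    }
}
```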
**Separation of concerns:**

- `ServiceDetailViewModelFactory` (portal module) owns all human-readable computation: relative timestamps, color class selection, O-level description lookup, GLEIF LEI URL construction
- The Qute template (`service.html`) contains no business logic — it renders what the view model provides
- The registry API is not changed for this feature; the portal fetches the existing full-detail endpoint and enriches the response in the portal layer

**Integration section (collapsible):**

- The raw endpoint URL and a minimal HTTP example are provided for developers who discover the service through the portal rather than via agent query
- A link to `GET /api/services/{id}` (machine-readable JSON) is included — a developer can use the portal as a discovery UI and then switch to the machine API
- The collapsible is closed by default to keep the human trust signals prominent

@@ -0,0 +1,397 @@

---
arc42: "9 — Architecture Decisions"
status: stub
---

## ADR-001: Java 21 + Quarkus 3.x over Python/FastAPI or Spring Boot

**Status:** Revised (was Python + FastAPI)
**Context:** The MVP targets a microservice architecture deployed as container-native services. Two competing concerns: development speed (solo, LLM-assisted) and production reliability (trust infrastructure, compile-time safety). Python + FastAPI was the original choice for speed; on review, Java 21 + Quarkus fits the microservice target better, and Quarkus's dev mode removes most of the iteration-speed penalty.

**Decision:** Java 21 + Quarkus 3.x with GraalVM Native Image for production builds.

**Stack breakdown:**

| Layer | Technology | Replaces |
|---|---|---|
| REST API | RESTEasy Reactive (JAX-RS) | FastAPI routers |
| Persistence | Hibernate ORM + Panache | SQLAlchemy |
| Migrations | Liquibase | Alembic |
| Validation | Hibernate Validator (Bean Validation) | Pydantic |
| Portal templates | Qute | Jinja2 |
| Spider scheduler | Quarkus Scheduler | APScheduler |
| HTTP client (Spider) | Quarkus REST Client Reactive | aiohttp |
| Health checks | SmallRye Health (`/q/health`) | Manual `/health` route |
| Metrics | Micrometer | Manual |
| Security (API key) | Quarkus Security custom identity provider | Custom middleware |
| Build | Maven 3.9 | pip / uvicorn |
| Testing | JUnit 5 + `@QuarkusTest` + RestAssured | pytest |
| Production binary | GraalVM Native Image | Python interpreter |
| Dev loop | `quarkus dev` (JVM mode, live reload, continuous testing) | `uvicorn --reload` |

**Rationale:**
- **Compile-time safety:** Quarkus resolves dependency injection, validation, and REST binding at build time — not at runtime via reflection. Errors that would surface as `500` at runtime in Python surface as build failures in Quarkus.
- **Purpose-built for microservices:** Quarkus's design assumption is container-native, independently deployable services. Spring Boot was designed for monoliths first and microservices second; Quarkus is the reverse.
- **Native image quality:** GraalVM Native Image works cleanly with Quarkus because Quarkus uses no runtime reflection by default. Spring Boot's native image support requires reflection hints for anything that Spring's runtime proxy model touches. Quarkus native: ~50–80MB RAM per service, ~100ms startup.
- **Dev mode removes the speed penalty:** `quarkus dev` gives instant live reload, continuous test execution, and the Dev UI — the iteration loop is comparable to Python for day-to-day development. The native build only runs for the production image.
- **Java 21 virtual threads:** Reactive-style concurrency (needed for the Spider's async HTTP checks) without the reactive programming model's complexity. `@RunOnVirtualThread` on the Spider scheduler gives non-blocking I/O without Mutiny/Reactor boilerplate.
- **Liquibase:** Carsten's existing tool; Quarkus has a first-class Liquibase extension — no migration cost.

**Development model:** Code and test in JVM mode (`quarkus dev`). CI builds the native image. The production container runs the native binary.

**Rejected alternatives:**
- Python + FastAPI: dynamic typing; no compile-time safety; memory/startup acceptable but no native image available; retained for the SDK (client-side), not server-side
- Spring Boot 3.x + GraalVM Native: native image works but requires reflection hints for Spring's proxy model; more operational complexity than Quarkus native for the same result
- Go: fastest native binary; no JVM; but Carsten's background is JVM-based; no meaningful advantage over Quarkus native for this use case

**Consequence for SDK:** The apix-sdk-python and apix-sdk-typescript remain Python and TypeScript — the server being Java has no impact on client SDK language.

---

## ADR-002: PostgreSQL + JSONB over MongoDB

**Status:** Decided
**Context:** BSM is a structured document with a fixed core and a flexible optional section (`regulatory`, `pricing`). Registry operations are relational (search by capability, filter by country, join with liveness status).
**Decision:** PostgreSQL 16 with a JSONB column for the BSM payload.
**Rationale:** Relational integrity where it matters (`service_id`, `liveness_status`, timestamps are typed columns). JSONB gives BSM payload flexibility without a separate document store. A single database engine to maintain. Liquibase manages migrations (see ADR-008).
**Rejected alternatives:**
- MongoDB: no relational joins; weaker schema migration story; adds operational complexity for no benefit

---

## ADR-003: Caddy over nginx or Traefik

**Status:** Decided
**Context:** Need TLS termination and a reverse proxy. Solo operator, no DevOps team.
**Decision:** Caddy
**Rationale:** Automatic TLS via Let's Encrypt with zero configuration. The Caddyfile is ~10 lines vs an nginx config of 50+. Traefik adds Docker-label complexity not needed for a two-service setup.
**Rejected alternatives:**
- nginx: more control, more config; cert renewal needs a certbot cron; higher solo maintenance burden
- Traefik: good for dynamic service discovery; overkill for a fixed two-service Docker Compose

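To make "zero configuration" concrete, a minimal sketch of what the Caddyfile could look like. This is a hedged illustration, not the project's actual Caddyfile: the domain comes from section 7.4, while the upstream service name and port are assumptions.

```caddyfile
# Hedged sketch — upstream name/port are assumptions, not the real config.
registry.apix.dev {
    # Caddy obtains and renews the Let's Encrypt certificate for this host automatically;
    # plain-HTTP requests are redirected to HTTPS by default.
    reverse_proxy apix_registry:8080
}
```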
## ADR-004: HTMX + Qute over React/Vue for Portal

**Status:** Revised (was HTMX + Jinja2 / FastAPI — updated for the Quarkus stack)
**Context:** The portal is admin-grade, not consumer-grade. Primary users are registrants (submit BSM) and the admin (assign O-levels). No real-time requirements. The stack is now Quarkus, so Jinja2 (Python) is not available.
**Decision:** HTMX + Qute templates served from the Quarkus portal application.
**Rationale:** Qute is Quarkus's native, type-safe templating engine. Type-safe means template errors surface at build time, not at render time — consistent with the compile-time-safety rationale of ADR-001. No JS build pipeline. No npm. HTMX handles dynamic form behaviour (inline validation, partial page updates) without a JS framework. Template hot-reload works in `quarkus dev` mode.
**Qute specifics:** Templates live in `src/main/resources/templates/`. Type-safe binding via `@CheckedTemplate` — the Java compiler verifies that template variables exist and have the declared types. Significantly safer than Jinja2's runtime-only variable resolution.
**Rejected alternatives:**
- React/Vue: overkill for an admin portal; adds build-pipeline maintenance; an SPA adds complexity without user benefit
- Freemarker / Thymeleaf: both work with Quarkus but are not type-safe; Qute is the idiomatic Quarkus choice
- Jinja2: Python-only; not available in Quarkus

---

## ADR-005: Automated O-1 / O-2 / O-3 Verification in MVP; O-4 / O-5 Post-MVP

**Status:** Decided
**Context:** The trust model has six organisation levels (O-0 to O-5). The original assumption was to defer all automated verification to Phase 2. On review: O-1, O-2, and O-3 are achievable in the MVP timeframe and are essential for the PoC to credibly demonstrate the trust model — not just describe it. O-4 and O-5 require human reviewers (Accredited Verifiers) and are genuinely post-MVP.

**Decision:** Implement automated verification for O-1, O-2, and O-3 in the MVP. O-4 and O-5 remain manual / post-MVP.

| Level | Mechanism | External dependency | MVP? |
|---|---|---|---|
| O-1 Identity Verified | DNS TXT record proof of domain ownership; business email MX check | Standard DNS resolver — no external API | Yes |
| O-2 Legal Entity Verified | GLEIF REST API (primary); OpenCorporates API (fallback for registrants without LEI) | GLEIF (free, public); OpenCorporates (free tier) | Yes |
| O-2 pre-condition | Sanctions screening against OFAC SDN + EU consolidated + UN SC lists | Public datasets; downloaded and cached locally; no live API call at check time | Yes |
| O-3 Hygiene Verified | HTTP fetch of `/.well-known/security.txt`; DNS DMARC + SPF lookup; reachability of Privacy Policy + ToS URLs | HTTP fetcher + DNS — no external API | Yes |
| O-4 Operationally Verified | Accredited Verifier assessment — human review | Accredited Verifier network | No |
| O-5 Audited | Third-party audit certificate (SOC 2 / ISO 27001) | Audit body | No |

**Rationale:** O-1 and O-3 require only DNS + HTTP — zero external API dependencies, implementable in hours. O-2 via GLEIF is one REST call against a well-documented public API. Sanctions screening uses locally cached public datasets — no live API dependency at verification time, only at download time. The combined effort is ~1–2 weeks of focused work, and the result is a PoC that demonstrates the trust model end-to-end rather than describing it.

**Consequences:**
- Configuration gains: `GLEIF_API_URL`, `OPENCORPORATES_API_KEY`, `SANCTIONS_CACHE_PATH`, `SANCTIONS_REFRESH_INTERVAL_DAYS`
- New verification module (`apix-verification`, see ADR-009) with 6 components (C-31 to C-36)
- New Liquibase changeset: `verification_status`, `olevel`, `olevel_checked_at`, `sanctions_cleared` columns
- Verification tests (C-37 to C-41) use mocked external APIs — no live network calls in the test suite
- The admin portal still shows pending verifications; the admin can override any O-level manually (important for edge cases and for O-4/O-5 placeholders)

**Rejected alternative:** Fully manual O-level assignment. Rejected because the PoC then could not demonstrate automated trust elevation — the most important differentiator from a static directory.

---

## ADR-006: Two-VPS Hetzner Deployment (APIX Application + Gitea)

**Status:** Revised (was single VPS — updated for the Gitea separation per ADR-010)
**Context:** Gitea requires dedicated hosting separate from the APIX application to avoid coupled failure domains. Code hosting and application hosting failing together during a deployment is an unacceptable blast radius.

**Decision:** Two Hetzner CX22 VPS instances, both in FSN1 (Falkenstein, Germany):

| VPS | Purpose | Services |
|---|---|---|
| `apix-app` | APIX application (Docker Swarm) | registry, spider, portal, PostgreSQL, Caddy |
| `apix-gitea` | Code + CI/CD (Docker Compose) | Gitea, Caddy, act_runner (JVM), act_runner (GraalVM native) |

**Rationale:** Decoupled failure domains. A deployment to `apix-app` cannot affect Gitea; a Gitea restart cannot affect the running registry. The GraalVM native-build runner runs on `apix-gitea` — it is CPU-intensive but isolated from the running application services.

**Cost:** 2 × CX22 ≈ €8.70/month. Acceptable for the PoC.

**Backup:** Both VPS: `pg_dump` (apix-app) and the Gitea data volume (apix-gitea) are backed up daily to their respective Hetzner volumes.

**Exit path:** Post-funding: Hetzner Managed Database for HA PostgreSQL; multi-region Gitea replication; a dedicated build runner on a larger instance.

---

## ADR-007: Register Verification APIs as Reference APIX Entries

**Status:** Decided
**Context:** APIX uses external public APIs (GLEIF, OpenCorporates, EU Sanctions list, Companies House) as part of its automated O-level verification pipeline. These APIs are themselves exactly the kind of atomic, independently callable services that the APIX model is designed to make discoverable. Registering them in APIX serves two purposes: (1) it demonstrates the model by making the verification infrastructure itself discoverable, and (2) it creates a natural outreach opportunity — once registered as reference entries, BSF can invite the operators to self-upgrade their registrations.

**Decision:** BSF registers GLEIF, OpenCorporates, EU Sanctions (eu-sanctions.io or an equivalent public endpoint), and Companies House UK as reference APIX entries at O-0/O-1 during the MVP build (Week 5). These are registered by BSF as the registrant, clearly labelled as "reference registration — operator not yet self-registered."

| Service | Capability tag | Initial O-level | Target O-level (post-outreach) |
|---|---|---|---|
| GLEIF API | `legal-entity.lookup` | O-1 (domain verified) | O-2+ if GLEIF self-registers |
| OpenCorporates API | `company.lookup` | O-1 | O-2+ if OC self-registers |
| EU Sanctions endpoint | `sanctions.screen` | O-1 | O-2+ |
| Companies House UK | `org.verify.uk` | O-1 | O-2+ |

**Rationale:** These registrations cost BSF one afternoon of work and produce four real, meaningful entries in the registry. They also demonstrate recursion: APIX verifies organisations using services that are themselves registered in APIX. This is a strong narrative for the STF application and for founding member pitches. The outreach to GLEIF and Companies House to self-upgrade their registrations is also a legitimate business development activity.

**Constraints:** BSF's Terms of Service must explicitly permit third-party reference registrations at O-0. An admin override allows BSF to mark these entries as "reference — not operator-maintained" to avoid misleading consuming agents about SLA.

**Rejected alternative:** Only register self-operated services. Rejected because it leaves the registry with fewer entries and misses the recursive demonstration value.

---

## ADR-009: Maven Multi-Module Project with Separated Scheduler

**Status:** Decided
**Context:** The registry API and the Spider scheduler are distinct concerns with different scaling and deployment characteristics. The API must be responsive at all times; the scheduler runs on a fixed interval and is latency-insensitive. Bundling them into a single deployable couples their release cycles and prevents independent scaling. Similarly, shared types (enums, DTOs) and the verification pipeline should not be duplicated across services.

**Decision:** Maven multi-module project with five modules:

| Module | Type | Depends on | Responsibility |
|---|---|---|---|
| `apix-common` | Plain Java 21 library | — | Shared enums (`OLevel`, `LivenessStatus`), DTOs (`BsmPayload`, `ServiceSummaryDto`), `VerificationResult` record |
| `apix-verification` | Plain Java 21 library | `apix-common` | O-level elevation pipeline; uses `java.net.http.HttpClient` and `dnsjava`; no Quarkus dependency — fully testable without a Quarkus context |
| `apix-registry` | Quarkus 3.x app | `apix-common`, `apix-verification` | REST API (HATEOAS), BSM registration, capability search, Liquibase migrations (schema owner) |
| `apix-spider` | Quarkus 3.x app | `apix-common` | `@Scheduled` liveness checks, OpenAPI/MCP spec verification; connects to the shared DB; does **not** run migrations |
| `apix-portal` | Quarkus 3.x app | `apix-common` | HTMX + Qute web portal; calls `apix-registry` via REST Client; admin O-level assignment |

**Rationale:**
- **Scheduler independence:** The Spider can be restarted, redeployed, or scaled independently of the API. A Spider bug cannot take down the registry. Quarkus `@Scheduled` inside the registry would tie their lifecycles together.
- **Plain Java library for verification:** `apix-verification` uses `java.net.http.HttpClient` (Java 11+) and `dnsjava` — no Quarkus runtime needed. This means all verification logic is unit-testable with plain JUnit, with no `@QuarkusTest` overhead. The registry wraps the pipeline in a CDI bean (`VerificationOrchestrator`) that injects Quarkus config and calls the library.
- **Schema ownership:** The registry runs Liquibase at startup. The Spider connects to the same PostgreSQL instance but has Liquibase disabled (`quarkus.liquibase.migrate-at-start=false`). The Spider has its own `ServiceRecord` entity mapped to the same table — it only reads `endpoint_url` and writes `liveness_status`, `last_checked_at`, `uptime_30d_percent`, `avg_response_ms`, `consecutive_failures`.
- **Parent POM as BOM:** The Quarkus BOM imported in the parent manages all transitive version alignment. Each Quarkus module inherits plugin config via `<parent>`. Plain Java modules only inherit the `maven-compiler-plugin` config (Java 21 release).

**Consequence for Docker Compose:** Three independently deployable containers (registry, spider, portal) + PostgreSQL + Caddy. Each has its own Dockerfile with a multi-stage GraalVM native build. On CX22 (4GB): 3 × ~80MB native = ~240MB + PostgreSQL ~256MB + Caddy ~20MB ≈ 516MB total — comfortable headroom.

**Rejected alternative:** A single Quarkus application with the Spider as a `@Scheduled` bean inside the registry. Rejected because it couples API and scheduler lifecycles, prevents independent scaling, and violates the single-responsibility principle that microservices are meant to enforce.

---

## ADR-008: Liquibase over Flyway for Database Migrations

**Status:** Decided
**Context:** Database schema migrations are required for both the initial schema and the incremental additions (verification status columns, liveness metrics). Both Liquibase and Flyway have first-class Quarkus extensions. The question is which fits the microservice context and the team's existing knowledge better.
**Decision:** Liquibase with XML changesets.
**Rationale:** Carsten already knows Liquibase — this is the primary decision factor for a solo MVP. The operational risk of learning a new migration tool while simultaneously building on a new framework (Quarkus) is not justified by Flyway's marginal simplicity advantage. Liquibase's rollback support, changeset contexts (dev vs prod), and precondition checks provide more control for a trust infrastructure that must handle schema changes carefully. Quarkus's `quarkus-liquibase` extension runs changelogs at startup automatically — an identical developer experience to Flyway.
**Consequence:** Changesets live in `src/main/resources/db/changelog/`. The master changelog is `db.changelog-master.xml`; individual changesets live in `db/changelog/changes/`.
**Rejected alternative:** Flyway. More common in the microservice community; simpler mental model; SQL-first. Rejected because the switching cost (learning a new tool under time pressure) outweighs the simplicity benefit for a solo developer who already knows Liquibase.

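For illustration, a hedged sketch of what one XML changeset in `db/changelog/changes/` could look like. The changeset id, author, and the table/column names are assumptions, not the project's actual changelog; the structure follows standard Liquibase XML:

```xml
<!-- Hedged sketch of a single changeset under ADR-008's layout; names are illustrative. -->
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-latest.xsd">

    <!-- Forward-only; Liquibase records the applied id in DATABASECHANGELOG and skips it on re-run -->
    <changeSet id="008-add-sunset-at" author="carsten">
        <addColumn tableName="service_versions">
            <column name="sunset_at" type="timestamptz"/>
        </addColumn>
    </changeSet>
</databaseChangeLog>
```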
## ADR-010: Self-Hosted Gitea as Primary; GitHub as Automated Push Mirror

**Status:** Decided
**Context:** The project requires code hosting and a Docker container registry for CI/CD artifacts. Options: GitHub (public, US-hosted), Gitea self-hosted (European sovereignty), GitLab self-hosted (heavier). The BSF's sovereignty narrative demands European-hosted, non-commercially-controlled infrastructure. GitHub remains relevant for community visibility (STF application, IETF credibility, developer adoption).

**Decision:** Gitea self-hosted on a dedicated Hetzner CX22 VPS as the authoritative remote. GitHub is a read-only push mirror, updated automatically by Gitea on every push to main. The Gitea Container Registry hosts all Docker images.

**Infrastructure:**
- Hetzner CX22 (FSN1, Germany) dedicated to Gitea — separate from the APIX application VPS
- Gitea runs in Docker Compose on the Gitea VPS (Gitea + Caddy for TLS)
- Gitea Container Registry enabled (OCI-compatible; images pushed as `gitea.botstandards.org/<org>/<module>:<tag>`)
- SQLite for Gitea's own database — solo team, no concurrent write pressure; eliminates a second PostgreSQL instance on the Gitea VPS
- Gitea Actions enabled; act_runner installed on the Gitea VPS for JVM-mode builds (fast, low CPU)
- Native image builds run on a separate Gitea Actions runner on the APIX VPS (scheduled, not on every push — CPU-intensive)
- GitHub push mirror: configured in the Gitea repository settings; pushes to `github.com/bot-standards-foundation/<repo>` on every main-branch push; the GitHub repo is read-only for external contributors (PRs accepted via GitHub, mirrored to Gitea)

**Rationale:**
- All code, all build artifacts, and all CI pipelines run on European infrastructure under BSF control
- Gitea Actions uses GitHub Actions-compatible YAML — no migration cost if ever moving to GitHub Actions
- Container images are pulled from the Gitea registry at deploy time — no DockerHub dependency
- The GitHub mirror preserves community discoverability without surrendering control
- SQLite for Gitea removes a second PostgreSQL instance; Gitea's write load (a solo developer) is trivially within SQLite's capacity

**Rejected alternatives:**
- GitHub as primary: contradicts the sovereignty narrative; US-controlled; acceptable as a mirror only
- GitLab self-hosted: heavier resource requirements; Gitea is sufficient for one developer

---

## ADR-011: Docker Swarm Single-Node for Zero-Downtime Production Deployment

**Status:** Decided
**Context:** The APIX registry is trust infrastructure — downtime during deployments damages credibility with registered services, consuming agents, and the STF reviewer. Docker Compose's standard `up -d` stops the old container before the new one is healthy, causing a brief outage. Kubernetes is operationally out of scope for a solo developer. A zero-downtime deployment mechanism is required.

**Decision:** Docker Swarm single-node mode for production on the APIX VPS. Local development continues to use Docker Compose (simpler, no Swarm overhead). Production uses a `docker-stack.yml` with `deploy.update_config.order: start-first` and health-check gating.

**How zero-downtime works:**

1. CI pushes the new image to the Gitea registry
2. The deploy step runs `docker service update --image <new-image> <service>` via SSH
3. Swarm starts the new container and waits for its health check to pass
4. Once healthy, Swarm begins routing traffic to the new container
5. The old container is stopped and removed
6. If the health check never passes: automatic rollback to the previous image (`rollback_config`)
7. Caddy routes to the Swarm service VIP — it never needs reconfiguring during rolling updates

**Swarm stack config per service:**

```yaml
deploy:
  replicas: 1
  update_config:
    order: start-first
    failure_action: rollback
    delay: 10s
  rollback_config:
    order: stop-first
  restart_policy:
    condition: on-failure
    delay: 5s
    max_attempts: 3
```

**Local vs production parity:**

| Concern | Local (`docker-compose.yml`) | Production (`docker-stack.yml`) |
|---|---|---|
| Orchestrator | Docker Compose | Docker Swarm single-node |
| Images | Built from source (`quarkus dev`) | Pre-built native images from the Gitea registry |
| TLS | None | Caddy auto-cert |
| Rolling updates | Not supported | `start-first` with health-check gate |
| Secrets | `.env` file | Docker Swarm secrets |

**Rejected alternatives:**
- Docker Compose only: brief downtime on every deploy; not acceptable for trust infrastructure
- Kubernetes (k3s): zero-downtime capable but operationally too heavy for a solo developer
- Traefik instead of Caddy: Traefik has better native Swarm label integration but adds complexity; Caddy routing to the Swarm service VIP achieves the same result without replacing the reverse proxy

---

## ADR-012: Three-Stage CI/CD Pipeline

**Status:** Decided

**Context:** GraalVM native image builds take 10–15 minutes. Running them on every push would make the CI feedback loop unusable for active development. Conversely, deploying JVM-mode images to production is not acceptable — native images are required for the memory profile and startup time targets. Production deployments must be independently tested before going live.

**Decision:** Three distinct CI stages with independent triggers:

| Stage | Trigger | Runner | Duration | Output |
|---|---|---|---|---|
| **1 — Fast cycle** | Every push to any branch | Gitea VPS act_runner (JVM) | ~3–5 min | JVM build pass/fail; unit + `@QuarkusTest` results |
| **2 — Native build** | Merge to `main` | APIX VPS act_runner (GraalVM) | ~10–15 min | Native images pushed to Gitea Container Registry |
| **3 — Deploy** | Git tag (`v*`) | Gitea VPS act_runner | ~2 min | Zero-downtime Swarm rolling update; health check verified; rollback on failure |

**Stage 1 — Fast cycle (`.gitea/workflows/ci-fast.yml`):**

- `mvn verify` in JVM mode on all modules
- `@QuarkusTest` with Testcontainers (PostgreSQL) for registry + spider
- WireMock-based tests for verification pipeline
- No Docker build; no native compilation

**Stage 2 — Native build (`.gitea/workflows/ci-native.yml`):**

- `mvn package -Pnative -Dquarkus.native.container-build=true` for each Quarkus module
- Docker multi-stage build produces native image
- Integration test of native container (`@QuarkusIntegrationTest`)
- Push tagged image to Gitea registry: `gitea.botstandards.org/bsf/<module>:main-<sha>`

**Stage 3 — Deploy (`.gitea/workflows/deploy.yml`):**

- SSH to APIX VPS
- `docker service update --image <new-image> apix_registry` (and spider, portal)
- Wait for Swarm health check confirmation
- Verify `/q/health` endpoint returns UP
- On failure: Swarm auto-rollback; pipeline fails with alert
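The Stage 3 health gate ("wait for confirmation, verify UP, otherwise fail") boils down to polling a probe until a deadline. A minimal sketch of that gating logic, with the probe passed in as a `Supplier` so the loop is testable in isolation — the class and method names are illustrative, not the actual workflow code:

```java
import java.time.Duration;
import java.util.function.Supplier;

// Sketch of the deploy health gate: poll a probe (e.g. an HTTP GET of
// /q/health) until it reports UP or the deadline passes. Names are
// hypothetical; the real pipeline runs an equivalent loop over SSH.
public class HealthGateSketch {

    public static boolean awaitUp(Supplier<Boolean> probe, Duration timeout, Duration interval)
            throws InterruptedException {
        long deadline = System.nanoTime() + timeout.toNanos();
        while (System.nanoTime() < deadline) {
            // Boolean.TRUE.equals guards against a null probe result
            if (Boolean.TRUE.equals(probe.get())) {
                return true;           // service healthy → deploy succeeds
            }
            Thread.sleep(interval.toMillis());
        }
        return false;                  // never came up → trigger rollback, fail pipeline
    }
}
```

Returning `false` rather than throwing keeps the rollback decision with the caller, mirroring how Swarm's `failure_action: rollback` separates detection from remediation.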

**Rationale:** Stage separation gives a fast feedback loop (developer doesn't wait 15 min for native build feedback) while ensuring production always runs tested native images. The deploy stage is a separate, explicit action — no code is deployed to production without a human creating a git tag.

**Consequence:** Requires two Gitea Actions runners:

- Gitea VPS: JVM runner (Stage 1 + Stage 3) — low CPU requirement
- APIX VPS: GraalVM native runner (Stage 2) — CPU-intensive; runs on a schedule or on-demand, not concurrently with the running application

---

## ADR-013: Server-Side i18n via Quarkus @MessageBundle; EN + DE for MVP

**Status:** Decided

**Context:** The portal must be usable by German-speaking founding member candidates (manufacturing sector, logistics operators) without requiring them to work in English. The STF application emphasises European focus — a German-language portal is consistent with that narrative. The stack is Quarkus + Qute; multiple i18n approaches exist and the choice must remain consistent with the compile-time safety rationale of ADR-001.

**Decision:** Server-side i18n using Quarkus `@MessageBundle` for static UI strings with Qute type-safe injection. Locale resolved from `Accept-Language` header with `apix-locale` cookie override. English (EN) and German (DE) for MVP. Language switcher in base layout.

**How it works:**

- `Messages.java` — `@MessageBundle`-annotated interface; one method per translatable string key
- `messages.properties` — English default strings (all keys defined here)
- `messages_de.properties` — German strings (same keys, DE values)
- Quarkus resolves the correct properties file at build time and injects it into Qute templates
- Templates use `{msg:someKey}` syntax — the Qute compiler verifies the key exists on `Messages.java` at build time
- `LocaleResolver.java` CDI bean: uses the `apix-locale` cookie if present, otherwise falls back to the `Accept-Language` header; returns `java.util.Locale`
- `PortalResource` injects `LocaleResolver`; passes locale to template rendering context
- `POST /locale` (`LocaleResource.java`) — sets `apix-locale` cookie; redirects to `Referer`; used by language switcher
- **Tour and help content** (JavaScript structures) are built by `HelpContentService` in the resolved locale and rendered into each page as a `<script>` block via Qute — consuming agents and portal users both receive pre-localized strings; no client-side translation layer
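The resolution order (cookie override first, then `Accept-Language`, then the EN default) can be sketched in plain Java. This is a minimal, framework-free illustration — the class name, method signature, and supported-locale list are assumptions, not the actual `LocaleResolver.java`:

```java
import java.util.List;
import java.util.Locale;

// Sketch of the locale resolution chain described above:
// apix-locale cookie wins, then best Accept-Language match, then English.
public class LocaleResolverSketch {

    private static final List<Locale> SUPPORTED = List.of(Locale.ENGLISH, Locale.GERMAN);

    public static Locale resolve(String cookieValue, String acceptLanguageHeader) {
        // 1. apix-locale cookie override (ignored if it names an unsupported locale)
        if (cookieValue != null) {
            Locale cookie = Locale.forLanguageTag(cookieValue);
            if (SUPPORTED.contains(cookie)) {
                return cookie;
            }
        }
        // 2. Accept-Language header: ranges come back sorted by quality weight
        if (acceptLanguageHeader != null) {
            for (Locale.LanguageRange range : Locale.LanguageRange.parse(acceptLanguageHeader)) {
                String lang = Locale.forLanguageTag(range.getRange()).getLanguage();
                for (Locale supported : SUPPORTED) {
                    if (supported.getLanguage().equals(lang)) {
                        return supported;
                    }
                }
            }
        }
        // 3. default: BSF operating language
        return Locale.ENGLISH;
    }
}
```

Matching on language only (not region) means `de-AT` or `de-CH` visitors still get the German bundle.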

**Rationale:**

- Server-side rendering is the natural fit for Quarkus + Qute — no JS i18n library needed, no build pipeline
- `@MessageBundle` gives compile-time verification that all string keys exist — consistent with ADR-001 compile-time safety
- Baking tour content into the page as pre-localized JSON means the help engine (`help.js`) receives already-resolved strings; no translation lookup at runtime
- Cookie-based locale preference survives page navigation without requiring a user account
- EN + DE covers the BSF operating language (EN) and the primary founding member market (DE); other locales can be added by adding a new properties file — no code change

**Rejected alternatives:**

- Client-side i18n (`data-i18n` attributes + JS TRANSLATIONS object as in the `used-books` reference): works for a single-file app but loses compile-time key checking; breaks the Qute type-safe model; requires maintaining a separate JS translation layer alongside the server-side one
- Separate JSON locale files served as static assets: decouples translations from build; loses key verification; requires a JS runtime translation layer and an additional `fetch` call per page load

---

## ADR-014: Client-Side Help Overlay Engine with Server-Rendered Tour Content
|
||||
|
||||
**Status:** Decided
|
||||
**Context:** Portal users — registrants submitting their first BSM, agent developers querying the registry for the first time, admins assigning O-levels — need in-context guidance at the exact moment they are performing an action. A static FAQ page requires users to context-switch. The portal must also work as a self-guided demo for the STF reviewer and founding member pitches. The `used-books` application in this repository contains a proven pattern: a four-wing spotlight overlay with a draggable tour card, progress dots, and a separate page-level help drawer.
|
||||
|
||||
**Decision:** Client-side JS help overlay engine (`help.js`) adapted from the `used-books` pattern. Tour and page-help content is server-rendered into each page via Qute as a locale-aware JS data structure. No external tour library dependency.
|
||||
|
||||
**Architecture:**

- `help.js` — single file, no framework; ~350 lines; manages the full overlay lifecycle:
  - Four `<div>` dimming wings (`help-dim-top/left/right/bottom`) that cut out a spotlight window around the current target element
  - Highlight ring (`help-highlight`) positioned over the target
  - Draggable tour card (`help-card`) with header drag handle (`cursor-grab`), group icon, progress dots, title, state indicator, body text, Back / Skip / Next buttons
  - Page-level static help drawer (`help-drawer`) sliding in from the right; contains: "Guided Tours" section (list of tours relevant to the current page) + "Page Help" section (static explanation of the current page)
  - Context filter: the drawer shows only tours whose `pages` array includes the current page ID (`<body data-page-id="...">`)
  - `tourCheckAndNext()`: validates any required form state before advancing a step; configurable per step
- **Tour data injection:** each Qute template embeds a `<script>` block with two page-scoped globals rendered at request time in the resolved locale:

  ```html
  <script>
    window.PAGE_TOURS = {tours};
    window.PAGE_HELP = {pageHelp};
  </script>
  ```

  `{tours}` and `{pageHelp}` are `String` parameters passed by `PortalResource` — pre-serialized JSON produced by `HelpContentService` in the resolved locale. Qute renders them into the page; `help.js` reads them on `window.onload`.
- `TourDefinition.java` + `TourStep.java` — Java records defining the data model for tour content
- `HelpContentService.java` — CDI bean; builds locale-resolved `TourDefinition` list per page; serializes to JSON; 5 tours defined for MVP:

  | Tour ID | Pages | Steps |
  |---|---|---|
  | `tour-agent-setup` | `/` (home) | 3: root endpoint URL → HATEOAS links JSON → capability query example |
  | `tour-register` | `/register` | 5: open form → BSM name + description → capability tags → O-level meaning → submit |
  | `tour-search` | `/search` | 3: enter capability → read results → interpret liveness badge |
  | `tour-trust` | `/service/{id}` | 4: O-level indicator → S-level indicator → liveness badge → last_checked_at |
  | `tour-admin` | `/admin` | 4: pending verifications list → assign O-level → reference registration flag → API key reminder |

- `templates/layout.html` — Qute base layout (all pages extend this); includes: help button (?) in nav bar; overlay HTML (4 wing divs + highlight ring + tour card + progress dots + state indicator); help drawer shell; `<script src="/help.js">`; language switcher form

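The tour data model lends itself to Java records with a thin serialization step. A minimal sketch — the record components and the hand-rolled JSON are assumptions based on this ADR, not the actual `TourDefinition.java` (which would likely serialize via Jackson or JSON-B):

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the tour content data model described above. Field names are
// illustrative; a real serializer must also escape string values.
public class TourModelSketch {

    public record TourStep(String targetSelector, String title, String body) {}

    public record TourDefinition(String id, List<String> pages, List<TourStep> steps) {

        /** Hand-rolled JSON for illustration only (no escaping). */
        public String toJson() {
            String pagesJson = pages.stream()
                    .map(p -> "\"" + p + "\"")
                    .collect(Collectors.joining(","));
            String stepsJson = steps.stream()
                    .map(s -> "{\"target\":\"%s\",\"title\":\"%s\",\"body\":\"%s\"}"
                            .formatted(s.targetSelector(), s.title(), s.body()))
                    .collect(Collectors.joining(","));
            return "{\"id\":\"%s\",\"pages\":[%s],\"steps\":[%s]}"
                    .formatted(id, pagesJson, stepsJson);
        }
    }
}
```

The `pages` list is what drives the drawer's context filter: `help.js` only offers tours whose `pages` array contains the current `data-page-id`.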
**Rationale:**

- Client-side overlay requires no round-trips per step — smooth UX for a step-through walkthrough
- Server-rendering tour content in the resolved locale via Qute keeps i18n consistent with ADR-013 — one locale resolution point, no client-side translation map
- Spotlight overlay moves out of the way when dragged — user can see the target element while reading the explanation, unlike a modal
- The `used-books` pattern is already in production in an adjacent project; adaptation cost is low; no learning curve
- No external CDN dependency means the help system works offline and does not introduce a third-party privacy concern

**Rejected alternatives:**

- Shepherd.js / Driver.js: well-maintained but external JS dependency; overkill for five tours in a portal with known pages; adds CDN or bundler dependency
- Pure modal help without overlay: user cannot see the element being explained while reading the explanation; defeats the purpose of contextual guidance
- Help text embedded directly in Qute templates: clutters the template; cannot be stepped through; not filterable by page context; not locale-switchable without full re-render

@@ -0,0 +1,52 @@
---
arc42: "10 — Quality Requirements"
status: stub
---

## 10.1 Quality Tree

```
Quality
├── Functionality
│   ├── Capability search returns relevant results
│   ├── HATEOAS navigation works from root URL without prior knowledge
│   └── BSM validation rejects invalid submissions with actionable errors
├── Reliability
│   ├── Liveness status reflects actual service state within one check interval
│   └── Registry survives VPS restart (data persisted to volume)
├── Security Hygiene
│   ├── All traffic over HTTPS
│   ├── Write endpoints reject unauthenticated requests
│   └── No credentials or PII in logs or Git
└── Operability
    ├── Deployable from scratch on a new Hetzner VPS in < 30 minutes
    ├── Health endpoint reflects actual DB connectivity
    └── Logs provide enough context to diagnose a registration failure without a debugger
```

## 10.2 Quality Scenarios

| # | Stimulus | Response | Measurable Outcome |
|---|---|---|---|
| QS-01 | Agent sends `GET /api/services?capability=inventory.read` | Returns list of matching services with BSM summaries and `_links` | Response time < 500ms; result includes at least 1 registered service |
| QS-02 | Registrant submits BSM with missing required field | API returns 422 with field-level error identifying the missing field | Error response includes field name and reason; no partial write to DB |
| QS-03 | Registered service goes offline | Spider marks it `unreachable` within 15 min | `liveness_status=unreachable` and updated `last_checked_at` in API response |
| QS-04 | Agent sends `GET /api/` (root) | Returns JSON with `_links` to search, register, and health endpoints | No prior knowledge of path structure required; all links resolvable |
| QS-05 | VPS is rebooted | All services come back up automatically; registry data intact | `docker compose up` on restart (via restart policy); 0 data loss |
| QS-06 | Unauthenticated POST to `/api/register` | 401 Unauthorized | No registration created; API key required |
| QS-07 | STF reviewer opens portal in browser | Homepage shows registry stats + search; registration form works | Zero errors in browser console; form submits successfully |

## 10.3 MVP Acceptance Criteria

The PoC is **done** when all of the following are true:

- [ ] Public URL is reachable over HTTPS
- [ ] `GET /api/` returns valid HATEOAS navigation links
- [ ] `GET /api/services?capability=X` returns at least 1 result for at least 3 distinct capability queries
- [ ] At least 5 real services are registered (not demo fixtures)
- [ ] Spider has run at least one full check cycle and updated liveness status for all registered services
- [ ] Portal registration form accepts a valid BSM and shows confirmation
- [ ] Admin O-level assignment works via portal
- [ ] `GET /api/health` returns 200 with DB status
- [ ] No credentials or PII appear in `docker compose logs` output
- [ ] `infra/hetzner/provision.sh` + `deploy.sh` installs and starts the full stack on a fresh Hetzner VPS
@@ -0,0 +1,31 @@
---
arc42: "11 — Risks and Technical Debt"
status: stub
---

## 11.1 Risk Register

| # | Risk | Probability | Impact | Mitigation |
|---|---|---|---|---|
| R-01 | Big tech ships a competing agent service directory before PoC is done | Medium | High | Speed is the primary mitigation. PoC by end of 2026. IETF draft establishes prior art regardless of PoC state. |
| R-02 | Chicken-and-egg: no real registrants → registry looks empty → no agents query it → no registrant motivation | High | High | Pre-seed with 5 real services (self + Lexnexum + 3 outreach targets) before any public announcement. Never launch empty. |
| R-03 | Solo bus factor: Carsten gets sick/unavailable | Medium | High | All infra as code (GitHub); `provision.sh` + `deploy.sh` must be runnable by anyone with Hetzner access. No undocumented steps. |
| R-04 | Hetzner VPS data loss (disk failure) | Low | High | Daily pg_dump to separate Hetzner volume. Restore documented and tested. |
| R-05 | Spider causes load on registrant services (aggressive checking) | Low | Medium | 15-min interval; 5s timeout; respect `Crawl-delay` in robots.txt if present; opt-out mechanism in BSM. |
| R-06 | STF rejects application despite PoC | Medium | Medium | PoC also serves founding member pitch and IETF credibility regardless of STF outcome. |
| R-07 | IETF draft does not progress / working group not formed | Medium | Medium | APIX can operate as a de-facto standard regardless of IETF formal status (as DNS did). |

## 11.2 Technical Debt Log

Accepted shortcuts in the MVP, with explicit exit paths:

| # | Debt | Accepted Because | Exit Path | Priority |
|---|---|---|---|---|
| TD-01 | Manual O-level assignment | Automated GLEIF/domain check is weeks of work; manual is safe for PoC | Automated O-1 (DNS/domain) + O-2 (GLEIF) in Phase 2 | High |
| TD-02 | Single shared API key | Per-registrant key management requires auth layer; premature for PoC | OAuth2 / per-registrant key management post-MVP | High |
| TD-03 | No rate limiting on read endpoints | PoC traffic too low to warrant it | Caddy rate_limit directives when traffic warrants | Medium |
| TD-04 | No full OpenAPI spec field validation by Spider | Field-level validation requires schema comparison logic; overkill for PoC | Spider `openapi_parser.py` extension post-MVP | Medium |
| TD-05 | Single-region deployment | Multi-region requires DB replication; solo can't maintain safely | Hetzner Managed Database + multi-region post-funding | Low (PoC SLA is acceptable) |
| TD-06 | No CI/CD pipeline | Solo dev; manual deploy via `deploy.sh` is sufficient | GitHub Actions pipeline post-MVP | Low |
| TD-07 | No TLS for Spider → DB connection | Both on same Docker network; no external exposure | TLS on internal connections post-MVP if required by audit | Low |
| TD-08 | Spider does not respect registrant `robots.txt` | Most registered services won't have agent-specific crawl rules yet | Add robots.txt check to Spider fetcher when needed | Low |
@@ -0,0 +1,23 @@
---
arc42: "12 — Glossary"
status: stub
---

| Term | Definition |
|---|---|
| **APIX** | API Index — the global, neutral, machine-queryable registry of services for autonomous agents |
| **BSM** | Bot Service Manifest — the structured metadata document that describes a machine-consumable service (capabilities, endpoint, trust level, pricing) |
| **Spider** | The automated APIX crawler that periodically checks liveness and spec compliance of registered services |
| **O-level** | Organisation trust level (O-0 to O-5). O-0: unverified. O-1: domain ownership confirmed. O-2: legal entity verified. Higher levels require additional compliance verification. |
| **S-level** | Service trust level. Reflects technical verification of the service against its declared BSM spec. |
| **Liveness** | Operational status of a registered service as last measured by the Spider. States: `pending`, `live`, `degraded`, `unreachable`. |
| **AE** | Agent Enterprise — an autonomous agent that composes APIX-registered services into a workflow and potentially earns on each execution |
| **HATEOAS** | Hypermedia as the Engine of Application State — REST architectural constraint where the client navigates entirely via links returned by the server; no out-of-band URL knowledge required |
| **DC-1** | Device Class registration — the APIX registration record for an IoT device class (BSM template); persists beyond the original operator's cloud service lifetime |
| **Capability** | A machine-readable tag describing what a service does (e.g., `inventory.read`, `slot.book`, `customs.doc`). The primary search dimension in APIX. |
| **GLEIF** | Global Legal Entity Identifier Foundation — the data source used for automated O-2 legal entity verification |
| **Internet-Draft** | `draft-rehfeld-bot-service-index-00` — the IETF submission that formalises the APIX specification |
| **PoC** | Proof of Concept — the MVP deployment described in this document |
| **STF** | Sovereign Tech Fund — the German federal funding body; primary target for APIX infrastructure funding |
| **BSF** | Bot Standards Foundation — the Swiss Stiftung that governs the APIX standard and operates the reference index |
| **UPSERT** | Insert-or-update DB operation — used for re-registration: same endpoint URL updates existing record rather than creating a duplicate |
@@ -0,0 +1,104 @@
version: "3.9"

# Production service topology. For local JVM dev mode see docker-compose.override.yml (Block 5 / I-02).
# Images are built and pushed by CI (Block 5 / I-21); Dockerfiles are Block 5-6 (I-04 to I-06).

services:

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: ${APIX_DB_USER:-apix}
      POSTGRES_PASSWORD: ${APIX_DB_PASSWORD:-apix}
      POSTGRES_DB: ${APIX_DB_NAME:-apix}
    ports:
      - "${APIX_DB_PORT:-5432}:5432"
    volumes:
      - db_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${APIX_DB_USER:-apix}"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped

  registry:
    image: apix-registry:latest
    ports:
      - "8180:8180"
    environment:
      QUARKUS_DATASOURCE_JDBC_URL: jdbc:postgresql://db:5432/${APIX_DB_NAME:-apix}
      QUARKUS_DATASOURCE_USERNAME: ${APIX_DB_USER:-apix}
      QUARKUS_DATASOURCE_PASSWORD: ${APIX_DB_PASSWORD:-apix}
      APIX_API_KEY: ${APIX_API_KEY}
      GLEIF_API_URL: ${GLEIF_API_URL:-https://api.gleif.org/api/v1}
      OPENCORPORATES_API_KEY: ${OPENCORPORATES_API_KEY:-}
      SANCTIONS_CACHE_PATH: /app/sanctions
      LOG_LEVEL: ${LOG_LEVEL:-INFO}
    volumes:
      - sanctions_cache:/app/sanctions
    depends_on:
      db:
        condition: service_healthy
    healthcheck:
      test: ["CMD-SHELL", "curl -sf http://localhost:8180/q/health/live || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
    restart: unless-stopped

  # Internal only — no public port exposure
  spider:
    image: apix-spider:latest
    environment:
      QUARKUS_DATASOURCE_JDBC_URL: jdbc:postgresql://db:5432/${APIX_DB_NAME:-apix}
      QUARKUS_DATASOURCE_USERNAME: ${APIX_DB_USER:-apix}
      QUARKUS_DATASOURCE_PASSWORD: ${APIX_DB_PASSWORD:-apix}
      SPIDER_INTERVAL_MINUTES: ${SPIDER_INTERVAL_MINUTES:-15}
      LOG_LEVEL: ${LOG_LEVEL:-INFO}
    depends_on:
      db:
        condition: service_healthy
    healthcheck:
      test: ["CMD-SHELL", "curl -sf http://localhost:8082/q/health/live || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
    restart: unless-stopped

  portal:
    image: apix-portal:latest
    ports:
      - "8081:8081"
    environment:
      REGISTRY_BASE_URL: http://registry:8180
      LOG_LEVEL: ${LOG_LEVEL:-INFO}
    depends_on:
      - registry
    healthcheck:
      test: ["CMD-SHELL", "curl -sf http://localhost:8081/q/health/live || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
    restart: unless-stopped

  caddy:
    image: caddy:2-alpine
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
    depends_on:
      - registry
      - portal
    restart: unless-stopped

volumes:
  db_data:
  sanctions_cache:
  caddy_data:
  caddy_config:
@@ -0,0 +1,179 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             https://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>org.botstandards</groupId>
  <artifactId>apix-parent</artifactId>
  <version>${revision}</version>
  <packaging>pom</packaging>
  <name>APIX :: Parent</name>

  <modules>
    <module>apix-common</module>
    <module>apix-verification</module>
    <module>apix-registry</module>
    <module>apix-spider</module>
    <module>apix-portal</module>
  </modules>

  <properties>
    <revision>1.0-SNAPSHOT</revision>
    <maven.compiler.release>21</maven.compiler.release>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <quarkus.platform.group-id>io.quarkus.platform</quarkus.platform.group-id>
    <quarkus.platform.artifact-id>quarkus-bom</quarkus.platform.artifact-id>
    <quarkus.platform.version>3.15.1</quarkus.platform.version>
    <compiler-plugin.version>3.13.0</compiler-plugin.version>
    <surefire-plugin.version>3.2.5</surefire-plugin.version>
    <cucumber.version>7.15.0</cucumber.version>
    <assertj.version>3.25.3</assertj.version>
    <allure.version>2.25.0</allure.version>
  </properties>

  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>${quarkus.platform.group-id}</groupId>
        <artifactId>${quarkus.platform.artifact-id}</artifactId>
        <version>${quarkus.platform.version}</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
      <!-- Internal modules — version managed here so child POMs omit it -->
      <dependency>
        <groupId>org.botstandards</groupId>
        <artifactId>apix-common</artifactId>
        <version>${project.version}</version>
      </dependency>
      <dependency>
        <groupId>org.botstandards</groupId>
        <artifactId>apix-verification</artifactId>
        <version>${project.version}</version>
      </dependency>
      <!-- Cucumber BDD -->
      <dependency>
        <groupId>io.cucumber</groupId>
        <artifactId>cucumber-java</artifactId>
        <version>${cucumber.version}</version>
        <scope>test</scope>
      </dependency>
      <dependency>
        <groupId>io.cucumber</groupId>
        <artifactId>cucumber-junit-platform-engine</artifactId>
        <version>${cucumber.version}</version>
        <scope>test</scope>
      </dependency>
      <!-- AssertJ — explicit version in case Quarkus BOM does not provide it -->
      <dependency>
        <groupId>org.assertj</groupId>
        <artifactId>assertj-core</artifactId>
        <version>${assertj.version}</version>
        <scope>test</scope>
      </dependency>
      <!-- Allure BDD reporting -->
      <dependency>
        <groupId>io.qameta.allure</groupId>
        <artifactId>allure-cucumber7-jvm</artifactId>
        <version>${allure.version}</version>
        <scope>test</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>

  <build>
    <pluginManagement>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-compiler-plugin</artifactId>
          <version>${compiler-plugin.version}</version>
          <configuration>
            <release>21</release>
            <!-- Preserve method parameter names for RESTEasy / Jackson -->
            <compilerArgs>
              <arg>-parameters</arg>
            </compilerArgs>
          </configuration>
        </plugin>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-surefire-plugin</artifactId>
          <version>${surefire-plugin.version}</version>
          <configuration>
            <systemPropertyVariables>
              <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager>
            </systemPropertyVariables>
          </configuration>
        </plugin>
        <plugin>
          <groupId>${quarkus.platform.group-id}</groupId>
          <artifactId>quarkus-maven-plugin</artifactId>
          <version>${quarkus.platform.version}</version>
        </plugin>
        <!-- Version-managed here; configured per-module in apix-registry -->
        <plugin>
          <groupId>org.liquibase</groupId>
          <artifactId>liquibase-maven-plugin</artifactId>
          <version>4.27.0</version>
        </plugin>
        <!-- allure:report and allure:serve — configured per-module -->
        <plugin>
          <groupId>io.qameta.allure</groupId>
          <artifactId>allure-maven</artifactId>
          <version>2.13.0</version>
          <configuration>
            <!-- Keep report version in sync with the library version -->
            <reportVersion>${allure.version}</reportVersion>
            <resultsDirectory>${project.build.directory}/allure-results</resultsDirectory>
          </configuration>
        </plugin>
      </plugins>
    </pluginManagement>
    <plugins>
      <!-- Aggregated Allure report — NOT bound to a lifecycle phase.
           Run explicitly AFTER tests: mvn allure:serve or mvn allure:report
           Results land in target/allure-results because every module writes there
           via allure.results.directory=../target/allure-results in allure.properties.
           inherited=false ensures this block is ignored by child modules. -->
      <plugin>
        <groupId>io.qameta.allure</groupId>
        <artifactId>allure-maven</artifactId>
        <inherited>false</inherited>
        <configuration>
          <resultsDirectory>${project.build.directory}/allure-results</resultsDirectory>
          <reportDirectory>${project.build.directory}/allure-report</reportDirectory>
        </configuration>
      </plugin>
      <!-- Resolves ${revision} in installed/deployed POMs so inter-module
           dependencies can find each other in the local repository. -->
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>flatten-maven-plugin</artifactId>
        <version>1.6.0</version>
        <configuration>
          <updatePomFile>true</updatePomFile>
          <flattenMode>resolveCiFriendliesOnly</flattenMode>
        </configuration>
        <executions>
          <execution>
            <id>flatten</id>
            <phase>process-resources</phase>
            <goals>
              <goal>flatten</goal>
            </goals>
          </execution>
          <execution>
            <id>flatten.clean</id>
            <phase>clean</phase>
            <goals>
              <goal>clean</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
@@ -0,0 +1,94 @@
|
||||
#!/usr/bin/env bash
|
# Start all three Quarkus modules in dev mode.
# Uses tmux (one window per module) if available; falls back to background processes.
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
LOG_DIR="$PROJECT_ROOT/.logs"
PID_DIR="$PROJECT_ROOT/.pids"

GREEN='\033[0;32m'; YELLOW='\033[1;33m'; NC='\033[0m'
info() { echo -e "${GREEN}[apix]${NC} $*"; }
warn() { echo -e "${YELLOW}[warn]${NC} $*"; }

mkdir -p "$LOG_DIR" "$PID_DIR"
cd "$PROJECT_ROOT"

# Require PostgreSQL
if ! docker ps --format '{{.Names}}' | grep -qx apix-postgres; then
  echo "PostgreSQL container is not running."
  echo "Run: ./scripts/setup-dev.sh"
  read -rp "Press Enter to close…" _
  exit 1
fi

# ── tmux mode ────────────────────────────────────────────────────────────────
if command -v tmux &>/dev/null; then
  SESSION=apix-dev

  if tmux has-session -t "$SESSION" 2>/dev/null; then
    warn "tmux session '$SESSION' already exists."
    echo "  Attach:  tmux attach -t $SESSION"
    echo "  Restart: ./scripts/restart.sh"
    exit 0
  fi

  tmux new-session -d -s "$SESSION" -x 220 -y 50 -n registry
  tmux send-keys -t "$SESSION:registry" \
    "cd '$PROJECT_ROOT' && mvn quarkus:dev -pl apix-registry" Enter

  tmux new-window -t "$SESSION" -n portal
  tmux send-keys -t "$SESSION:portal" \
    "cd '$PROJECT_ROOT' && mvn quarkus:dev -pl apix-portal" Enter

  tmux new-window -t "$SESSION" -n spider
  tmux send-keys -t "$SESSION:spider" \
    "cd '$PROJECT_ROOT' && mvn quarkus:dev -pl apix-spider" Enter

  tmux select-window -t "$SESSION:registry"

  echo ""
  echo -e "${GREEN}Started in tmux session '${SESSION}'${NC}"
  echo "  Switch windows: Ctrl-b 0 / 1 / 2"
  echo "  Detach:         Ctrl-b d"
  echo ""
  echo "  Registry API → http://localhost:8180"
  echo "  Portal       → http://localhost:8081"
  echo ""
  tmux attach -t "$SESSION"
  exit 0
fi

# ── Background mode (no tmux) ─────────────────────────────────────────────────
warn "tmux not found — starting modules in background with log files."

_start() {
  local module="$1" port="$2"
  local pidfile="$PID_DIR/${module}.pid"
  local logfile="$LOG_DIR/${module}.log"

  if [[ -f "$pidfile" ]] && kill -0 "$(cat "$pidfile")" 2>/dev/null; then
    info "$module already running (PID $(cat "$pidfile"))"
    return
  fi

  info "Starting $module → http://localhost:${port}"
  MAVEN_OPTS="-XX:TieredStopAtLevel=1" \
    mvn quarkus:dev -pl "$module" >"$logfile" 2>&1 &
  echo $! >"$pidfile"
}

_start apix-registry 8180
_start apix-portal 8081
_start apix-spider 8082

echo ""
echo -e "${GREEN}All modules started${NC}"
echo "  Logs: ./scripts/logs.sh [registry|portal|spider|all]"
echo "  Stop: ./scripts/stop.sh"
echo ""
echo "  Registry API → http://localhost:8180"
echo "  Portal       → http://localhost:8081"
echo ""
read -rp "Press Enter to close…" _
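The `_start` function above treats a pidfile as stale unless `kill -0` succeeds. A minimal standalone sketch of that check (throwaway `mktemp` pidfile, not the project's real `.pids` layout):

```shell
#!/usr/bin/env bash
# Sketch of the stale-pidfile check used by _start: kill -0 sends no signal,
# it only tests whether the PID still exists and is signalable by this user.
pidfile=$(mktemp)

echo $$ >"$pidfile"                    # our own PID — alive by definition
if kill -0 "$(cat "$pidfile")" 2>/dev/null; then
  echo "alive"
fi

sleep 0.1 & echo $! >"$pidfile"; wait  # background job that has already exited
if ! kill -0 "$(cat "$pidfile")" 2>/dev/null; then
  echo "stale"
fi
rm -f "$pidfile"
```

Because `kill -0` succeeds for any live process the user can signal, a recycled PID can still produce a false "already running"; for a local dev script that trade-off is fine.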
@@ -0,0 +1,45 @@
#!/usr/bin/env bash
# Tail logs for dev-mode services.
# Usage: logs.sh [registry|portal|spider|all]   (default: all)
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
LOG_DIR="$PROJECT_ROOT/.logs"

TARGET="${1:-all}"

# ── tmux mode — attach to the right window ────────────────────────────────────
if command -v tmux &>/dev/null && tmux has-session -t apix-dev 2>/dev/null; then
  case "$TARGET" in
    registry) tmux select-window -t apix-dev:registry; tmux attach -t apix-dev ;;
    portal)   tmux select-window -t apix-dev:portal;   tmux attach -t apix-dev ;;
    spider)   tmux select-window -t apix-dev:spider;   tmux attach -t apix-dev ;;
    all)      tmux attach -t apix-dev ;;
    *)        echo "Usage: logs.sh [registry|portal|spider|all]"; exit 1 ;;
  esac
  exit 0
fi

# ── Background mode — tail log files ─────────────────────────────────────────
_log() { echo "$LOG_DIR/${1}.log"; }

case "$TARGET" in
  registry) tail -f "$(_log apix-registry)" ;;
  portal)   tail -f "$(_log apix-portal)" ;;
  spider)   tail -f "$(_log apix-spider)" ;;
  all)
    FILES=("$(_log apix-registry)" "$(_log apix-portal)" "$(_log apix-spider)")
    for f in "${FILES[@]}"; do
      [[ -f "$f" ]] || { echo "Log not found: $f (has dev.sh been run?)"; exit 1; }
    done
    if command -v multitail &>/dev/null; then
      multitail -cT ANSI "${FILES[@]}"
    else
      # Label each line with the service name using awk
      tail -f "${FILES[@]}" | \
        awk '/==> .+apix-registry/ { svc="registry" }
             /==> .+apix-portal/   { svc="portal" }
             /==> .+apix-spider/   { svc="spider" }
             !/^==>/               { print "[" svc "] " $0 }'
    fi
    ;;
  *) echo "Usage: logs.sh [registry|portal|spider|all]"; exit 1 ;;
esac
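The awk pipeline above keys off the `==> path <==` headers that `tail` emits when following several files at once. A quick offline check of that labeling against sample input (hypothetical paths, no real logs needed):

```shell
#!/usr/bin/env bash
# Feed the awk labeler input in the format `tail -f file1 file2` produces:
# a `==> path <==` header before each burst of lines from a given file.
printf '%s\n' \
  '==> /tmp/.logs/apix-registry.log <==' \
  'registry line 1' \
  '==> /tmp/.logs/apix-portal.log <==' \
  'portal line 1' |
awk '/==> .+apix-registry/ { svc="registry" }
     /==> .+apix-portal/   { svc="portal" }
     !/^==>/               { print "[" svc "] " $0 }'
# → [registry] registry line 1
# → [portal] portal line 1
```

Header lines only set `svc` and are suppressed by the `!/^==>/` guard, so the merged stream carries a per-service prefix instead of tail's multi-line headers.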
@@ -0,0 +1,69 @@
#!/usr/bin/env bash
# Full dev environment reset: stop everything, drop and recreate the DB,
# re-run all Liquibase migrations, then restart dev servers.
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"

GREEN='\033[0;32m'; YELLOW='\033[1;33m'; RED='\033[0;31m'; NC='\033[0m'
info() { echo -e "${GREEN}[apix]${NC} $*"; }
warn() { echo -e "${YELLOW}[warn]${NC} $*"; }
die()  { echo -e "${RED}[fail]${NC} $*" >&2; exit 1; }

warn "This will DROP and recreate the local 'apix' database."
read -rp "Continue? [y/N] " confirm
[[ "${confirm,,}" == "y" ]] || { echo "Aborted."; exit 0; }

# Stop everything
info "Stopping dev servers"
"$SCRIPT_DIR/stop.sh"

# Load .env
cd "$PROJECT_ROOT"
if [[ ! -f .env ]]; then
  die ".env not found — run ./scripts/setup-dev.sh first."
fi
set -a
# shellcheck disable=SC1091
source .env
set +a

DB_USER="${APIX_DB_USER:-apix}"
DB_PASS="${APIX_DB_PASSWORD:-apix}"
DB_NAME="${APIX_DB_NAME:-apix}"
DB_PORT="${APIX_DB_PORT:-5432}"

# Remove and recreate the container (fastest way to wipe the DB on local dev)
info "Removing apix-postgres container"
docker rm -f apix-postgres 2>/dev/null || true

info "Starting fresh PostgreSQL container"
docker run -d \
  --name apix-postgres \
  --restart unless-stopped \
  -e POSTGRES_USER="$DB_USER" \
  -e POSTGRES_PASSWORD="$DB_PASS" \
  -e POSTGRES_DB="$DB_NAME" \
  -p "${DB_PORT}:5432" \
  postgres:16-alpine >/dev/null

info "Waiting for PostgreSQL…"
for i in $(seq 1 30); do
  if docker exec apix-postgres pg_isready -U "$DB_USER" -q 2>/dev/null; then
    info "PostgreSQL ready"
    break
  fi
  [[ $i -eq 30 ]] && die "PostgreSQL did not become ready. Check: docker logs apix-postgres"
  sleep 1
done

info "Running Liquibase migrations"
mvn -q liquibase:update -pl apix-registry \
  -Dliquibase.url="jdbc:postgresql://localhost:${DB_PORT}/${DB_NAME}" \
  -Dliquibase.username="$DB_USER" \
  -Dliquibase.password="$DB_PASS"
info "Migrations applied"

info "Reset complete — starting dev servers"
"$SCRIPT_DIR/dev.sh"
@@ -0,0 +1,86 @@
#!/usr/bin/env bash
# Restart one or all dev-mode Quarkus modules.
# Usage: restart.sh [registry|portal|spider|all]   (default: all)
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
PID_DIR="$PROJECT_ROOT/.pids"
LOG_DIR="$PROJECT_ROOT/.logs"

TARGET="${1:-all}"

GREEN='\033[0;32m'; NC='\033[0m'
info() { echo -e "${GREEN}[apix]${NC} $*"; }

# ── tmux mode ────────────────────────────────────────────────────────────────
if command -v tmux &>/dev/null && tmux has-session -t apix-dev 2>/dev/null; then
  _restart_window() {
    local win="$1" cmd="$2"
    tmux send-keys -t "apix-dev:${win}" C-c '' ENTER
    sleep 0.5
    tmux send-keys -t "apix-dev:${win}" "cd '$PROJECT_ROOT' && $cmd" ENTER
    info "Restarted $win"
  }

  case "$TARGET" in
    registry) _restart_window registry "mvn quarkus:dev -pl apix-registry" ;;
    portal)   _restart_window portal "mvn quarkus:dev -pl apix-portal" ;;
    spider)   _restart_window spider "mvn quarkus:dev -pl apix-spider" ;;
    all)
      _restart_window registry "mvn quarkus:dev -pl apix-registry"
      _restart_window portal "mvn quarkus:dev -pl apix-portal"
      _restart_window spider "mvn quarkus:dev -pl apix-spider"
      ;;
    *) echo "Usage: restart.sh [registry|portal|spider|all]"; exit 1 ;;
  esac
  exit 0
fi

# ── Background mode ───────────────────────────────────────────────────────────
_kill_module() {
  local module="$1"
  local pidfile="$PID_DIR/${module}.pid"
  if [[ -f "$pidfile" ]]; then
    local pid
    pid=$(cat "$pidfile")
    if kill -0 "$pid" 2>/dev/null; then
      info "Stopping $module (PID $pid)"
      kill "$pid" 2>/dev/null || true
      sleep 1
    fi
    rm -f "$pidfile"
  fi
}

_start_module() {
  local module="$1" port="$2"
  local pidfile="$PID_DIR/${module}.pid"
  local logfile="$LOG_DIR/${module}.log"
  info "Starting $module → http://localhost:${port}"
  MAVEN_OPTS="-XX:TieredStopAtLevel=1" \
    mvn quarkus:dev -pl "$module" >"$logfile" 2>&1 &
  echo $! >"$pidfile"
}

case "$TARGET" in
  registry)
    _kill_module apix-registry
    _start_module apix-registry 8180
    ;;
  portal)
    _kill_module apix-portal
    _start_module apix-portal 8081
    ;;
  spider)
    _kill_module apix-spider
    _start_module apix-spider 8082
    ;;
  all)
    _kill_module apix-registry; _kill_module apix-portal; _kill_module apix-spider
    _start_module apix-registry 8180
    _start_module apix-portal 8081
    _start_module apix-spider 8082
    ;;
  *) echo "Usage: restart.sh [registry|portal|spider|all]"; exit 1 ;;
esac
@@ -0,0 +1,154 @@
#!/usr/bin/env bash
# Idempotent local dev environment setup.
# Run once after cloning; safe to re-run at any time.
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"

mkdir -p "$PROJECT_ROOT/logs"
LOG_FILE="$PROJECT_ROOT/logs/setup-$(date +%Y%m%d-%H%M%S).log"
exec > >(tee "$LOG_FILE") 2>&1

GREEN='\033[0;32m'; YELLOW='\033[1;33m'; RED='\033[0;31m'; BOLD='\033[1m'; NC='\033[0m'
info() { echo -e "${GREEN}[apix]${NC} $*"; }
warn() { echo -e "${YELLOW}[warn]${NC} $*"; }
die()  { echo -e "${RED}[fail]${NC} $*" >&2; exit 1; }
step() { echo -e "\n${BOLD}── $* ──${NC}"; }

_on_exit() {
  echo ""
  echo -e "${BOLD}Full log:${NC} $LOG_FILE"
  read -rp "Press Enter to close…" _
}
trap _on_exit EXIT

info "Logging to $LOG_FILE"

# ── 1. Java 21 ──────────────────────────────────────────────────────────────
step "Java 21"
if java -version 2>&1 | grep -q 'version "21'; then
  info "Java 21 detected"
else
  warn "Java 21 not found."
  echo "  Install via SDKMAN (recommended):"
  echo "    curl -s https://get.sdkman.io | bash"
  echo "    sdk install java 21-tem"
  echo ""
  echo "  Or download from: https://adoptium.net/"
  die "Please install Java 21 and re-run this script."
fi

# ── 2. Maven ─────────────────────────────────────────────────────────────────
step "Maven"
if command -v mvn &>/dev/null; then
  MVN_VER="$(mvn -q --version 2>&1 | head -1 || true)"
  info "Maven ${MVN_VER}"
else
  die "Maven not found. Install: https://maven.apache.org/install.html"
fi

# ── 3. Docker ────────────────────────────────────────────────────────────────
step "Docker"
if docker info &>/dev/null; then
  info "Docker running"
else
  die "Docker not running. Start Docker Desktop (or the Docker daemon) and re-run."
fi

# ── 4. .env file ─────────────────────────────────────────────────────────────
step ".env"
cd "$PROJECT_ROOT"
if [[ ! -f .env.example ]]; then
  die ".env.example not found in $PROJECT_ROOT — repository may be incomplete."
fi

if [[ ! -f .env ]]; then
  cp .env.example .env
  info "Created .env from .env.example"
  warn "Review .env and set real values before running in any shared environment."
else
  info ".env already exists — skipping copy"
fi

# Load .env into this shell so DB vars are available below
set -a
# shellcheck disable=SC1091
source .env
set +a
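The `set -a … source … set +a` pattern above is what lets a plain `KEY=value` file reach child processes like `mvn`: while `-a` (allexport) is in effect, every assignment performed by `source` is automatically marked for export. A minimal sketch with a throwaway file (`DEMO_DB_USER` is illustrative, not a real APIX variable):

```shell
#!/usr/bin/env bash
# Demonstrate the set -a / source / set +a pattern:
# assignments in the sourced file become exported environment variables.
envfile=$(mktemp)
echo 'DEMO_DB_USER=apix' >"$envfile"   # stand-in for .env

set -a
# shellcheck disable=SC1090
source "$envfile"
set +a

# A child process sees the variable only because allexport exported it.
bash -c 'echo "child sees: $DEMO_DB_USER"'
# → child sees: apix
rm -f "$envfile"
```

Without the `set -a` bracket, `source` would set the variables in the current shell only, and `mvn`, `docker`, and other child processes would never see them.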

DB_USER="${APIX_DB_USER:-apix}"
DB_PASS="${APIX_DB_PASSWORD:-apix}"
DB_NAME="${APIX_DB_NAME:-apix}"
DB_PORT="${APIX_DB_PORT:-5432}"

# ── 5. PostgreSQL container ───────────────────────────────────────────────────
step "PostgreSQL"
CONTAINER=apix-postgres

if docker ps --format '{{.Names}}' | grep -qx "$CONTAINER"; then
  info "Container '$CONTAINER' already running"
elif docker ps -a --format '{{.Names}}' | grep -qx "$CONTAINER"; then
  info "Starting existing container '$CONTAINER'"
  docker start "$CONTAINER" >/dev/null
else
  info "Creating container '$CONTAINER' (postgres:16-alpine, port $DB_PORT)"
  docker run -d \
    --name "$CONTAINER" \
    --restart unless-stopped \
    -e POSTGRES_USER="$DB_USER" \
    -e POSTGRES_PASSWORD="$DB_PASS" \
    -e POSTGRES_DB="$DB_NAME" \
    -p "${DB_PORT}:5432" \
    postgres:16-alpine >/dev/null
fi

# Wait for Postgres to accept connections
info "Waiting for PostgreSQL to become ready…"
for i in $(seq 1 30); do
  if docker exec "$CONTAINER" pg_isready -U "$DB_USER" -q 2>/dev/null; then
    info "PostgreSQL ready"
    break
  fi
  if [[ $i -eq 30 ]]; then
    die "PostgreSQL did not become ready within 30 s. Check: docker logs $CONTAINER"
  fi
  sleep 1
done

# ── 6. Liquibase migrations ───────────────────────────────────────────────────
step "Database migrations"
JDBC_URL="jdbc:postgresql://localhost:${DB_PORT}/${DB_NAME}"

if [[ ! -f "$PROJECT_ROOT/pom.xml" ]]; then
  warn "No pom.xml found — Maven project not scaffolded yet (WORKLOG Block 1 / C-00)."
  warn "Skipping Liquibase migrations. Run this script again after completing Block 1."
else
  # apix-registry depends on apix-common and apix-verification; install them
  # to the local repository first so Maven can resolve them during the
  # liquibase:update goal (no source files yet — this completes in seconds).
  info "Installing shared modules to local repository…"
  mvn -q install -pl apix-common,apix-verification -DskipTests

  info "Running Liquibase migrations on $JDBC_URL"
  mvn -q liquibase:update -pl apix-registry \
    -Dliquibase.url="$JDBC_URL" \
    -Dliquibase.username="$DB_USER" \
    -Dliquibase.password="$DB_PASS"
  info "Migrations applied"
fi

# ── Done ──────────────────────────────────────────────────────────────────────
echo ""
echo -e "${GREEN}${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${GREEN}${BOLD}  APIX dev environment ready${NC}"
echo -e "${GREEN}${BOLD}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""
echo "  Start all services:    ./scripts/dev.sh"
echo "  View logs:             ./scripts/logs.sh [registry|portal|spider]"
echo "  Stop everything:       ./scripts/stop.sh"
echo "  Full reset (drop DB):  ./scripts/reset.sh"
echo ""
echo "  Registry API → http://localhost:8180"
echo "  Portal       → http://localhost:8081"
echo ""
@@ -0,0 +1,38 @@
#!/usr/bin/env bash
# Stop all dev-mode Quarkus processes and the PostgreSQL container.
set -uo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
PID_DIR="$PROJECT_ROOT/.pids"

GREEN='\033[0;32m'; NC='\033[0m'
info() { echo -e "${GREEN}[apix]${NC} $*"; }

# tmux session
if command -v tmux &>/dev/null && tmux has-session -t apix-dev 2>/dev/null; then
  info "Killing tmux session apix-dev"
  tmux kill-session -t apix-dev
fi

# PID files (background mode)
if [[ -d "$PID_DIR" ]]; then
  for pidfile in "$PID_DIR"/*.pid; do
    [[ -f "$pidfile" ]] || continue
    pid=$(cat "$pidfile")
    module=$(basename "$pidfile" .pid)
    if kill -0 "$pid" 2>/dev/null; then
      info "Stopping $module (PID $pid)"
      kill "$pid" 2>/dev/null || true
    fi
    rm -f "$pidfile"
  done
fi

# PostgreSQL container
if docker ps --format '{{.Names}}' | grep -qx apix-postgres; then
  info "Stopping apix-postgres container"
  docker stop apix-postgres >/dev/null
fi

info "All stopped"