This is the model registry referenced from the AI Acceptable Use Policy → Model cards. Every model surfaced through a Neuroscale product feature — whether built on a third-party provider or own-trained — has an entry here. New customer-facing models, material updates, and replacements are gated on a registry entry plus an AI Review Log decision.
Operational mirror: the technical artifacts (training datasets, eval scripts, model weights) live in the engineering data lake / model store. This page is the human-readable summary referenced from the policy and from external trust-center materials.
Model-card schema
Every entry captures the following sections:
- Model identity — name, version, intended customer use, owner.
- Provider — own-trained / Anthropic / OpenAI / AWS Bedrock / other (cite specific model and version).
- Training data summary — for own-trained models, source data categories and licensing. For third-party models, refer to the provider’s published documentation and our enterprise terms.
- Evaluation — benchmarks used, metrics, current scores, evaluation date. Include any fairness / bias evaluations relevant to the use case.
- Known limitations — failure modes, situations where the model should not be used.
- Uses to avoid — explicit list (e.g., “Do not use for medical diagnosis,” “Do not use as the sole basis for a hiring decision”).
- Data flow — what customer data flows to / from the model, what is logged, and what retention applies.
- EU AI Act classification — provisional tiering (prohibited / high-risk / limited-risk / minimal-risk) plus GPAI / systemic-risk flag where applicable. Confirmed by counsel before EU launch.
- Customer disclosures required — text used in product UI and in customer DPAs.
- DPIA reference — link to the DPIA covering the processing.
- AI review record — link to the AI Review Log entry that approved this version.
- Re-review date — default 24 months; sooner on material change.
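The twelve-field schema above can be sketched as a structured record. The following is an illustrative sketch only — field names and types are assumptions, not a mandated serialization for the registry:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelCard:
    # Model identity
    name: str
    version: str
    intended_use: str
    owner: str
    # Provider: "own-trained", "Anthropic", "OpenAI", "AWS Bedrock", or other
    provider: str
    # For own-trained models: source data categories and licensing;
    # for third-party models: pointer to provider docs and enterprise terms
    training_data_summary: str
    # Benchmarks, metrics, current scores, evaluation date, fairness evals
    evaluation: dict
    known_limitations: list
    uses_to_avoid: list
    # Customer data in/out, what is logged, retention
    data_flow: str
    # prohibited / high-risk / limited-risk / minimal-risk (provisional)
    eu_ai_act_tier: str
    gpai_flag: bool
    customer_disclosures: str
    dpia_reference: str
    review_record: str       # link to the approving AI Review Log entry
    re_review_date: date     # default 24 months from approval
```

A registry tool could validate that every field is populated before an entry is accepted; the registry itself remains the human-readable page, per the note above.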
Active models (customer-facing)
The four entries below are third-party AI provider integrations that power customer-facing product features (and are also approved for internal workforce use). They are listed publicly on the Sub-processor List as of 2026-05-07.
| Model / provider | Approved tier | Customer-facing use | Data flow | Customer-facing? | EU AI Act tier (provisional) | DPIA | Review record | Re-review |
|---|---|---|---|---|---|---|---|---|
| Anthropic — Claude (API + Team / Enterprise) | Enterprise / API | Production AI features in Neuroscale products; internal workforce assistant | Customer prompts + retrieved context → Anthropic API → response back to product. Inputs are not used to train Anthropic models per enterprise terms. Full data-flow detail per feature lives in the corresponding DPIA. | Yes | Limited-risk (Neuroscale’s use, under Art. 50 transparency obligations); provider operates a GPAI model. | Customer-facing AI DPIA, DPIA Register | AI Review Log 2026-05-07 | 2028-05-07 |
| OpenAI — ChatGPT / API (Enterprise + API) | Enterprise / API | Production AI features in Neuroscale products; internal workforce assistant | Same as above with OpenAI | Yes | Same as above | Customer-facing AI DPIA, DPIA Register | AI Review Log 2026-05-07 | 2028-05-07 |
| xAI — Grok (API + Enterprise) | Enterprise / API | Production AI features in Neuroscale products; internal workforce assistant | Same as above with xAI | Yes | Same as above | Customer-facing AI DPIA, DPIA Register | AI Review Log 2026-05-07 | 2028-05-07 |
| Cerebras — cerebras.ai inference (API) | Enterprise / API | Production AI features (fast inference); internal workforce assistant | Same as above with Cerebras | Yes | Same as above | Customer-facing AI DPIA, DPIA Register | AI Review Log 2026-05-07 | 2028-05-07 |
Reconfirmation cadence: every 24 months (next due 2028-05-07), or sooner on a material change in any provider’s terms or model surface, or in applicable BIS, EU AI Act, or state AI rules. Adding a new provider, retiring a provider, or routing a new data category to a provider requires a new AI Risk Review and a DPIA addendum.
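The 24-month default can be computed mechanically from the approval date. A minimal sketch (the function name is illustrative; day-of-month clamping for dates like January 31 is not handled here):

```python
from datetime import date

def next_re_review(approved: date, months: int = 24) -> date:
    """Default re-review date: 24 months after approval (policy default)."""
    years, month0 = divmod(approved.month - 1 + months, 12)
    return approved.replace(year=approved.year + years, month=month0 + 1)

# The 2026-05-07 approvals above fall due on 2028-05-07:
next_re_review(date(2026, 5, 7))  # -> date(2028, 5, 7)
```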
A separate model card per Neuroscale product feature is added to the per-feature table below once the product surface is named and the feature is launched. The provider rows above are the provider-level entries that any feature card cross-references.
Per-feature model cards
A feature card identifies the specific model and version invoked by a named Neuroscale product feature and is the artifact cross-referenced from the corresponding customer-facing DPIA. Provider terms set forth at Active models apply to each feature card by reference and shall not be restated.
Template — to be completed upon first feature launch:

| Feature | Model and version | Provider row | Intended use | Inputs and outputs logged | Bias and fairness evaluation | EU AI Act tier (provisional) | DPIA | Review record | Re-review |
|---|---|---|---|---|---|---|---|---|---|
| <feature; product surface> | <exact model identifier and version> | <link to provider row> | <single-sentence customer-visible purpose> | <categories logged; retention period> | <method; date; result; link> | <prohibited / high-risk / limited-risk / minimal-risk>; counsel confirmation required prior to EU placement on market | <DPIA link> | <AI Review Log entry> | <date; default 24 months from approval> |
A new or updated feature card shall be issued upon any of the following: (i) a change in model identifier or version; (ii) a change in provider; (iii) the routing of a new data category to the model; (iv) the introduction of a decision surface based solely on automated processing within the meaning of GDPR Art. 22 or the California Privacy Protection Agency’s Automated Decision-making Technology regulations; or (v) a material change to customer-facing disclosure copy. Each such event requires an AI Review Log entry and a DPIA addendum prior to launch.
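The five reissuance triggers above amount to a simple gate over a change set. A sketch under assumed event names (the identifiers below are illustrative, not canonical):

```python
# Events that require a new or updated feature card, per triggers (i)-(v);
# names are illustrative, not canonical identifiers.
REISSUE_TRIGGERS = {
    "model_version_change",       # (i) model identifier or version change
    "provider_change",            # (ii) change in provider
    "new_data_category",          # (iii) new data category routed to the model
    "solely_automated_decision",  # (iv) GDPR Art. 22 / CPPA ADMT decision surface
    "disclosure_copy_change",     # (v) material change to disclosure copy
}

def requires_new_feature_card(change_events: set) -> bool:
    """True if any event in the change set triggers reissuance
    (and hence an AI Review Log entry and DPIA addendum pre-launch)."""
    return bool(change_events & REISSUE_TRIGGERS)
```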
Own-trained and self-hosted models
This section governs (i) models that Neuroscale fine-tunes; (ii) open-source models that Neuroscale self-hosts, including without limitation Llama, Qwen, Mistral, and DeepSeek, where surfaced through a customer feature; and (iii) any model trained from scratch by Neuroscale. Each entry shall comprise the twelve schema fields specified at Model-card schema together with the additional fields tabulated below. Fine-tunes and from-scratch training are subject to the dataset-card requirement set forth in the AI Acceptable Use Policy → Training data. Where a model qualifies as a general-purpose AI model, the training-content summary required by EU AI Act Art. 53(1)(d) shall be prepared and approved by the General Counsel prior to placement on the EU market.
Template — to be completed upon first own-trained or self-hosted entry:

| Model identity | Base model and license | Training or fine-tune dataset (dataset card) | Hosting | Provider role | Bias and fairness evaluation | EU AI Act tier; GPAI flag | DPIA | Review record | Re-review |
|---|---|---|---|---|---|---|---|---|---|
| <internal name; version> | <base model; license (e.g., Llama 3.1 70B; Llama 3 Community License)> | <dataset-card link(s); for inference-only deployments of upstream weights, state "Not applicable; see upstream model card"> | <deployment surface and region> | Neuroscale operates the deployed model; upstream weight licensor identified separately | <method; date; subgroup metrics; link> | <tier>; GPAI: <yes / no>; counsel confirmation required prior to EU placement on market | <DPIA link> | <AI Review Log entry> | <date; default 24 months from approval> |
Inference-only deployments of upstream open-source weights. A registry entry is required where the model is surfaced through a customer feature. The training-data field shall cite the upstream model card; a Neuroscale dataset card is not required absent fine-tuning, reinforcement learning from human feedback, or other modification of weights.
Fine-tunes. Each fine-tune constitutes a distinct entry, notwithstanding that the base model is separately registered, because intended use, evaluation results, and dataset composition differ from the base. Training-data sourcing, lawful basis, and dataset-card requirements are governed by the AI Acceptable Use Policy → Training data.
Customer Content as a training-data source — deidentification required. Where a fine-tune or from-scratch training run consumes Customer Content (including Candidate data submitted to Arbi), the corpus shall be processed through the Deidentification Standard before admission, and the resulting model shall pass the Reidentification Audit Procedure before production deployment. The dataset card shall record the Stage-3 k and l values actually achieved, the Stage-5 differential-privacy accountant report, and the Stage-6 audit result. Tier-based training-use opt-outs are enforced upstream of the Deidentification Standard per Section 7.3 of the Terms of Service.
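The deidentification evidence the dataset card must record can be sketched as a small record type. Field names below are illustrative assumptions; the stage numbering follows the Deidentification Standard referenced above:

```python
from dataclasses import dataclass

@dataclass
class DeidentificationRecord:
    """Evidence the dataset card records for Customer-Content-derived
    corpora (illustrative field names; stages per the Deidentification
    Standard)."""
    stage3_k: int               # k-anonymity value actually achieved
    stage3_l: int               # l-diversity value actually achieved
    stage5_dp_report: str       # link to differential-privacy accountant report
    stage6_audit_passed: bool   # Reidentification Audit Procedure result

    def deployable(self) -> bool:
        """Production deployment requires a passing Stage-6 audit."""
        return self.stage6_audit_passed
```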
Provider terms — current state
All four approved providers operate under enterprise / API terms that prohibit training on Neuroscale-submitted inputs. Customer-facing AI processing is disclosed in the Sub-processor List and in the Customer DPA template. Adding a new provider, or moving any data category not previously covered to an existing provider, requires a new AI Risk Review and a DPIA addendum before launch.
Adding a model
- Product owner drafts the model card using the schema above.
- Engineering attaches evaluation results.
- General Counsel reviews EU AI Act tiering and customer-disclosure language.
- AI risk-review group approves per the AI Review Log procedure.
- The card is added here; a launch is not authorized until the entry is in this registry.
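The final gating rule — no launch without a registry entry and an approving review record — can be sketched as a pre-launch check. The registry shape and function name are hypothetical, for illustration only:

```python
def launch_authorized(model_id: str, registry: dict) -> bool:
    """A launch is authorized only if the model has a registry entry
    whose AI Review Log record is populated (illustrative check;
    'registry' maps model IDs to card dicts)."""
    entry = registry.get(model_id)
    return entry is not None and bool(entry.get("review_record"))
```

Such a check could run in CI before a feature flag flips, keeping the registry the source of truth rather than a post-hoc record.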
Retiring a model
When a model is retired or replaced:
- The current entry is moved to the Retired table below with the retirement date and the replacement model (if any).
- Customers affected are notified per the relevant DPA notice obligations.
Retired models
| Model identity | Retired on | Replaced by | Notes |
|---|---|---|---|
| None. | | | |
Cross-references
Version history
| Version | Date | Description | Author | Approved by |
|---|---|---|---|---|
| 1.0 | May 8, 2026 | Initial version | Cameron Wolfe | Ishan Jadhwani |