Policy Owner: CTO
Co-signers: CISO, General Counsel
Effective Date: May 8, 2026
Reviewed: Annually
Next Review: May 8, 2027

Purpose

To establish rules and expectations for two distinct but related sets of activities at Neuroscale:
  1. Employee use of third-party AI and generative-AI tools in the course of Neuroscale work.
  2. Neuroscale’s responsible-development obligations for the AI features and AI products Neuroscale builds and offers to customers.
This policy is designed to enable productive use of AI while protecting Neuroscale’s Confidential data, customer data, intellectual property, and the rights of individuals affected by AI systems we build or deploy.

Scope

This policy applies to all Neuroscale employees, contractors, and other personnel; to all Neuroscale-developed software and services that incorporate machine-learning models, large language models (LLMs), or other AI components; and to all third-party AI tools used in connection with Neuroscale work or on Neuroscale-managed devices or accounts.

Policy

Neuroscale uses AI as a productivity tool internally and develops AI as a product externally. In both contexts, employees and the company must:
  • Protect Confidential and customer data from unauthorized exposure to AI providers.
  • Comply with applicable laws and contractual commitments.
  • Treat AI output as draft work product subject to human review, not as an authoritative source.
  • Build AI features that are accurate, transparent, fair, and accountable to the people affected by them.
This policy supplements (and does not replace) the Information Security Policy, Data Management Policy, Code of Conduct, Secure Development Policy, and Open Source & SBOM Policy.

Part A — Employee use of third-party AI tools

Approved tools

The following tools are approved for Neuroscale work, including (where indicated) work involving Confidential data. All have data-processing terms in place that prevent the provider from training models on Neuroscale inputs.
Tool | Approved for Confidential data? | Notes
Anthropic (Claude — API and Team / Enterprise plans) | Yes | Enterprise data-processing terms in effect; no training on inputs
OpenAI (ChatGPT Enterprise / API) | Yes | Enterprise terms; SSO via Rippling; no training on inputs
xAI (Grok — API / Enterprise) | Yes | Enterprise data-processing terms in effect; no training on inputs
Cerebras (cerebras.ai inference) | Yes | Enterprise inference terms; no training on inputs
The current authoritative list is maintained in Vanta and enforced through the IT-managed allowlist in Cloudflare One (Gateway and Access). Any additions go through the Vendor Risk Assessment process and must be approved by the CISO.

Prohibited tools and uses

  • Consumer-tier free AI services (free ChatGPT, free Claude.ai, Gemini consumer, Perplexity free, etc.) must not be used for any Neuroscale work involving Confidential, Restricted, or customer data, source code, secrets, customer prompts, internal strategy, or personnel information. Free-tier terms typically permit the provider to use inputs for training and offer no enterprise data-handling commitments.
  • Personal AI accounts (e.g., a personal ChatGPT Plus subscription) must not be used for Neuroscale work, even if the underlying model is the same as an approved enterprise tool. Approval attaches to the contracted tenancy, not the model.
  • Browser extensions or plugins that send page content to an AI provider must not be installed on Neuroscale-managed devices unless on the IT allowlist.
  • AI-generated voice cloning, deepfakes, or impersonation of any person — employee, customer, public figure — is prohibited except for clearly labeled internal training or red-team purposes pre-approved by the CISO.

Confidential data + AI

Employees must not paste, upload, or otherwise transmit any of the following to a non-approved AI tool:
  • Source code from any Neuroscale repository.
  • Customer data, including prompts and outputs from customers using Neuroscale products.
  • Personally identifiable information (PII) or other regulated data.
  • Secrets, credentials, API keys, or anything that would otherwise be stored in HashiCorp Vault or Dashlane.
  • Strategic plans, financial data, unannounced product information, M&A discussions, or board materials.
  • Internal legal documents, employee personnel files, or compensation data.
Approved tools have data-processing terms (DPAs and equivalent) committing the provider not to use Neuroscale inputs to train models and to handle data in line with our Data Management Policy and customer commitments.

Code generated or assisted by AI

  • All AI-generated and AI-assisted code must follow Secure Coding and Code Review. The fact that code came from an AI tool does not waive any review, testing, or security requirement.
  • AI-generated code is treated as “third-party” for license purposes. Engineers must not commit AI-suggested code that reproduces non-permissively licensed code. License-filtering features (e.g., Copilot’s duplicate-detection / public-code filter) must be enabled. See the Open Source & SBOM Policy.
  • PR descriptions must disclose substantial AI assistance — i.e., where a meaningful function, file, or design was generated or transformed by an AI tool — so reviewers can apply appropriate scrutiny. Trivial autocomplete need not be disclosed.
  • Secrets must never be passed to AI tools to generate, “rotate,” or troubleshoot — even briefly. Treat any secret that touches an AI prompt as compromised and rotate it via Secrets Management.

AI-generated content disclosure

Where AI-generated content is communicated externally — marketing copy, customer emails, blog posts, public statements — a human reviewer must verify accuracy and appropriateness. Employees must comply with any contractual or regulatory disclosure obligation requiring that AI-generated content be labeled (for example, where a customer or platform requires it). Use of AI to generate content that fabricates quotes, citations, or events is prohibited.

Customer-facing product labeling. AI-generated outputs surfaced through Neuroscale products — including candidate summaries, ranking explanations, draft outreach copy, and any agentic tool output — are labeled as AI-generated at the point of customer or end-user display. The label is a persistent, visible element of the rendered output (not a one-time onboarding banner). Engineering owns implementation; the General Counsel reviews label copy and placement on each material change. The labeling obligation implements California SB 942 (Bus. & Prof. Code §§22757–22757.4, effective Jan 1, 2026), the Utah AI Policy Act (Utah Code §§13-72-101 et seq.), and analogous state-law transparency obligations.
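
A minimal sketch of how a persistent label could travel with each AI output is below. The names (LabeledOutput, label_ai_output) and the label copy are hypothetical illustrations under stated assumptions, not Neuroscale's actual implementation; label copy is subject to General Counsel review.

    # Hypothetical sketch: the label travels with the output object so every
    # rendering surface (web UI, API consumer, export) receives it, rather
    # than relying on a one-time onboarding banner.
    from dataclasses import dataclass

    AI_LABEL = "AI-generated; may contain errors"  # placeholder copy; GC-reviewed in practice

    @dataclass(frozen=True)
    class LabeledOutput:
        """Model output plus the provenance label the UI must render alongside it."""
        text: str
        provenance: str = "ai_generated"
        label: str = AI_LABEL

    def label_ai_output(model_text: str) -> LabeledOutput:
        # Every customer-facing surface renders .label as a persistent element.
        return LabeledOutput(text=model_text)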

Part B — Neuroscale’s responsible AI development

Customer-facing AI features

Any customer-facing AI feature — including but not limited to LLM-powered features, recommendation systems, classification, summarization, and agentic workflows — must, before launch, satisfy the following:
  • Accuracy and limitations. Documented evaluation against representative data, with measured error / hallucination rates and known failure modes captured in the model card (see “Model cards” below).
  • Hallucination disclosure. Customer-facing UI clearly indicates that outputs are AI-generated and may be inaccurate. Outputs that are presented as fact (e.g., citations, quotes, structured data) include source links or are clearly bounded.
  • Content moderation. Inputs and outputs are screened for prohibited content (CSAM, violent extremism, targeted harassment) using provider-side filters and Neuroscale-side controls as appropriate to the use case.
  • Age gates. If the feature is reasonably foreseeable to be used by individuals under 18, age-appropriate-design controls are evaluated by the General Counsel — including but not limited to compliance with the Children’s Online Privacy Protection Act (COPPA, 15 U.S.C. §§6501–6506) and the California Age-Appropriate Design Code (Cal. Civ. Code §§1798.99.28 et seq.).
  • Automated decision-making rights. Where a Neuroscale feature produces a decision based solely on automated processing, including profiling, that produces legal effects concerning a data subject or similarly significantly affects them, the deployment must comply with GDPR Art. 22: rely on a permitted Art. 22(2) basis (contract necessity, EU/Member-State law, or explicit consent) and, where Art. 22(2)(a) or (c) applies, implement suitable safeguards, including the data subject’s right to obtain human intervention, express their point of view, and contest the decision (Art. 22(3)). Where required by the CPRA (Cal. Civ. Code §1798.185(a)(16) and the California Privacy Protection Agency’s Automated Decision-making Technology regulations), provide the corresponding pre-use notice, opt-out, and access mechanisms. The General Counsel reviews any feature within scope before launch, and the deployment surface (UI copy, API toggle, support workflow) is documented in the model card.

Training data

Training, fine-tuning, and evaluation data used in Neuroscale models must:
  • Have documented provenance via a dataset card. Every dataset has a written dataset card recorded in the AI Model Registry covering: source(s); collection method (licensed, customer-contributed-with-consent, public-web-with-license-review, synthetic, etc.); collection date(s) and frozen snapshot identifier; lawful basis for use (license name, customer-agreement clause, public-domain dedication, fair-use opinion from counsel); known biases or representativeness limitations; PII or sensitive-category exposure and any de-identification applied; volumetric summary; and the dataset owner. Dataset cards are linked from each model card that consumes the dataset. The format and minimum fields are aligned with Datasheets for Datasets (Gebru et al. 2021) and the EU AI Act Art. 53(1)(d) “sufficiently detailed summary of training content” obligation for general-purpose AI models. A minimal sketch of these fields appears after this list.
  • Include Customer Content only after universal deidentification. Customer Content (including Candidate data submitted to Arbi) is a documented training-data source for Neuroscale’s own AI models pursuant to Section 7 of the Terms of Service and the executed DPA. All Customer Content destined for training — regardless of subscription tier — first passes the Deidentification Standard set forth below; raw Customer Content is never used in training, only the resulting Deidentified Data. Tier-based controls govern training use, not deidentification: Free-Tier Customer Content has no training-use opt-out, and the resulting Deidentified Data is used in training; paid-tier Customer Content is admitted to training by default, with a training-use opt-out available via Customer Admin settings or written request to privacy@neuroscale.ai. The opt-out applies prospectively only and does not require Neuroscale to retract Deidentified Data already incorporated into training before the opt-out. Neither Customer Content nor Deidentified Data is transmitted to or made available to any third-party AI provider for the purpose of training that third party’s AI models.
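
A minimal sketch of the dataset-card fields listed above, expressed as a Python structure. The field names are illustrative assumptions, not the AI Model Registry's actual schema.

    # Illustrative dataset-card structure; field names are assumptions, not
    # the AI Model Registry's real schema.
    from dataclasses import dataclass

    @dataclass
    class DatasetCard:
        sources: list[str]             # origin(s) of the data
        collection_method: str         # licensed / customer-contributed-with-consent / synthetic ...
        collection_dates: str          # collection date(s)
        snapshot_id: str               # frozen snapshot identifier
        lawful_basis: str              # license name, agreement clause, counsel opinion, etc.
        known_biases: str              # representativeness limitations
        pii_exposure: str              # sensitive-category exposure and de-identification applied
        volumetrics: str               # record / token counts
        owner: str                     # accountable dataset owner
        linked_model_cards: list[str]  # model cards consuming this dataset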

Deidentification standard for training-data ingestion

This standard governs the transformation of Customer Content into Deidentified Data suitable for use in training, fine-tuning, and evaluating Neuroscale’s own AI models. The objective is to satisfy the “deidentified” definition in Cal. Civ. Code §1798.140(m) and analogous state-privacy-act definitions, and to support a defensible position under GDPR Recital 26 and EDPB Opinion 28/2024 that the resulting trained-model state is no longer Personal Data. Neuroscale does not rely on a claim of true anonymization for upstream Personal Data; the upstream pipeline is treated as Personal-Data Processing under a documented legitimate-interest basis (LIA on file with the General Counsel), and the deidentified-output position attaches to the trained model and downstream artifacts only.

Stage 1 — Direct-identifier redaction

Every record entering the training pipeline shall be processed by an automated redaction pass that removes, at minimum, the following fields and the entity-recognized analogues thereof in free-text fields:
  • Personal name (first, last, middle, suffix, alias, social handle).
  • Contact data — email address, telephone number, postal address (street level), URL pointing to a personal page (LinkedIn, GitHub, personal website, portfolio).
  • Government identifiers — SSN, driver’s-license number, passport number, taxpayer ID, national ID, immigration number, employer-issued candidate ID where the ID encodes identity.
  • Financial identifiers — account number, payment-card number, IBAN.
  • Biometric identifiers — voiceprint, faceprint, fingerprint, retina/iris template, gait signature.
  • Photographs and audio of identifiable persons.
  • Free-text occurrences of “I am [name],” “my name is [name],” and analogous self-identifying disclosures.
Redaction is performed by named-entity recognition with conservative-bias rules (false positives are preferred over false negatives, i.e., over-redaction is acceptable) and verified by sampling-based human review on a rolling basis (see audit cadence below).
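
A minimal Stage-1 sketch follows, assuming regular expressions for structured identifiers and spaCy for name recognition. Both the pattern set and the model choice are illustrative assumptions, not the production redaction taxonomy.

    # Illustrative Stage-1 pass: regex redaction of structured identifiers,
    # then NER-based name redaction. Patterns and spaCy model are assumptions.
    import re
    import spacy

    PATTERNS = {
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "PHONE": re.compile(r"\b(?:\+?\d{1,3}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
        "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "URL":   re.compile(r"https?://\S+"),
    }

    nlp = spacy.load("en_core_web_sm")  # assumed model; production would be tuned

    def redact(text: str) -> str:
        # Structured identifiers first.
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        # Replace person entities from the end so earlier offsets stay valid;
        # conservative bias means uncertain spans should also be dropped.
        doc = nlp(text)
        for ent in reversed(doc.ents):
            if ent.label_ == "PERSON":
                text = text[:ent.start_char] + "[NAME]" + text[ent.end_char:]
        return text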

Stage 2 — Quasi-identifier generalization and tokenization

After Stage 1, the following quasi-identifiers shall be transformed before any record is admitted to a training corpus:
  • Employer name — replaced with a non-reversible hash and bucketed to an industry-and-size tier (e.g., tech_5k_to_50k_emp, consulting_top_25_global). Original-employer-name lookup tables are not retained in or alongside the training corpus.
  • Educational institution — generalized to a tier and country (e.g., r1_research_us, top_50_business_school_eu).
  • Dates — individual calendar dates are generalized to year-only granularity. Date ranges with duration ≤ 90 days are generalized to the containing quarter; ranges ≤ 30 days are generalized further, to the containing year, since shorter ranges are more identifying.
  • Geographic location — generalized to metropolitan statistical area (US) or NUTS-2 region (EU); rural locations to country.
  • Job titles — normalized to a controlled vocabulary; verbatim title preserved only where the controlled-vocabulary term retains the recruiting-relevant information.
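
A minimal Stage-2 sketch under stated assumptions: a salted SHA-256 hash for the non-reversible employer transform, simple headcount tiers for bucketing, and the date-range rule above. The salt is held outside the corpus; all names and thresholds are illustrative.

    # Illustrative Stage-2 transforms; tier taxonomy and helper names are assumptions.
    import hashlib

    def hash_employer(name: str, salt: bytes) -> str:
        # Salted SHA-256; no name -> hash lookup table is retained with the corpus.
        return hashlib.sha256(salt + name.strip().lower().encode()).hexdigest()[:16]

    def bucket_employer(industry: str, headcount: int) -> str:
        # Industry-and-size tier, e.g. "tech_5k_to_50k_emp".
        if headcount < 5_000:
            size = "under_5k_emp"
        elif headcount < 50_000:
            size = "5k_to_50k_emp"
        else:
            size = "over_50k_emp"
        return f"{industry}_{size}"

    def generalize_range(start_year: int, start_month: int, duration_days: int) -> str:
        # Shorter ranges are more identifying, so they get coarser output.
        if duration_days <= 30:
            return str(start_year)                                 # year only
        if duration_days <= 90:
            return f"{start_year}-Q{(start_month - 1) // 3 + 1}"   # containing quarter
        return str(start_year)                                     # default year granularity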

Stage 3 — k-anonymity and l-diversity gates

After Stage 2, the corpus is partitioned by the equivalence class formed by the post-generalization quasi-identifiers, and each record is admitted to the training corpus only if:
  • k ≥ 10 — the equivalence class contains at least 10 records sharing the same quasi-identifier values; and
  • l ≥ 2 — for any retained sensitive attribute (e.g., compensation band, employment-status indicator), the equivalence class exhibits at least 2 distinct values.
Records that fail the k-anonymity gate are dropped from the training corpus or returned to Stage 2 for further generalization.
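
A minimal sketch of the Stage-3 gate, assuming a pandas representation of the post-Stage-2 corpus. The column names, and the drop-only behavior (rather than returning failing records to Stage 2), are illustrative.

    # Illustrative k-anonymity / l-diversity gate over a post-Stage-2 corpus.
    import pandas as pd

    QUASI_IDS = ["employer_bucket", "edu_tier", "year", "region", "title_norm"]  # assumed columns
    SENSITIVE = ["comp_band", "employment_status"]                               # assumed columns
    K, L = 10, 2

    def admit(corpus: pd.DataFrame) -> pd.DataFrame:
        def passes(group: pd.DataFrame) -> bool:
            if len(group) < K:  # k-anonymity: equivalence class of at least 10 records
                return False
            # l-diversity: at least 2 distinct values per retained sensitive attribute
            return all(group[col].nunique() >= L for col in SENSITIVE)
        # Failing records are simply dropped here; production may instead send
        # them back to Stage 2 for further generalization.
        return corpus.groupby(QUASI_IDS, dropna=False).filter(passes)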

Stage 4 — Sensitive-attribute removal

The following attributes are removed before training, regardless of whether they appear post-Stage-3:
  • Special-category data within the meaning of GDPR Art. 9: racial or ethnic origin, political opinions, religious or philosophical beliefs, trade-union membership, genetic data, biometric data for unique identification, health data, sex life or sexual orientation.
  • Sensitive Personal Information within the meaning of Cal. Civ. Code §1798.140(ae).
  • Data of individuals under 16 (or under 13 where COPPA applies) — entire record dropped, not redacted.
  • Data flagged by the Customer as restricted, sensitive, or under a legal-hold notice.

Stage 5 — Differentially-private training

Training jobs that consume the deidentified corpus shall, in addition to the above, apply differential-privacy controls calibrated to the training method:
  • Full fine-tuning of foundation models — DP-SGD with target privacy budget ε ≤ 6, δ ≤ 1 / N (where N is the corpus size), per-example gradient clipping, calibrated noise multiplier. The privacy accountant report is recorded in the dataset card.
  • LoRA / adapter fine-tuning of foundation models — DP-SGD on the LoRA parameters with the same privacy budget (ε ≤ 6); full-parameter weights are frozen.
  • Embedding-only training (retrieval / re-ranking) — DP-SGD on the encoder with ε ≤ 10 (looser bound acceptable where the embeddings are not directly invertible to inputs).
  • Evaluation-only consumption — no DP requirement; corpus is treated as read-only and not used to update weights.
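
A minimal sketch of wiring the full-fine-tuning budget (ε ≤ 6, δ ≤ 1/N) into a training job, assuming PyTorch with the Opacus library. The model and optimizer setup is illustrative; only the privacy wiring is the point.

    # Illustrative DP-SGD wiring with Opacus; assumes a PyTorch model and loader.
    import torch
    from opacus import PrivacyEngine

    def make_dp_training(model, train_loader, epochs: int):
        n = len(train_loader.dataset)              # corpus size N
        optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
        engine = PrivacyEngine()
        model, optimizer, train_loader = engine.make_private_with_epsilon(
            module=model,
            optimizer=optimizer,
            data_loader=train_loader,
            epochs=epochs,
            target_epsilon=6.0,                    # ε ≤ 6 per this standard
            target_delta=1.0 / n,                  # δ ≤ 1/N per this standard
            max_grad_norm=1.0,                     # per-example gradient clipping
        )
        # The accountant's spent (ε, δ) is what the dataset card records.
        return model, optimizer, train_loader, engine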

Stage 6 — Output controls and reidentification testing

After training, the resulting model state shall be subject to the Reidentification Audit Procedure before any production deployment. Failure of the audit halts deployment, returns the corpus to Stage 2 for tighter generalization, and triggers a fresh training run.
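
One illustrative probe of the kind the Reidentification Audit Procedure could include, reusing the Stage-1 identifier patterns to scan sampled model outputs. This is a sketch, not the procedure itself; the generate function and PATTERNS are assumed from the earlier sketches.

    # Illustrative output probe: any direct identifier surfacing in sampled
    # generations is treated as an audit failure.

    def audit_outputs(generate, prompts: list[str], patterns) -> list[tuple[str, str]]:
        hits = []
        for prompt in prompts:
            output = generate(prompt)
            for label, pattern in patterns.items():
                if pattern.search(output):
                    hits.append((label, prompt))
        return hits  # non-empty => halt deployment, return corpus to Stage 2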

Documentation and ownership

  • Dataset card — every training run records, in the AI Model Registry, the Stage-1 redaction taxonomy version, Stage-2 generalization parameters, Stage-3 k and l values actually achieved (not target), Stage-4 sensitive-attribute exclusions, Stage-5 privacy-accountant ε / δ, and the Stage-6 audit result.
  • LIA — a written Legitimate-Interest Assessment for the upstream Stage-1 / Stage-2 / Stage-3 processing of Personal Data is maintained by the General Counsel and reviewed at least annually.
  • Owner — the CTO is the technical owner of the deidentification pipeline; the General Counsel is the legal owner of this standard; the CISO signs off on each new release of the redaction-taxonomy or k-anonymity parameters.

In addition to the Deidentification Standard above, training, fine-tuning, and evaluation data must:
  • Not be obtained by scraping in violation of a website’s Terms of Service or robots.txt where doing so would create legal risk. Where scraping is used, counsel reviews the source.
  • Respect copyright. Training on copyrighted material without a license is undertaken only after written General Counsel approval. Where applicable, opt-out signals (e.g., ai.txt, robots.txt disallow rules, or per-publisher opt-out lists) are honored.
  • Be governed by AI-training agreements with vendors. Any third-party data-licensing agreement used for training must be reviewed by the General Counsel and tracked in Vanta.

Bias, fairness, and explainability

  • Pre-deployment evaluation. New customer-facing models and material model updates undergo bias and fairness evaluation appropriate to the use case (e.g., subgroup performance, refusal-rate parity, false-positive parity). Results are documented in the model card.
  • Mandatory disparate-impact testing for employment-AI. Any feature used to substantially assist or replace discretionary decision-making in candidate sourcing, screening, ranking, scoring, qualification, interview shortlisting, or offer recommendation shall, before production deployment and at least annually thereafter, pass the Employment-AI Bias-Audit and Disparate-Impact Testing Procedure. The procedure implements the 4/5ths adverse-impact rule of the Uniform Guidelines on Employee Selection Procedures (29 C.F.R. §1607.4(D)), the EEOC’s May 2023 technical guidance on algorithmic discrimination under Title VII, and the analogous obligations under NYC Local Law 144 of 2021, the Colorado AI Act (C.R.S. §§6-1-1701 et seq.), Illinois HB 3773 (775 ILCS 5/2-103.1), and Texas TRAIGA. Failure of the 4/5ths gate halts deployment unless the model owner provides a documented validity defense per 29 C.F.R. §§1607.5–1607.15. A computation sketch of the 4/5ths gate appears after this list.
  • Independent-auditor sign-off. Bias audits required by NYC LL 144 §20-871 are conducted by an Independent Auditor satisfying 6 RCNY §5-301; results are published as a public bias-audit summary on the AI Training-Data Transparency Notice.
  • Explainability. Where outputs drive consequential decisions about a person, the system must support a human-understandable explanation of the basis for the output, to the extent technically feasible.
  • Algorithmic-discrimination disclosure. If the General Counsel determines that a Neuroscale-built or Neuroscale-fine-tuned AI system has caused, or is reasonably likely to have caused, algorithmic discrimination within the meaning of C.R.S. §6-1-1701(1), the General Counsel shall provide notice to the Colorado Attorney General within the time period required by C.R.S. §6-1-1703(7) and to any other state Attorney General to whom an analogous obligation applies. Production deployment is halted in the affected jurisdiction(s) until remediation per the Employment-AI Bias-Audit and Disparate-Impact Testing Procedure.
  • Model-harm incident response. Reports of bias, discriminatory output, hallucinated harm, or other model-driven harm follow the Incident Response Plan. The Incident Manager engages the model owner, the General Counsel, and (where customer-facing) the CTO. A post-incident model-card update and, where warranted, a customer notification, is required.
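
A minimal computation sketch of the 4/5ths gate of 29 C.F.R. §1607.4(D): each group's selection rate is divided by the highest group's rate, and any ratio below 0.8 fails the gate. Group names and counts below are illustrative.

    # Illustrative 4/5ths adverse-impact check; inputs are per-group counts.

    def impact_ratios(selected: dict[str, int], considered: dict[str, int]) -> dict[str, float]:
        rates = {g: selected[g] / considered[g] for g in considered if considered[g]}
        top = max(rates.values())
        return {g: rate / top for g, rate in rates.items()}

    def passes_four_fifths(selected, considered, threshold: float = 0.8) -> bool:
        return all(r >= threshold for r in impact_ratios(selected, considered).values())

    # Example: group A selected 48/120 (40%), group B 30/110 (~27%);
    # impact ratio ~ 0.68 < 0.8, so the gate fails.
    assert not passes_four_fifths({"A": 48, "B": 30}, {"A": 120, "B": 110})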

Model cards

Each customer-facing model has a model card recording: intended use, training data summary, evaluation results, known limitations, recommended uses to avoid, and the responsible owner. Model cards are tracked in the AI Model Registry.

Regulatory compliance

The General Counsel maintains the authoritative regulatory matrix. Key regimes currently in scope:

EU and UK

  • EU Artificial Intelligence Act (Regulation (EU) 2024/1689). The AI Act establishes a tiered framework of prohibited, high-risk, limited-risk, and minimal-risk AI systems. Neuroscale’s tiering, recorded per feature in the AI Model Registry, is:
    • High-risk (Annex III(4) — employment / worker management). Any Neuroscale feature used to substantially assist or replace discretionary decision-making in recruitment, screening, ranking, or evaluation of candidates is a high-risk AI system. Neuroscale, as the provider within the meaning of Art. 3(3), performs the obligations under Arts. 9–17: risk-management system across the lifecycle (Art. 9); data and data-governance practices for training, validation, and testing including the bias-examination and representativeness obligations of Art. 10 (operationalized in the Deidentification Standard and the Employment-AI Bias-Audit and Disparate-Impact Testing Procedure); technical documentation per Annex IV (recorded in the AI Model Registry); record-keeping and logging (Art. 12); transparency and instructions for use to deployers (Art. 13); human oversight (Art. 14); accuracy, robustness, and cybersecurity (Art. 15); quality-management system (Art. 17); conformity assessment, CE marking, and EU declaration of conformity (Arts. 43–48); EU-database registration (Art. 49); post-market monitoring and serious-incident reporting (Arts. 72–73); and authorized-EU-representative designation (Art. 22) since Neuroscale has no EU establishment. Deployer flow-down obligations under Art. 26 are addressed in Section 7.3 of the Terms of Service and the Customer-facing toolkit. As of the effective date of this policy, no Neuroscale feature has been placed on the EU market and no Neuroscale-trained model is in production; the General Counsel re-confirms the tier and the corresponding documentation set before any EU launch and on each material feature change.
    • Limited-risk (Art. 50 transparency). Customer-facing chat, voice, and content-generation surfaces operate under Art. 50 disclosure obligations: users are informed they are interacting with AI, and any AI-generated synthetic content is labeled per Art. 50(2)–(4); customer-facing product labeling is implemented per the AI-generated content disclosure section above.
    • Prohibited (Art. 5). No Neuroscale feature performs the practices prohibited under Art. 5 — including social scoring, emotion-inference in workplace or education contexts, untargeted biometric scraping, real-time remote biometric ID for law enforcement, or subliminal manipulation; see High-risk and prohibited use cases.
  • General-purpose AI model obligations (EU AI Act Arts. 51–55). Additional obligations attach to providers of general-purpose AI models, with heightened obligations for models presenting “systemic risk” (currently keyed to cumulative training compute exceeding 10^25 FLOP, a threshold the Commission may adjust by delegated act). As of the effective date of this policy, no Neuroscale-trained model approaches that threshold: Neuroscale’s largest in-house training runs are well below 10^23 FLOP, and Neuroscale operates third-party GPAI models (Anthropic, OpenAI, xAI, Cerebras hosted) as a downstream deployer rather than as a GPAI provider. The General Counsel and CTO re-confirm this determination on each new training run, each material model release, and at the annual policy review; if a Neuroscale model approaches the GPAI or systemic-risk threshold, the additional Arts. 51–55 obligations — including the Art. 53(1)(d) “sufficiently detailed summary of training content” obligation — are stood up before placement on the market, and the AI Training-Data Transparency Notice per-model row is updated to satisfy that obligation in addition to its US-law function.
  • GDPR and UK GDPR (Regulation (EU) 2016/679; UK Data Protection Act 2018 and UK GDPR). Neuroscale acts as a Processor on behalf of Customer-Controllers under the DPA template, and its operative obligations include: lawful-basis support under Art. 6 (and Art. 9 special-category processing where applicable); processor obligations under Art. 28 (documented instructions, confidentiality, security, sub-processor flow-down, data-subject-rights assistance, breach notification, return/deletion, audit cooperation); cross-border transfers under Chapter V (addressed in the Cross-Border Transfers Procedure). Direct controller-side obligations attach to a narrow set of Neuroscale-internal processing (workforce data, marketing). Customer-side GDPR obligations for Candidate data are addressed in Section 7.1 and Section 7.3(g) of the Terms of Service — Customer is the Controller and is responsible for Art. 14 indirect-collection notices, Art. 22 sole-automated-decision rights, Art. 35 DPIAs, and analogous UK GDPR obligations.
  • Cross-border transfers — SCCs, UK IDTA, EU-US Data Privacy Framework. The current state of Neuroscale’s transfer-mechanism reliance is recorded in the Cross-Border Transfers Procedure and the Sub-processor List. The EU-US DPF, the UK Extension to the DPF, and the Swiss-US DPF are tracked for participation status; Standard Contractual Clauses (Implementing Decision (EU) 2021/914) and the UK International Data Transfer Addendum are incorporated into each Customer DPA where required. Transfer Impact Assessments are documented per the TIA template and refreshed on material change.
  • EU Data Act (Regulation (EU) 2023/2854). Where Neuroscale’s services collect or generate data through connected products or related services within the meaning of the Data Act, the General Counsel re-confirms scope; the recruiting product as currently configured is not within scope. Re-confirmation occurs at the annual review.
  • UK ICO guidance on AI in recruitment (2024). UK-deployed employment-AI features comply with the ICO’s enforcement priorities and the AI-and-data-protection toolkit; specific obligations are addressed in the Employment-AI Bias-Audit and Disparate-Impact Testing Procedure when a UK deployment is contemplated.

US federal

  • FTC Act §5 (15 U.S.C. §45) — unfair or deceptive AI claims, “AI-washing.” Customer-facing claims about model accuracy, fairness, capabilities, automation, or comparative performance shall be substantiated before publication and reviewed by the General Counsel and the Marketing lead. Forbidden patterns include unqualified superlatives (“most accurate,” “bias-free,” “human-level”), claims about training-data scale or composition that are not consistent with the AI Training-Data Transparency Notice, and capability demonstrations that materially exceed production behavior. Material disclaimers (“AI-generated; may contain errors”) accompany any consumer-facing AI output per Section 8 of the Terms of Service.
  • EEOC technical guidance on algorithmic discrimination under Title VII (May 2023), the ADA (May 2022), and the ADEA, applied through the Employment-AI Bias-Audit and Disparate-Impact Testing Procedure. Employment-AI features pass 4/5ths-rule adverse-impact testing before launch and annually thereafter.
  • Federal contractor obligations (OFCCP) — Air Force is a Neuroscale customer, and other Customers may be federal contractors subject to 41 C.F.R. Part 60 and OFCCP affirmative-action and recordkeeping obligations. Where a Customer represents that it is OFCCP-covered, Neuroscale provides cooperation letters and feature-level documentation to support that Customer’s compliance, and assesses whether Neuroscale itself is brought within OFCCP scope by the contract.
  • FCRA non-CRA disclaimer (15 U.S.C. §§1681 et seq.) — Neuroscale does not act as a “consumer reporting agency” within the meaning of the Fair Credit Reporting Act and does not furnish “consumer reports” to third parties. Where Neuroscale features score, rank, or qualify candidates, those outputs are provided to the Customer for human-review-supported decision-making and are governed by Section 8 of the Terms of Service; they are not consumer reports. Customers using Neuroscale outputs in adverse-action contexts are responsible for FCRA-compliant adverse-action notices where applicable to their separate background-check workflows. The General Counsel re-confirms this characterization on each material change to candidate-scoring features.
  • E.O. 14117 (Executive Order on Preventing Access to Americans’ Bulk Sensitive Personal Data and U.S. Government-Related Data by Countries of Concern; DOJ implementing rule at 28 C.F.R. Part 202) — bulk Customer Content (including Candidate data) and the resulting Deidentified Data are processed and stored exclusively in US regions of AWS and Vultr, and access by foreign nationals from countries of concern is restricted per the Access Control Policy and the Trade Compliance Policy. Training-compute resources shall not run on infrastructure that involves a covered data transaction with a country of concern.
  • Section 889 of the FY 2019 NDAA (FAR 52.204-25) — Neuroscale does not procure or integrate covered telecommunications equipment or services within the meaning of Section 889; vendor selection is screened against this prohibition per the Trade Compliance Policy.
  • BIS / EAR export controls on advanced AI (15 C.F.R. Parts 740, 742, 744; AI Diffusion rule and successor delegated rules) — addressed by the Trade Compliance Policy → AI-specific export controls and the Export Classification Matrix. The General Counsel re-confirms classification on each new training run, each material model release, and at the annual policy review.
  • COPPA (15 U.S.C. §§6501–6506) — addressed under Customer-facing AI features → Age gates above; data of individuals under 16 is dropped at Stage 4 of the Deidentification Standard.
  • OSTP AI Bill of Rights and OMB M-24-10 / M-24-13 (advisory) — Neuroscale’s controls map to the five AI Bill of Rights principles: notice (this policy + AI Training-Data Transparency Notice); opt-out (tier-based training-use opt-out under ToS §7.3; state-law profiling opt-outs under Privacy Notice); human oversight (Customer-facing AI features → No reliance for employment decisions; Employment-AI Bias-Audit Procedure); data privacy (Data Management Policy + Deidentification Standard); algorithmic accountability (bias evaluation, model card, Incident Response Plan).

State AI laws

  • Colorado AI Act (SB24-205, C.R.S. §§6-1-1701 et seq., effective February 1, 2026). Imposes developer- and deployer-side obligations on “high-risk” AI systems making “consequential decisions” — including employment, education, financial services, health care, housing, insurance, and legal services. Neuroscale’s developer-side controls: written risk-management program (this policy + Employment-AI Bias-Audit and Disparate-Impact Testing Procedure); impact assessment per C.R.S. §6-1-1702(3) (recorded in the AI Model Registry feature card); public statement per C.R.S. §6-1-1703(1)(a) (published at the AI Training-Data Transparency Notice); algorithmic-discrimination notice to the Colorado Attorney General per C.R.S. §6-1-1703(7) on identification of algorithmic discrimination. Customer-facing toolkit supports deployer-side disclosure and consumer-rights workflows.
  • NYC Local Law 144 of 2021 (Automated Employment Decision Tools; Title 20 §§20-870 to 20-874 of the NYC Administrative Code; 6 RCNY §§5-300 to 5-304). Applies where a Neuroscale feature substantially assists or replaces discretionary decision-making for employment in New York City. Developer-side obligations are addressed in the Employment-AI Bias-Audit and Disparate-Impact Testing Procedure: independent-auditor engagement, public bias-audit summary on the AI Training-Data Transparency Notice, and Customer-facing notice template (≥ 10 business days before use, alternative-selection-process availability).
  • Illinois HB 3773 (effective Jan 1, 2026; amends 775 ILCS 5/2-103.1). Prohibits use of AI that subjects employees or applicants to discrimination in employment on the basis of a protected class, and imposes notice requirements. Covered by the Employment-AI Bias-Audit and Disparate-Impact Testing Procedure and a Customer-facing notice template.
  • Illinois AI Video Interview Act (820 ILCS 42). Triggered only by AI analysis of video interviews. Neuroscale does not currently offer such a feature; any future feature requires consent, demographic reporting, and destruction obligations and is gated on an AI Review Log entry before launch.
  • Illinois BIPA (740 ILCS 14). Triggered only by collection or use of biometric identifiers. Neuroscale does not solicit or collect biometric data; any future facial-analysis or voiceprint feature requires BIPA-compliant notice, written consent, retention and destruction schedules, and is gated on an AI Review Log entry before launch.
  • Maryland HB 1202 (Md. Lab. & Empl. §3-717). Triggered only by facial-recognition technology in interviews. Neuroscale does not currently offer such a feature; if added, written consent is required before any analysis.
  • California ADMT regulations (CPPA, finalized 2025 under Cal. Civ. Code §1798.185(a)(16)). Pre-use notice, opt-out, and access mechanisms for automated decision-making technology in consumer contexts. Customer-facing toolkit provides ADMT pre-use notice and opt-out templates. The training-use opt-out under ToS §7.3 is contractual and does not substitute for the consumer-side ADMT opt-out, which Customers implement against their Candidates.
  • California AB 2013 (Cal. Civ. Code §§3110 et seq., effective Jan 1, 2026). Generative-AI training-data transparency: a public summary of each Neuroscale-trained or Neuroscale-fine-tuned generative-AI model is published at the AI Training-Data Transparency Notice. The summary is updated within 30 days of each new production deployment or material change.
  • California SB 942 (AI Transparency Act, Bus. & Prof. Code §§22757–22757.4, effective Jan 1, 2026). Customer-facing AI-generated content — including candidate summaries, ranking explanations, and AI-drafted communications — is labeled at the point of customer or end-user display (“AI-generated”); engineering ensures the label is rendered in the product UI per the AI-generated content disclosure section above. Customers are responsible for preserving the label in any onward distribution to candidates.
  • California SB 1001 (Bus. & Prof. Code §§17940–17943). Triggered only by an AI agent that engages a consumer in a conversation that may be mistaken for human. Neuroscale does not currently deploy such an agent; if added, the agent shall include a clear disclosure at the start of the conversation per SB 1001.
  • California AB 2655 (Defending Democracy from Deepfake Deception Act of 2024). Neuroscale does not generate or distribute synthetic media of political figures and prohibits such use under High-risk and prohibited use cases.
  • Texas Responsible AI Governance Act (“TRAIGA”) (HB 149, effective January 1, 2026). Risk-based obligations on high-risk AI systems in employment and other consequential-decision domains. Texas-deployed employment-AI features are covered by the Employment-AI Bias-Audit and Disparate-Impact Testing Procedure and the Texas-specific Customer-facing toolkit (risk disclosure, opt-out cooperation).
  • Utah AI Policy Act (Utah Code §§13-72-101 et seq., effective May 2024). Disclosure obligation where generative AI is used in a material way in consumer transactions. Customer-facing UI labels AI-generated content; Customer-facing toolkit provides Utah-specific material-use disclosure language.
  • Tennessee ELVIS Act (Tenn. Code Ann. §47-25-1101 et seq.). Voice / image / likeness protections. Not applicable to Neuroscale’s recruiting product; voice cloning of any person is prohibited under Prohibited tools and uses.
  • Connecticut SB 2 (status pending — General Counsel re-confirms enacted-or-not on each annual review). If enacted, treated parallel to the Colorado AI Act per the Employment-AI Bias-Audit and Disparate-Impact Testing Procedure.
  • State data-broker statutes (Cal. Civ. Code §1798.99.80 et seq.; Tex. Bus. & Com. Code Ch. 509; Vt. Stat. Ann. tit. 9, §2446 et seq.; Or. Rev. Stat. §646A.600 et seq.). Neuroscale is not a data broker under any of these statutes: it does not buy, sell, license, or share Personal Data or Deidentified Data with third parties for the third party’s commercial use. The General Counsel re-confirms this characterization annually and on any new productized output that involves data leaving Neuroscale’s control.
  • State comprehensive privacy acts (CCPA / CPRA, VCDPA, CPA, CTDPA, UCPA, TDPSA, FDBR, OCPA, MTCDPA, ICDPA, TIPA, INCDPA, DPDPA, NHDPA, NJDPA, MNCDPA, MDOCPA, RIDTPPA, KCDPA). Profiling and automated-decision-making rights, sensitive-data opt-in / opt-out, and universal opt-out signals (Global Privacy Control / GPC) are addressed in the Privacy Notice and operationalized through a Candidate-level training-use opt-out available to any Candidate by writing to privacy@neuroscale.ai.

Customer prompts and outputs

  • Retention. Customer prompts and model outputs are retained per the Data Retention Matrix and applicable contractual commitments. Default retention is the minimum needed to provide and support the service.
  • Confidentiality. Customer prompts and outputs are Confidential data of the customer and are protected per the Data Management Policy and the executed MSA / DPA.
  • Use limitations. Customer prompts and outputs may be used to train, fine-tune, evaluate, and improve Neuroscale’s own AI models pursuant to Section 7 of the Terms of Service and subject to the tier-based training-use opt-out rules described therein, and only after passing the Deidentification Standard — raw prompts and outputs are never used in training. They are not used for marketing, for benchmarking individual customers against one another, for transmission to any third-party AI provider for that provider’s training, or for any purpose outside the executed agreement.
  • Sub-processor disclosure. Third-party AI-model providers used in delivering customer-facing features are listed on the public Neuroscale subprocessor list and notified per DPA terms.

High-risk and prohibited use cases

Neuroscale will not knowingly build or operate AI systems that:
  • Produce or facilitate child sexual abuse material, non-consensual intimate imagery, or content sexualizing minors.
  • Conduct social scoring of natural persons of the kind prohibited by EU AI Act Art. 5.
  • Perform real-time remote biometric identification in publicly accessible spaces for law-enforcement purposes outside narrow legal exceptions.
  • Are designed primarily to deceive, defraud, or manipulate users.
  • Develop or facilitate weapons of mass destruction, including chemical, biological, radiological, or nuclear weapons, or critical-infrastructure cyberattacks.
Sales must escalate any opportunity in these areas to the General Counsel for a go / no-go decision before signing.

Training

All Neuroscale personnel complete AI Acceptable Use training:
  • At hire as part of Onboarding.
  • At least annually thereafter, delivered via Vanta.
  • On any material policy change, with re-acknowledgement required.
Engineers, product managers, and designers working on AI features complete additional responsible-AI training appropriate to their role.

Governance

  • Policy Owner: CTO.
  • Co-signers: CISO and General Counsel.
  • Annual review: This policy is reviewed at least annually and on any material change in applicable law or in Neuroscale’s AI product surface.

AI risk review

Material new AI features and models are reviewed by an AI risk review group consisting of the CTO, CISO, General Counsel, and the responsible product owner before launch. Decisions and conditions are recorded in the AI Review Log.

Exceptions

Requests for exceptions must be submitted to the CTO and the CISO for approval. Exceptions involving regulated data, customer-data training, or a use case in the “High-risk and prohibited” list also require General Counsel approval. Exceptions are documented and time-limited.

Violations & enforcement

Report violations to the CISO or via the Code of Conduct reporting channels. Violations may result in revocation of AI-tool access, suspension of system and network privileges, and disciplinary action up to and including termination. Material violations affecting customer data or regulatory compliance may also be reported to affected customers and regulators per the Incident Response Plan.

Version history

Version | Date | Description | Author | Approved by
1.0 | May 8, 2026 | Initial version | Cameron Wolfe | Ishan Jadhwani