For security questions: security@neuroscale.ai · For privacy questions: privacy@neuroscale.ai · For legal: legal@neuroscale.ai
Effective date: May 9, 2026 · Last updated: May 9, 2026
Compliance attestations
We design controls against recognized industry frameworks and submit them to independent third-party assessment. Reports and attestations are available to customers and prospects under NDA.
SOC 2 Type II
In progress. Audit covers the Security, Availability, Confidentiality, and Privacy Trust Services Criteria. Type I report observation date: May 8, 2026. Type II observation period: August 1, 2026 – August 1, 2027. Auditor: Prescient Assurance. Reports are available to customers and prospects under NDA; request a copy via trust@neuroscale.ai.
ISO/IEC 27001:2022
In progress. Information Security Management System aligned to the 2022 standard.
CCPA / CPRA
Compliant. See our Privacy Notice for California-specific disclosures and how to exercise your rights.
GDPR
Compliant. We process personal data lawfully under the GDPR and offer a Data Processing Addendum (DPA) with Standard Contractual Clauses where applicable.
FedRAMP
Future target (Moderate). Not currently authorized.
Security highlights
- Encryption at rest — AES-256 for production data stores. Application-layer envelope encryption is performed via HashiCorp Vault Transit (Neuroscale-managed keys, held in Neuroscale’s self-hosted Vault cluster — key material never leaves Vault). Cloud-native at-rest encryption from each provider (AWS KMS for AWS-resident services; Vultr platform encryption for Vultr-resident services) is layered beneath the application-layer wrap.
- Encryption in transit — TLS 1.3 (or TLS 1.2 with strong cipher suites where 1.3 is unavailable) for all data in motion across public networks.
- Multi-factor authentication — required for all employee access to production and corporate systems.
- Least-privilege access — role-based access controls, just-in-time elevation for production, and quarterly access reviews.
- Single sign-on (SSO) — SAML and OIDC SSO available for customers on eligible plans.
- Logging and monitoring — security and application telemetry centralized, retained, and reviewed; alerts route to a 24x7 on-call rotation.
- Annual third-party penetration test — conducted by an independent assessor; summary letter available under NDA.
- Vulnerability management — continuous scanning, prioritized remediation SLAs, and findings tracked through to closure.
- Vulnerability disclosure & private bug bounty — Neuroscale operates a private, invitation-only bug-bounty program. Researchers may report a vulnerability or request an invitation by emailing security@neuroscale.ai; see the Vulnerability Management page for the disclosure SLA and our safe-harbor commitments.
- 24x7 incident response — paged on-call coverage, defined severity levels, and customer-notification commitments documented in our contracts.
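The in-transit policy above (TLS 1.3 preferred, TLS 1.2 with strong cipher suites only where 1.3 is unavailable) can be expressed directly in application code. A minimal sketch using Python's standard `ssl` module; the function name and the fallback flag are illustrative, not part of Neuroscale's codebase:

```python
import ssl


def make_client_context(allow_tls12_fallback: bool = True) -> ssl.SSLContext:
    """Build a client-side TLS context matching the stated policy:
    TLS 1.3 preferred, TLS 1.2 permitted only as an explicit fallback."""
    ctx = ssl.create_default_context()  # certificate verification on by default
    ctx.minimum_version = (
        ssl.TLSVersion.TLSv1_2 if allow_tls12_fallback else ssl.TLSVersion.TLSv1_3
    )
    ctx.maximum_version = ssl.TLSVersion.TLSv1_3
    return ctx


# Strict mode: any peer that cannot speak TLS 1.3 fails the handshake.
strict = make_client_context(allow_tls12_fallback=False)
print(strict.minimum_version == ssl.TLSVersion.TLSv1_3)  # True
```

Pinning both `minimum_version` and `maximum_version` keeps the negotiated protocol inside the policy window regardless of the interpreter's compiled-in OpenSSL defaults.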
Privacy highlights
We are transparent about what we collect, why, and how we share it. The documents below cover the public-facing details:
Privacy Notice
Categories of data, purposes, legal bases, your rights, and how to exercise them.
Subprocessor List
Current third parties that process customer personal data on Neuroscale’s behalf.
Cookie Notice
Cookies and similar technologies used on Neuroscale properties.
Customer data
Customer-uploaded content (resumes, ATS data, prompts, and other Customer-submitted material) is logically segregated and access is bounded to the tenant that owns it. Use of Customer Content is governed by the executed Master Agreement and Data Processing Addendum.

Where Customer Content is used to train, fine-tune, evaluate, or improve Neuroscale’s own AI models, it is first transformed into Deidentified Data under Neuroscale’s Deidentification Standard (direct-identifier redaction, quasi-identifier generalization, k-anonymity ≥ 10, sensitive-attribute removal, differentially-private training, and a post-training reidentification audit — see AI Acceptable Use Policy → Deidentification standard). Raw Customer Content is never used in training; only the resulting Deidentified Data is, and this applies to every subscription tier.

Tier-based controls govern training-use, not deidentification: Free-Tier Customers cannot opt out of training-use; paid-tier Customers may opt out of training-use via the Customer Admin settings or by writing to privacy@neuroscale.ai. Individual Candidates may also request opt-out from training by writing to privacy@neuroscale.ai.

Neither Customer Content nor Deidentified Data is provided to any third-party AI provider for that provider’s training. Neuroscale is not a data broker under California, Texas, Vermont, Oregon, or analogous state laws. Where Neuroscale uses third-party sub-processors, see the Subprocessor List.
AI transparency, bias-audit, and disparate-impact testing
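The corpus-level guarantee above (k-anonymity ≥ 10 over quasi-identifiers) reduces to a simple check: no combination of quasi-identifier values may be shared by fewer than 10 records. A minimal sketch; the quasi-identifier columns (`age_band`, `region`, `job_family`) and the sample data are illustrative assumptions, not Neuroscale's actual schema:

```python
from collections import Counter


def min_equivalence_class(records: list[tuple]) -> int:
    """Size of the smallest equivalence class over the quasi-identifier
    tuple. A corpus satisfies k-anonymity for any k <= this value."""
    counts = Counter(records)
    return min(counts.values()) if counts else 0


def satisfies_k_anonymity(records: list[tuple], k: int = 10) -> bool:
    """True if every quasi-identifier combination appears at least k times."""
    return min_equivalence_class(records) >= k


# Hypothetical deidentified corpus: each record is (age_band, region, job_family).
corpus = (
    [("30-39", "US-West", "Engineering")] * 12
    + [("40-49", "US-East", "Sales")] * 10
)
print(satisfies_k_anonymity(corpus, k=10))  # True: smallest class has 10 records
```

Adding a single record with a unique quasi-identifier combination would drop the minimum class size to 1 and fail the check, which is why generalization (e.g. age bands rather than ages) precedes this verification step.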
Current status (May 9, 2026): No Neuroscale-trained or Neuroscale-fine-tuned model is in production. Customer-facing AI features currently rely on third-party foundation models from approved enterprise providers (Anthropic, OpenAI, xAI, Cerebras), each of which operates under contractual prohibitions on training on Neuroscale-submitted inputs. The procedures and public notices below are stood up in advance of the first Neuroscale-trained or Neuroscale-fine-tuned model entering production. Neuroscale publishes the following resources for Customers, Candidates, and regulators:
- The AI Training-Data Transparency Notice — per-model summaries of training-data sources, deidentification method, reidentification-audit result, and bias-audit summary, satisfying California AB 2013 (Cal. Civ. Code §§22610 et seq., effective Jan 1, 2026) and the Colorado AI Act developer-side public statement (C.R.S. §6-1-1703(1)(a)).
- The Employment-AI Bias-Audit and Disparate-Impact Testing Procedure — Neuroscale’s developer-side procedure for any feature used in consequential employment decisions: independent-auditor sign-off, 4/5ths adverse-impact testing per the EEOC and the Uniform Guidelines on Employee Selection Procedures (29 C.F.R. §1607), subgroup performance parity, and quarterly re-audit, in support of NYC Local Law 144, the Colorado AI Act, Illinois HB 3773, Texas TRAIGA, and analogous state laws.
- The Reidentification Audit Procedure — corpus-level reidentification testing (k-anonymity verification, linkage attack, singling-out test) and model-level memorization testing (membership-inference, regurgitation probing, PII-leakage scan, adversarial extraction) before any production deployment of a Neuroscale-trained or Neuroscale-fine-tuned model.
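The 4/5ths (80%) adverse-impact test referenced above compares each group's selection rate against the highest group's rate; a ratio below 0.8 is the Uniform Guidelines' threshold for evidence of adverse impact. A minimal sketch of the arithmetic; group labels and counts are illustrative, not audit data:

```python
def selection_rates(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """groups maps label -> (selected, total); returns each group's selection rate."""
    return {g: sel / tot for g, (sel, tot) in groups.items()}


def adverse_impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Impact ratio = group rate / highest group rate (29 C.F.R. 1607.4(D))."""
    rates = selection_rates(groups)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}


def passes_four_fifths(groups: dict[str, tuple[int, int]],
                       threshold: float = 0.8) -> bool:
    """True only if every group's impact ratio meets the 4/5ths threshold."""
    return all(r >= threshold for r in adverse_impact_ratios(groups).values())


# Hypothetical outcome counts: 60/100 selected vs 45/100 selected.
demo = {"group_a": (60, 100), "group_b": (45, 100)}
print(passes_four_fifths(demo))  # False: 0.45 / 0.60 = 0.75 < 0.8
```

A full audit pairs this ratio with statistical-significance and subgroup-parity tests, since the 4/5ths rule alone is sensitive to small sample sizes.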
EU and UK readiness
Neuroscale’s documentation set is built to support EU and UK rollout when commercial readiness aligns with the regulatory build-out. As of the effective date of this page, no Neuroscale feature has been placed on the EU market. The General Counsel re-confirms tier and obligation set before any EU launch and on each material feature change. The applicable framework includes:
- EU AI Act (Regulation (EU) 2024/1689) — Neuroscale’s recruitment-AI features are high-risk under Annex III(4) (employment / worker management). Neuroscale, as the Provider, performs the obligations of Arts. 9–17 (risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy/robustness/cybersecurity, quality-management system) and the conformity-assessment, CE-marking, EU-database-registration, and post-market monitoring obligations of Arts. 43–73 before placement on the EU market. Where a Neuroscale-trained model qualifies as a general-purpose AI model, the Art. 53(1)(d) “sufficiently detailed summary of training content” attaches to the per-model row in the AI Training-Data Transparency Notice.
- GDPR and UK GDPR — Neuroscale operates as Processor under DPA Section 4 documented instructions. Customer-Controller obligations under Arts. 6, 9, 14, 22, 28, and 35 are addressed in the Privacy Notice and Section 7 of the Terms of Service; developer-side technical inputs for Customer DPIAs are produced by the Employment-AI Bias-Audit Procedure and the Reidentification Audit Procedure.
- Cross-border transfers — current reliance on EU-US DPF participation, Standard Contractual Clauses (Implementing Decision (EU) 2021/914), and the UK International Data Transfer Addendum is described in the Cross-Border Transfers Procedure.
- UK ICO guidance on AI in recruitment (2024) — UK-deployed features apply the ICO’s AI-and-data-protection toolkit; UK-specific candidate-notice templates are part of the Customer-facing toolkit referenced in the Employment-AI Bias-Audit Procedure → Customer-facing toolkit.
Vulnerability disclosure
If you believe you’ve found a security vulnerability in a Neuroscale product or service, please report it to security@neuroscale.ai. Encrypted submissions are welcome; request our PGP key in your initial message. We commit to:
- Acknowledge receipt within 2 business days.
- Provide an initial assessment within 5 business days.
- Keep you updated on remediation status until the issue is resolved.
- Recognize researchers who report in good faith and follow this policy.
In return, we ask that you:
- Give us a reasonable time to investigate and remediate before any public disclosure.
- Avoid privacy violations, data destruction, service degradation, or interruption of service to other customers.
- Only test against accounts you own or have explicit permission to test.
- Do not exploit a vulnerability beyond the minimum necessary to demonstrate it.
Sub-processors
Neuroscale uses a limited set of vetted third parties to deliver the service. The current list, the data they process, and their location are published on our Subprocessor List. We commit to providing at least 30 days advance notice before adding a new sub-processor that processes customer personal data (or 14 days where exigent circumstances exist), with a right to object as set out in the DPA. Customers can subscribe to change notifications via the Subprocessor List page.
Contact
| Topic | Where |
|---|---|
| Report a security vulnerability or incident | security@neuroscale.ai |
| Privacy questions, DSR / DSAR requests | privacy@neuroscale.ai |
| Contracts, DPA, legal questions | legal@neuroscale.ai |
| Request our SOC 2 report (under NDA) | trust@neuroscale.ai |
Neuroscale’s internal security and compliance program is documented and regularly reviewed. Customers and prospects under NDA may request relevant excerpts of our internal documentation as part of a security review.