This document is the operational procedure that backs Neuroscale's developer-side obligations for any AI system used to make, or substantially assist in making, consequential decisions in employment. It is triggered by the AI Acceptable Use Policy → Bias, fairness, and explainability; Section 7 of the Terms of Service; the AI Training-Data Transparency Notice; and the AI Model Registry feature-card requirement.

Statutory triggers

This procedure shall be performed for any Neuroscale-built or Neuroscale-fine-tuned AI feature that is used by a Customer to substantially assist or replace human discretion in candidate sourcing, screening, ranking, scoring, qualification, interview shortlisting, offer recommendation, or any other consequential decision in employment within the meaning of:
  • NYC Local Law 144 of 2021 (Automated Employment Decision Tools; Title 20 §§20-870 to 20-874 of the NYC Administrative Code, and 6 RCNY §§5-300 to 5-304) — bias audit by independent auditor within 12 months of use, public bias-audit summary, candidate notice ≥ 10 business days before use, alternative-selection-process availability.
  • Colorado AI Act (C.R.S. §§6-1-1701 et seq., effective February 1, 2026) — high-risk AI system in employment is in scope; risk-management program, impact assessments, public statement (C.R.S. §6-1-1703(1)(a)), and Attorney-General notification on algorithmic-discrimination findings (C.R.S. §6-1-1703(7)).
  • Illinois HB 3773 (effective Jan 1, 2026, amending 775 ILCS 5/2-103.1) — prohibition on use of AI that has the effect of subjecting employees to discrimination on the basis of a protected class.
  • Illinois AI Video Interview Act (820 ILCS 42) — only triggered if the feature analyzes video interviews; Neuroscale does not currently offer such a feature, and any future such feature requires consent and demographic reporting before launch.
  • Maryland HB 1202 (Md. Lab. & Empl. §3-717) — only triggered if facial-recognition technology is used in interviews; Neuroscale does not currently offer such a feature.
  • Texas Responsible AI Governance Act (“TRAIGA”) — risk-based obligations on high-risk AI systems including employment.
  • California ADMT regulations (CPPA, finalized 2025 under CPRA §1798.185(a)(16)) — pre-use notice, opt-out, and access mechanisms for automated decision-making technology.
  • Utah AI Policy Act (Utah Code §§13-72-101 et seq.) — disclosure where generative AI is used in a material way.
  • EEOC technical assistance documents on algorithmic discrimination under Title VII (May 2023), the ADA (May 2022), and the ADEA — disparate-impact testing under the Uniform Guidelines on Employee Selection Procedures (29 C.F.R. §1607).
  • EU AI Act (Regulation (EU) 2024/1689). A Neuroscale feature placed on the EU market that substantially assists or replaces discretionary decision-making in recruitment, screening, ranking, or candidate evaluation is a high-risk AI system under Annex III(4). Neuroscale, as provider within the meaning of Art. 3(3), performs the developer-side obligations of Arts. 9–17, including: risk-management system across the lifecycle (Art. 9); data-governance obligations for training, validation, and testing data with bias-examination and representativeness (Art. 10); technical documentation per Annex IV; record-keeping and logging (Art. 12); transparency to deployers (Art. 13); human-oversight design (Art. 14); accuracy, robustness, and cybersecurity (Art. 15); quality-management system (Art. 17); conformity assessment, CE marking, and EU declaration of conformity (Arts. 43–48); EU-database registration (Art. 49); post-market monitoring and serious-incident reporting (Arts. 72–73); and authorized-EU-representative designation (Art. 22), since Neuroscale has no EU establishment. As of the effective date of this procedure, no Neuroscale feature is placed on the EU market; the General Counsel re-confirms tier and obligation set before any EU launch.
  • GDPR / UK GDPR Arts. 6, 9, 14, 22, 28, 35. The bias-audit interlocks with: Art. 22 sole-automated-decision rights (audit verifies that human-oversight controls are in place where Customer relies on Art. 22(2) bases); Art. 35 DPIA (Customer-side, with Neuroscale providing the developer-side technical inputs); Art. 14 indirect-collection notice obligations on Customer (deployer); and Art. 9 special-category-data prohibitions (Stage 4 of the Deidentification Standard removes special-category data before training).
  • UK ICO guidance on AI in recruitment (2024). UK-deployed employment-AI features apply the ICO’s AI-and-data-protection toolkit; the Customer-facing toolkit includes a UK-specific candidate-notice template that satisfies UK GDPR Art. 14 and the ICO’s recruitment-AI expectations.
A scope assessment per this procedure shall be recorded in the AI Review Log before any feature launch and on each annual re-review.

Roles and responsibilities

  • Developer (Neuroscale): Sponsor and pay for the bias audit; provide model documentation, training-corpus aggregate statistics, and evaluation outputs to the independent auditor; publish the bias-audit summary; cooperate with Customer-side compliance.
  • Deployer (Customer / employer): Provide the candidate notice required by NYC LL 144 ≥ 10 business days before use; offer an alternative selection process where required; conduct the deployer-side impact assessment per Colorado AI Act §6-1-1703(2); maintain employment records per state-law obligations.
  • Independent Auditor: A natural or legal person (or group) that has not been involved in the use, development, or distribution of the AEDT; is not employed by, contracted with, or otherwise under the influence of Neuroscale or the Customer beyond the audit engagement; and meets the independence criteria of 6 RCNY §5-301.
  • Neuroscale General Counsel: Engage and contract the Independent Auditor; review audit results; determine whether algorithmic-discrimination findings trigger notice to a state Attorney General.
  • Neuroscale CTO: Technical owner of the model and of the data pipeline that feeds the audit; signs off on remediation.
  • Neuroscale CISO: Co-signs the audit report and coordinates with the Reidentification Audit Procedure where the same model is in scope for both audits.

When the audit runs

  • Before any production deployment of a new employment-AI feature. Tests: all sections below, with mandatory PASS on the disparate-impact gates. Timeline: audit completion required before customer onboarding.
  • Annually for any in-production employment-AI feature. Tests: all sections below. Timeline: within 12 months of the last audit (NYC LL 144 §20-871(a) hard deadline).
  • On material change to the model identifier or version, training-corpus composition, or deidentification parameters. Tests: all sections below. Timeline: before the changed model enters production.
  • On a credible algorithmic-discrimination report from a Customer, Candidate, employee, regulator, or independent third party. Tests: targeted Sections C and E. Timeline: within 30 days of the report.
  • On entry into a new state with an applicable AI-in-employment law (e.g., a Customer first deploying Arbi in NYC, Colorado, Illinois, Maryland, Texas, or Utah). Tests: re-scope of Section A; jurisdictional addenda in Sections B and D. Timeline: before deployment in that state.

A. Scope and inputs

Each audit cycle shall be scoped to a (model, deployment surface, jurisdictional scope) tuple and shall consume the following inputs (an illustrative scope-record sketch follows the list):
  1. The frozen model artifact and dataset card from the AI Model Registry.
  2. The most recent Reidentification Audit Report (Reidentification Audit Procedure).
  3. A representative evaluation dataset with demographic attributes attached. Because the training corpus is deidentified and stripped of protected-class data, the audit shall use an external, demographically labeled benchmark (constructed from licensed sources, synthetic data with controlled demographic distribution, or a Customer-supplied evaluation cohort under a written audit-only data-sharing agreement).
  4. The Customer use-case description (which step of the hiring funnel the feature substantially assists; geographic deployment).
  5. The independent-auditor engagement letter and statement of independence per 6 RCNY §5-301.
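
For illustration, a minimal sketch of how a scope record might be captured in the AI Review Log. The AuditScope type, its field names, and the example values are hypothetical, not an existing Neuroscale schema; Python is used here and in the sketches that follow.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass(frozen=True)
class AuditScope:
    """Hypothetical AI Review Log record for one audit cycle's scope tuple."""
    model_id: str                     # frozen artifact from the AI Model Registry
    model_version: str
    deployment_surface: str           # e.g., which product feature or API
    jurisdictions: tuple[str, ...]    # e.g., ("NYC", "CO", "IL")
    benchmark_source: str             # licensed / synthetic / customer-supplied cohort
    reid_audit_report_id: str         # most recent Reidentification Audit Report
    auditor_engagement_letter_id: str
    scoped_on: date = field(default_factory=date.today)

# Example entry, recorded before launch and on each annual re-review:
scope = AuditScope(
    model_id="example-screening-model",
    model_version="2.3.1",
    deployment_surface="candidate-ranking API",
    jurisdictions=("NYC", "CO"),
    benchmark_source="synthetic, controlled demographic distribution",
    reid_audit_report_id="REID-2026-Q1",
    auditor_engagement_letter_id="ENG-2026-004",
)
```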

B. Disparate-impact testing (EEOC and state-law analog)

Performed against the model artifact using the demographically labeled benchmark.

B.1 Adverse-impact ratio (4/5ths rule)

For each protected class identified by Title VII (race, color, religion, sex, national origin), the ADEA (age 40+), the ADA (disability where signal is available), and any state-law-recognized class (e.g., California protected classes per FEHA), compute:
Impact ratio = selection rate (focal group) ÷ selection rate (reference group),
where the reference group is the category with the highest selection rate. A minimal computation sketch follows the pass criteria.
Pass criteria:
  • Ratio ≥ 0.80 for every focal-vs-reference comparison; and
  • Where the ratio falls below 0.80, the model owner must (i) demonstrate validity per the Uniform Guidelines on Employee Selection Procedures (29 C.F.R. §§1607.5–1607.15), (ii) demonstrate the model is the least-discriminatory alternative reasonably available, and (iii) record the validation studies in the model card.
  • A failure that is not validated halts production deployment and triggers Section E.2.
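
A minimal computation sketch for the B.1 impact ratios, assuming a pandas DataFrame with one row per assessed candidate; the column names and the impact_ratios helper are illustrative, not a fixed audit-tooling API.

```python
import pandas as pd

def impact_ratios(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.Series:
    """Adverse-impact (4/5ths) ratios per group against the most-selected group.

    Expects one row per assessed candidate, with `selected_col` holding a 0/1
    selection flag.
    """
    rates = df.groupby(group_col)[selected_col].mean()  # selection rate per group
    return rates / rates.max()  # reference group = highest selection rate

# Usage: any ratio below 0.80 trips the B.1 gate and requires a validity defense.
# failing = impact_ratios(benchmark_df, "race", "selected").loc[lambda r: r < 0.80]
```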

B.2 Subgroup performance parity

For each protected-class subgroup, compute:
  • True-positive rate (recall), false-positive rate, and area under the ROC curve.
  • Equalized-odds difference and demographic-parity difference per Hardt et al. (2016); both metrics are sketched after the pass criteria below.
Pass criteria:
  • No subgroup TPR or FPR differs from the overall population by more than 5 percentage points.
  • No equalized-odds difference exceeds 0.10 across protected classes.
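
A manual sketch of the B.2 arithmetic under the Hardt et al. (2016) definitions, taking the equalized-odds difference as the larger of the maximum TPR gap and maximum FPR gap across groups, and the demographic-parity difference as the maximum selection-rate gap. A production audit would more likely use a vetted fairness library; the function and argument names here are illustrative.

```python
import numpy as np

def subgroup_parity(y_true, y_pred, groups):
    """Per-group TPR/FPR/selection rate, plus equalized-odds and
    demographic-parity differences. `y_pred` is the 0/1 selection decision."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    stats = {}
    for g in np.unique(groups):
        m = groups == g
        stats[g] = {
            "tpr": y_pred[m & (y_true == 1)].mean(),  # recall within the subgroup
            "fpr": y_pred[m & (y_true == 0)].mean(),
            "sel": y_pred[m].mean(),                  # selection rate
        }
    tprs = [s["tpr"] for s in stats.values()]
    fprs = [s["fpr"] for s in stats.values()]
    sels = [s["sel"] for s in stats.values()]
    eo_diff = max(max(tprs) - min(tprs), max(fprs) - min(fprs))
    dp_diff = max(sels) - min(sels)
    # B.2 gate inputs: compare each subgroup's tpr/fpr to these overall rates
    # (5-percentage-point limit) and eo_diff to the 0.10 threshold.
    overall = {"tpr": y_pred[y_true == 1].mean(), "fpr": y_pred[y_true == 0].mean()}
    return stats, overall, eo_diff, dp_diff
```

Per-group AUC, where needed, can be computed the same way by slicing the score array per subgroup and applying a standard ROC-AUC routine.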

B.3 Statistical-significance tests

Where sample sizes permit, compute:
  • Two-proportion z-test on selection-rate differences with α = 0.05.
  • Standardized mean-difference (Cohen’s d) on ranking-score distributions across subgroups. Both tests are sketched after the pass criteria.
Pass criteria:
  • No statistically significant difference at α = 0.05 that is not validity-justified per B.1.
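
A sketch of the two B.3 statistics, assuming binary selection outcomes and continuous ranking scores; the helper names are illustrative. The z-test uses the pooled-proportion standard error, and Cohen's d uses the pooled standard deviation.

```python
import numpy as np
from scipy.stats import norm

def two_prop_ztest(sel_a: int, n_a: int, sel_b: int, n_b: int) -> float:
    """Two-sided p-value for a two-proportion z-test on selection rates.

    sel_a/sel_b are selected counts; n_a/n_b are totals assessed per group.
    """
    p_pool = (sel_a + sel_b) / (n_a + n_b)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (sel_a / n_a - sel_b / n_b) / se
    return 2 * norm.sf(abs(z))

def cohens_d(scores_a, scores_b) -> float:
    """Standardized mean difference of ranking scores between two subgroups."""
    a, b = np.asarray(scores_a, float), np.asarray(scores_b, float)
    pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                        / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled_sd

# B.3 gate: flag any comparison with p < 0.05 that lacks a B.1 validity defense.
```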

B.4 Intersectional analysis

Apply B.1 and B.2 to intersectional subgroups (e.g., Black women, older men, disabled applicants of color) where the benchmark provides ≥ 30 records per intersection. Singletons and small cells are reported but not gated.
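
A sketch of the B.4 cell construction, assuming the benchmark carries one column per protected attribute; the MIN_CELL constant reflects the ≥ 30-record gating threshold above, and the helper name is illustrative.

```python
import pandas as pd

MIN_CELL = 30  # intersections below this size are reported but not gated (B.4)

def intersectional_cells(df: pd.DataFrame, attrs: list[str]) -> pd.DataFrame:
    """Enumerate intersectional subgroups (e.g., race x sex x age band) and
    mark which cells are large enough to gate under B.4."""
    counts = df.groupby(attrs).size().rename("n").reset_index()
    counts["gated"] = counts["n"] >= MIN_CELL
    return counts

# Gated cells are then run back through the B.1 and B.2 computations above.
```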

C. Algorithmic-discrimination monitoring (Colorado AI Act, Illinois HB 3773)

C.1 Documented risk-management program

Per C.R.S. §6-1-1702(1), the model owner shall maintain, and the audit shall verify, a written risk-management program covering:
  • Identification of reasonably foreseeable risks of algorithmic discrimination.
  • Mitigation measures (data preprocessing, fairness-aware training objectives, post-hoc calibration).
  • Validation testing per Section B.
  • Ongoing monitoring of production output for drift (one illustrative drift statistic is sketched after this list).
  • A documented incident-response path that ties to the Incident Response Plan.
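
For the ongoing-drift item above, one illustrative statistic is the population stability index (PSI) between the audit-time and production score distributions. The procedure does not mandate PSI specifically, and the threshold noted in the comment is a common rule of thumb, not a requirement of this document.

```python
import numpy as np

def population_stability_index(expected, actual, bins: int = 10) -> float:
    """PSI between audit-time scores (`expected`) and production scores (`actual`).

    Rule of thumb: PSI > 0.2 warrants investigation. Production scores outside
    the audit-time range are dropped by np.histogram in this sketch.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) on empty bins.
    e_frac, a_frac = np.clip(e_frac, 1e-6, None), np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))
```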

C.2 Public statement (Colorado §6-1-1703(1)(a))

Verify that the AI Training-Data Transparency Notice per-model row contains:
  • A description of the system’s intended use and operation in plain language.
  • Reasonably foreseeable impacts and limitations.
  • Categories of data on which the system was trained (post-deidentification).
  • Bias-audit summary link.

C.3 Algorithmic-discrimination disclosure

If the audit identifies that the system has caused, or is reasonably likely to have caused, algorithmic discrimination within the meaning of C.R.S. §6-1-1701(1), the General Counsel shall:
  • Notify the Colorado Attorney General within the time period required by C.R.S. §6-1-1703(7).
  • Notify any affected Customers and (where the Customer is the controller) cooperate on Candidate notification.
  • Notify any other state Attorney General to whom an analogous notice obligation applies.
  • Halt production deployment in the affected jurisdiction(s) until remediation.

D. NYC Local Law 144 deliverables

D.1 Independent-auditor sign-off

The Independent Auditor’s report shall, at minimum, include the items required by 6 RCNY §5-302:
  • The date of the audit.
  • The selection rate and impact ratio for each category required by 6 RCNY §5-303.
  • The number of individuals the AEDT assessed that fall within an unknown category.
  • A description of the source and explanation of the data used.
  • The auditor’s independence statement per 6 RCNY §5-301.

D.2 Public bias-audit summary

Within 60 days of audit completion, Neuroscale shall publish on the AI Training-Data Transparency Notice a public summary that complies with 6 RCNY §5-304:
  • Date of the most recent bias audit.
  • Distribution date of the AEDT to which the audit applies.
  • Selection-rate and impact-ratio data per the auditor’s report.

D.3 Candidate notice template

Neuroscale shall maintain a Customer-facing notice template that the Customer may adapt to satisfy NYC LL 144 §20-871(b)(1) (≥ 10 business days before use) and the alternative-selection-process requirement of §20-871(b)(2). The template lives in the customer-success knowledge base and is referenced from Section 7 of the Terms of Service and the California Applicant and Personnel Privacy Notice. Customers using Arbi in NYC are responsible for delivering the notice; Neuroscale will provide model-documentation excerpts on request.

E. Audit results, escalation, and recordkeeping

E.1 Decision matrix

  • All Section B, C, and D criteria PASS: Independent-auditor sign-off; publish the summary on the AI Training-Data Transparency Notice; record in the AI Model Registry and AI Review Log; production deployment continues.
  • FAIL on B.1 (4/5ths rule) without a documented and accepted validity defense: Halt deployment in the affected jurisdiction; retrain or remediate; re-audit.
  • FAIL on B.2 (subgroup parity) or B.3 (statistical significance): P1 finding. Diagnose the root cause (training-data composition, objective function, calibration); remediation plan with auditor sign-off before re-deployment.
  • FAIL on C.1 (risk-management program incomplete) or C.2 (public statement missing): Halt deployment until the program is documented and the public statement is posted.
  • Identification of algorithmic discrimination per C.3: P0 incident. Trigger the Incident Response Plan; the General Counsel determines AG-notification obligations within the applicable statutory windows; halt deployment in the affected jurisdiction.
  • FAIL on D.1 (auditor independence): Replace the auditor and re-run the audit with a qualified independent auditor before deployment.
  • FAIL on D.2 (publication missed): P1 finding. Publish within 7 days of detection; document the lapse in the audit log.

E.2 Recordkeeping

Each audit cycle produces:
  • A signed audit report (Independent Auditor + Neuroscale GC + CTO + CISO co-sign), filed in the AI Model Registry entry for the affected model.
  • The frozen evaluation dataset, model artifact hash, audit-tooling versions, and parameter values tested (hash and manifest generation are sketched at the end of this section).
  • The independent-auditor independence statement.
  • Retention: 7 years per the Records Retention Schedule, as evidence under NYC LL 144 (annual cycle), Colorado AI Act (developer obligations), Illinois HB 3773 (record-keeping), and EEOC (Uniform Guidelines record-keeping at 29 C.F.R. §1607.4).
Where the same model is in scope for the Reidentification Audit Procedure, the two audits shall be conducted in close sequence and their reports cross-linked. A model may not be deployed if either audit fails.
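
A sketch of how the model-artifact hash and a tooling manifest might be produced for the audit file; the manifest fields and helper names are illustrative, not a fixed Neuroscale schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def sha256_file(path: str) -> str:
    """SHA-256 of a frozen artifact, streamed to handle large model files."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def audit_manifest(model_path: str, eval_dataset_path: str,
                   tool_versions: dict) -> str:
    """JSON manifest capturing the E.2 evidence items for the audit record."""
    return json.dumps({
        "model_artifact_sha256": sha256_file(model_path),
        "eval_dataset_sha256": sha256_file(eval_dataset_path),
        "audit_tooling": tool_versions,  # e.g., {"pandas": "2.2.1", ...}
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }, indent=2)
```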

F. Customer-facing toolkit

Neuroscale maintains the following customer-facing artifacts to support deployer-side compliance, available from the customer-success knowledge base or on request to legal@neuroscale.ai:
  • NYC LL 144 candidate-notice template — adaptable Customer language to satisfy §20-871(b)(1).
  • Colorado AI Act consumer-disclosure template — adaptable language for deployer-side disclosure under C.R.S. §6-1-1703(4).
  • Illinois HB 3773 employee/applicant notice template.
  • Maryland HB 1202 facial-recognition consent template (only relevant if a future Neuroscale feature uses facial recognition; not currently in scope).
  • California ADMT pre-use notice template — adaptable language for CPRA §1798.185(a)(16) ADMT obligations.
  • Utah AI Policy Act material-use disclosure template.
  • Texas TRAIGA risk-disclosure template.
  • GDPR Art. 14 indirect-collection notice template — adaptable language for Customer-Controllers serving EU and UK candidates whose data is included in Customer Content. Covers the items required by Art. 14(1)–(2): identity and contact details of the controller, purposes and lawful basis, categories of data, recipients (including Neuroscale as processor), retention, data-subject rights, source, and existence of automated decision-making.
  • GDPR Art. 22 automated-decision-making notice and rights template — adaptable language for Customer-Controllers relying on an Art. 22(2) basis; includes the human-intervention, point-of-view, and contestation safeguards required by Art. 22(3).
  • UK GDPR / ICO recruitment-AI candidate-notice template — adaptable language aligned to the UK ICO’s 2024 AI-in-recruitment guidance.
  • Bias-audit summary excerpt — Customer-facing version of the most recent independent-auditor report (1–2 pages) that Customers may distribute to their candidates or display in their hiring-process disclosures.

Cross-references

Version history

  • Version 1.0 (May 9, 2026): Initial version. Codifies developer-side obligations under NYC Local Law 144, the Colorado AI Act, Illinois HB 3773, EEOC technical guidance, and analogous state laws. Author: Cameron Wolfe. Approved by: Ishan Jadhwani.