Policy Owner: CISO
Effective Date: May 8, 2026
Reviewed: Annually
Next Review: May 8, 2027
Purpose
To prepare Neuroscale for service outages caused by factors beyond our control (natural disasters, man-made events) and to restore services as fully as possible within the shortest achievable time frame.
Scope
All business-critical Neuroscale IT systems. Applies to all Neuroscale employees and relevant external parties (consultants, contractors). The following are excluded from BC/DR scope:
- Loss of availability of a production hosting service provider (AWS or Vultr) — handled per the relevant cloud provider’s SLAs and the Incident Response Policy. Workloads designed for cross-cloud DR may fail over from one provider to the other; workloads pinned to one cloud rely on that cloud’s SLA.
- Loss of availability of Neuroscale satellite offices — handled as incidents.
Policy
In the event of a major disruption to production services or a disaster affecting the availability or security of a Neuroscale office, executive staff and senior managers determine mitigation actions. A disaster recovery test, including a test of backup restoration, is performed annually. The test plan, execution log, restoration evidence, and an after-action report (gaps identified, owners, target close dates) are filed in the SharePoint evidence library used for SOC 2 / ISO 27001 audits, and the after-action items are tracked to closure in Linear. Continuity of information security is considered alongside operational continuity. For information-security events or incidents, refer to the Incident Response Policy.
Alternate work facilities
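Restoration evidence for the annual test can include an automated integrity check. A minimal sketch, assuming a simple checksum manifest recorded at backup time — the manifest format and file names are hypothetical illustrations, not part of this policy:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(manifest: dict[str, str], restore_dir: Path) -> list[str]:
    """Compare restored files against checksums recorded at backup time.

    Returns a list of failures (missing or corrupted files) suitable for
    the after-action report; an empty list means the restore verified.
    """
    failures = []
    for name, expected in manifest.items():
        restored = restore_dir / name
        if not restored.exists():
            failures.append(f"{name}: missing from restore")
        elif sha256_of(restored) != expected:
            failures.append(f"{name}: checksum mismatch")
    return failures
```

The empty-list result (or the failure list) can be filed directly as the restoration evidence described above.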
If a Neuroscale office becomes unavailable, all staff work remotely from their homes or any safe location.
Communications and escalation
Executive staff and senior managers are notified of any disaster affecting Neuroscale facilities or operations. Communications use any available channel — Slack, email, phone, video. Key contacts and the on-call schedule are maintained at Key contacts & on-call — Better Stack.
Roles and responsibilities
| Role | Responsibility |
|---|---|
| CISO | Leads BC/DR efforts to mitigate losses and recover the corporate network and information systems. |
| Function Leads (CTO, CFO, CHRO, etc.) | Communications with departmental staff and any actions needed to maintain continuity of business functions. Communicate regularly with executive staff and IT. |
| Managers | Communicate with direct reports; help staff continue working from alternative locations. |
| CEO | External and customer communications about disaster or BC actions, in conjunction with the CFO and General Counsel. |
| CTO | Maintains continuity of Neuroscale services to customers during a disaster, in conjunction with the CISO and Engineering on-call. |
| CHRO | Internal communications to employees; physical health and safety; works with IT to maintain physical security at the office. |
Continuity of critical services
| Key business process | Continuity strategy |
|---|---|
| Customer (production) service delivery | Rely on AWS and Vultr availability commitments and SLAs. Cross-cloud DR is enabled where the architecture supports it; workloads pinned to a single cloud rely on that cloud’s regional failover. |
| Internal IT operations | Not dependent on HQ. Critical data is backed up to alternate locations. |
| Email | Microsoft 365 (Outlook), distributed by design; rely on provider SLAs. |
| Finance, Legal, HR | Vendor-hosted SaaS applications. |
| Sales & Marketing | Vendor-hosted SaaS applications. |
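The cross-cloud row above implies a failover decision: keep serving from the primary cloud while it is healthy, and fail over only when the alternate cloud is confirmed up. A minimal sketch of that decision — the endpoint URLs and function names are hypothetical illustrations, not Neuroscale tooling:

```python
import urllib.request
import urllib.error

# Hypothetical health-check endpoints; real URLs would live in the
# on-call runbook, not in this policy.
ENDPOINTS = {
    "aws": "https://aws.status.example.internal/healthz",
    "vultr": "https://vultr.status.example.internal/healthz",
}

def is_healthy(url: str, timeout: float = 5.0) -> bool:
    """Probe a health endpoint; any error or timeout counts as down."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def choose_serving_cloud(primary: str, secondary: str,
                         health: dict[str, bool]) -> str:
    """Stay on the primary while it is up; fail over only when the
    primary is down and the secondary is confirmed healthy."""
    if health[primary]:
        return primary
    if health[secondary]:
        return secondary
    # Neither cloud is healthy: escalate per the Incident Response Policy.
    raise RuntimeError("both clouds unhealthy; escalate to on-call lead")
```

Consistent with the Policy section, executing a failover remains a human on-call decision; a check like this only informs it.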
RTO & RPO
Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO) are tracked in the RTO / RPO Matrix.
Plan activation
This BC/DR plan is automatically activated in the event of:
- Loss or unavailability of a Neuroscale office, or
- A natural disaster (severe weather, regional power outage, earthquake) affecting the regions where Neuroscale staff are concentrated.
Scenarios & procedures
HQ offline (power and/or network)
- CRM, telephony, video conferencing, and corporate email are unaffected.
- HQ staff offline ~30–60 minutes; remote staff unaffected.
- Procedure: HQ staff relocate to home offices, verify connectivity, resume normal operations.
SaaS tools down
- Telephony down → notify customers to use support portal/email; staff use mobile/landlines.
- Email down → staff manually manage case-related communications via alternate accounts.
- CRM down → notify customers; activate spreadsheet-based tracking.
- Video conferencing down → use alternate service.
Production hosting region outage
See Incident Response Policy. Coordinate with the affected cloud provider’s support — AWS Support for AWS-side outages, Vultr Support for Vultr-side outages — and execute multi-region failover where applicable. Where a workload supports cross-cloud DR (per the RTO/RPO Matrix), the on-call engineer may also fail over from the affected cloud to the alternate cloud.
Exceptions
Requests for exceptions must be submitted to the CISO for approval.
Violations & enforcement
Report violations to the CISO. Violations may result in disciplinary action up to and including termination.
Version history
| Version | Date | Description | Author | Approved by |
|---|---|---|---|---|
| 1.0 | May 8, 2026 | Initial version | Cameron Wolfe | Ishan Jadhwani |