Cyber incidents rarely arrive with a calendar invite. They show up as a burst of suspicious traffic, an admin account behaving oddly, a payment workflow failing, or a customer reporting a message that never came from your brand. When the signal appears, teams have two jobs running in parallel: contain the threat and meet regulatory duties.
CERT-In’s Directions (issued 28 April 2022 under the IT Act) made that second job time-bound. Many organisations are still building muscle memory around it, especially across hybrid IT, cloud platforms, SaaS, remote work endpoints, and third-party dependencies.
The six-hour rule, explained in operational terms
For specified categories of cyber security incidents, CERT-In expects a report within six hours of the incident being noticed or brought to the organisation's attention. The detail that matters is the trigger: the clock starts at awareness, not at completion of a root-cause analysis.
This changes how incident response is designed. If your detection and triage are not built for speed, the reporting timeline becomes stressful even when the incident itself is manageable.
One practical takeaway is to treat six hours as a coordination window, not an investigation window.
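As a minimal sketch of that coordination window, the deadline can be derived mechanically from the moment of awareness (assuming Python and timezone-aware timestamps; the timestamps here are made up for illustration):

```python
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=6)  # CERT-In reporting window

def reporting_deadline(noticed_at: datetime) -> datetime:
    """Return the latest time a report can be submitted.

    The clock starts at awareness (detection or notification),
    not at the end of the investigation.
    """
    if noticed_at.tzinfo is None:
        raise ValueError("use a timezone-aware timestamp")
    return noticed_at + REPORTING_WINDOW

# Example: alert noticed at 02:10 IST means the report is due by 08:10 IST
ist = timezone(timedelta(hours=5, minutes=30))
noticed = datetime(2024, 5, 1, 2, 10, tzinfo=ist)
print(reporting_deadline(noticed).isoformat())  # 2024-05-01T08:10:00+05:30
```

Publishing the computed deadline into the incident channel at declaration time removes any ambiguity about when the window closes.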
What counts as a reportable incident
CERT-In’s list is intentionally broad and covers modern enterprise attack paths: network, endpoint, application, cloud, identity, payment systems, and even emerging tech workloads. If you run critical systems, handle sensitive data, provide digital services to customers, or operate large-scale IT, you should assume that many “serious” security events will fall within the mandatory reporting categories.
A useful way to internalise the scope is to map common alerts to the categories CERT-In highlights.
- Targeted scanning or probing of critical systems
- Unauthorised access or compromise of systems, servers, or data
- Website defacement or unauthorised code insertion
- Malware outbreaks, including ransomware, botnets, spyware, Trojans
- Identity theft, spoofing, phishing and related credential abuse
- DoS or DDoS affecting availability
- Attacks on digital payment systems and financial transaction flows
- Data breach or data leak events
- Malicious or fake mobile applications impacting users
- Unauthorised access to social media accounts linked to the organisation
- Suspicious activity impacting cloud platforms or cloud-hosted applications
- Attacks on IoT and connected systems, OT, SCADA, wireless networks
- Incidents impacting emerging tech deployments (AI/ML, blockchain wallets, robotics, drones, additive manufacturing)
Two nuance points help reduce confusion inside teams:
- A vulnerability by itself is not necessarily an incident. If there is no evidence of exploitation and the response is routine patching, mandatory reporting may not apply.
- When in doubt, design your triage to decide fast. The cost of a quick internal classification is far lower than the cost of missing a reporting deadline.
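To make that fast triage decision repeatable, the mapping from internal alert types to reportable categories can be codified. The alert names and mapping below are hypothetical; a real SOC would maintain this table in its SIEM or SOAR tooling:

```python
# Illustrative triage helper: map internal alert types to the CERT-In
# reportable categories listed above. Alert names are hypothetical.
REPORTABLE_CATEGORIES = {
    "ransomware_detection": "Malware outbreak (ransomware)",
    "web_defacement": "Website defacement or unauthorised code insertion",
    "ddos_alert": "DoS/DDoS affecting availability",
    "credential_phishing": "Identity theft, spoofing, phishing",
    "cloud_audit_anomaly": "Suspicious activity on cloud platforms",
    "payment_fraud_signal": "Attack on digital payment systems",
}

def classify(alert_type: str) -> tuple[bool, str]:
    """Return (is_reportable, category) for a triaged alert.

    Unknown alert types default to 'needs human review' rather than
    'not reportable', so ambiguity escalates instead of disappearing.
    """
    category = REPORTABLE_CATEGORIES.get(alert_type)
    if category is None:
        return (False, "needs human review")
    return (True, category)
```

The default-to-review behaviour is the important design choice: a gap in the mapping should slow a human down, not silently suppress a reportable event.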
What CERT-In expects you to submit (and why templates matter)
CERT-In provides a prescribed reporting format (commonly referenced as Annexure A). Most of the fields are not hard to produce, but they become difficult when teams scramble across emails, chat threads, screenshots, and partial logs.
The format is essentially built around five questions: who is reporting, when it was detected, what happened, what got affected, and what actions have been taken so far. A strong incident process keeps these fields warm from the first hour.
After a quick internal confirmation that the event is reportable, teams typically prepare:
- Reporter details: organisation name, sector, address, and a reachable contact person
- Incident timeline: detection time, discovery source, and whether it is ongoing
- Technical footprint: affected systems, IPs, hostnames, locations, user impact
- Observed indicators: symptoms, suspected vectors, artefacts and evidence
- Actions taken: containment steps, blocks applied, user resets, isolation, restorations
Reports can be sent through CERT-In’s official channels, including email (incident@cert-in.org.in) and the online reporting mechanism. Phone and fax channels are also published for urgent communication, with the expectation that the structured details will still be provided.
Speed comes from preparation, not heroics. Many organisations keep a pre-filled “organisation” section (addresses, sector tags, POC contacts) so the team only fills incident-specific fields during the six-hour window.
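One way to keep the organisation section pre-filled and the incident fields warm is a simple working draft structure. The field names below are illustrative, not the official Annexure A schema, and the organisation details are hypothetical placeholders:

```python
from dataclasses import dataclass, field

@dataclass
class IncidentReportDraft:
    """Working draft aligned to the five questions above.

    Field names are illustrative, not the official Annexure A schema;
    map them to the prescribed format before submission.
    """
    # Pre-filled organisation section, kept current outside any incident
    org_name: str = "Example Pvt Ltd"             # hypothetical
    sector: str = "IT services"                   # hypothetical
    poc_contact: str = "soc-oncall@example.com"   # hypothetical
    # Incident-specific fields, filled during the six-hour window
    detected_at: str = ""
    affected_systems: list[str] = field(default_factory=list)
    indicators: list[str] = field(default_factory=list)
    actions_taken: list[str] = field(default_factory=list)

    def unknown_fields(self) -> list[str]:
        """List fields still empty, to be marked 'under investigation'."""
        empty = []
        if not self.detected_at:
            empty.append("detected_at")
        for name in ("affected_systems", "indicators", "actions_taken"):
            if not getattr(self, name):
                empty.append(name)
        return empty
```

Listing the empty fields explicitly lets the drafter mark them "under investigation" in the submission rather than holding the report back.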
Penalties, accountability, and the real business impact
The reporting duty is backed by the IT Act. Under Section 70B, failure to comply with CERT-In directions can attract penal consequences, including imprisonment of up to one year, a fine (updated to as high as ₹1 crore), or both.
Regulatory risk is only one part of the picture. Delayed reporting often correlates with deeper operational issues: incomplete logging, unclear ownership, fragile escalation paths, or a culture where incidents are quietly “handled” rather than formally managed. Fixing reporting readiness usually improves detection and response maturity at the same time.
Log retention: the 180-day baseline and what it changes
CERT-In’s Directions require organisations to maintain logs of all ICT systems for a rolling period of 180 days, and to keep them within India. That baseline affects architecture choices across on-prem systems, cloud services, and managed platforms.
From an incident response standpoint, 180 days is also a practical minimum. Many investigations start late: a credential theft incident may be detected weeks after initial access, or a data leak may surface only when an external party reports it. Without retention, attribution and scope become guesswork.
The requirement typically spans:
- Network device logs (routers, switches, firewalls, WAFs, load balancers)
- Security telemetry (IDS/IPS, EDR, email security, DLP, IAM events)
- System logs (Windows Event Logs, Linux auth logs, database logs)
- Application and API logs (web servers, app servers, microservices, gateways)
- Cloud audit logs (control-plane activity, storage access, identity events)
Best-practice log retention that stands up in audits and incidents
Many teams focus only on “keeping logs”. The tougher part is keeping logs that are usable, trustworthy, and searchable under pressure.
A reliable approach balances integrity, confidentiality, and availability:
- Centralisation: forward logs to a controlled log store or SIEM instead of leaving them scattered across hosts
- Encryption: protect logs in transit (TLS) and at rest (strong encryption with managed keys)
- Tamper evidence: use append-only controls, WORM-capable storage, hashing, or signing to detect manipulation
- Access control: restrict log access through RBAC and separation of duties, with audited admin actions
- Time hygiene: enforce NTP across systems so timelines match during investigations
- Retention enforcement: implement policy-based lifecycle rules that guarantee 180 days without manual effort
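The tamper-evidence point can be illustrated with a minimal hash-chain sketch: each record's digest covers the previous digest, so editing or deleting any earlier line invalidates everything after it. Real deployments would pair this with WORM storage or signing; this is a toy version using only the standard library:

```python
import hashlib

def chain_logs(lines: list[str]) -> list[str]:
    """Append a chained SHA-256 digest to each log line.

    Each digest covers the line plus the previous digest, so altering or
    removing any earlier record invalidates every digest after it.
    """
    prev = "0" * 64  # genesis value
    out = []
    for line in lines:
        digest = hashlib.sha256((prev + line).encode()).hexdigest()
        out.append(f"{line} sha256={digest}")
        prev = digest
    return out

def verify(chained: list[str]) -> bool:
    """Recompute the chain and confirm no record was altered."""
    prev = "0" * 64
    for rec in chained:
        line, _, digest = rec.rpartition(" sha256=")
        if hashlib.sha256((prev + line).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True
```

The same idea, applied at the batch or file level rather than per line, keeps the overhead negligible while still making silent manipulation detectable in an audit.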
The table below helps situate the CERT-In baseline alongside other widely used reference points. It is common for global organisations to retain longer than 180 days for operational security, while still meeting India-specific storage requirements.
| Standard / Rule | Typical retention expectation | Practical note for implementation |
|---|---|---|
| CERT-In Directions (India) | 180 days minimum | Storage within India; build integrity and controlled access from day one |
| PCI DSS (common audit practice) | 12 months, with the most recent 3 months immediately available | Often drives longer retention for payment environments |
| ISO/IEC 27001 | Policy-defined | Retention should be documented, justified, and reviewed periodically |
| GDPR (EU) | No fixed period | If logs contain personal data, retention must be tied to purpose and minimisation principles |
A six-hour-ready operating model (from detection to report)
Meeting the reporting window consistently requires a repeatable workflow that is rehearsed. The fastest teams treat it like a fire drill: clear roles, clear thresholds, pre-built artefacts.
A proven structure looks like this:
- Detect and alert: monitoring generates an actionable alert with severity, asset context, and timestamp
- Triage fast: validate signal vs noise, classify against CERT-In reportable categories, estimate blast radius
- Escalate to the CERT-In POC: notify the designated point of contact and a backup contact with a standard pack (what, where, when, evidence)
- Draft the report using the prescribed format: fill known fields, attach supporting artefacts, clearly mark unknowns as “under investigation”
- Submit and track: send through the official channel, record acknowledgement details, maintain an internal case file for follow-up requests
This workflow works best when the incident commander has the authority to declare “reportable” without waiting for consensus across multiple committees.
Roles, responsibilities, and third parties: where delays usually happen
Most reporting failures are not caused by missing tools. They happen when accountability is unclear.
A mature structure usually includes:
- A designated CERT-In point of contact (and a backup)
- A security operations or incident response team that can triage 24×7 (internal or managed)
- IT and cloud owners who can provide asset context quickly
- Legal and communications stakeholders who advise on disclosures, while security proceeds with reporting timelines
Third-party providers can help with monitoring, triage, forensics, and log management, yet the reporting duty still needs crisp contractual clarity: who informs whom, in what format, and within what time. If your SOC is outsourced, insist on an explicit “reporting clock” clause and a shared incident classification matrix.
How Atrity Info Solutions Private Limited can support CERT-In readiness
CERT-In compliance is easiest when security engineering, operations, and documentation are treated as one programme rather than isolated tasks.
Atrity Info Solutions Private Limited, an ISO 9001 and ISO 27001 certified Indian IT company, supports organisations across the lifecycle that matters here: building security monitoring foundations, designing incident response workflows, and implementing log retention architectures across on-prem, hybrid, and multi-cloud environments.
Typical support areas include security consulting, deployment of centralised logging and analytics, integration of endpoint and network controls that generate high-fidelity evidence, and process design around incident reporting. For teams that need predictable execution, ISO-aligned quality and security practices can be helpful for change control, access governance, and audit-friendly documentation.
The goal is straightforward: faster detection, cleaner evidence, and a reporting process that works even at 2 a.m.
A practical way to start without overhauling everything
If you want measurable progress in a short cycle, start with two tracks running together.
First, confirm coverage: are your crown-jewel systems, identity plane, internet-facing applications, and payment flows actually generating logs and sending them to a central store with 180-day retention in India?
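That coverage check can be run as a simple audit over an asset inventory. The asset names and inventory structure below are hypothetical; the point is to test forwarding, retention, and storage location in one pass:

```python
# Illustrative coverage check: confirm crown-jewel assets forward logs to
# the central store with at least 180-day retention held in India.
REQUIRED_RETENTION_DAYS = 180

inventory = [  # hypothetical assets
    {"asset": "idp-prod", "forwarding": True,  "retention_days": 365, "region": "in"},
    {"asset": "pay-gw",   "forwarding": True,  "retention_days": 90,  "region": "in"},
    {"asset": "www-edge", "forwarding": False, "retention_days": 0,   "region": "in"},
]

def gaps(assets: list[dict]) -> list[str]:
    """Return assets that fail the forwarding/retention/location baseline."""
    return [
        a["asset"] for a in assets
        if not a["forwarding"]
        or a["retention_days"] < REQUIRED_RETENTION_DAYS
        or a["region"] != "in"
    ]

print(gaps(inventory))  # ['pay-gw', 'www-edge']
```

Running this against a real inventory usually surfaces the same gaps a tabletop exercise would, at a fraction of the cost.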
Second, run a timed tabletop exercise focused only on the six-hour window. Use one realistic scenario (ransomware alert, cloud key exposure, website defacement, payment outage with suspicious indicators) and practise producing a draft Annexure A report from whatever telemetry you have today.
Teams that rehearse once tend to see the bottlenecks immediately: missing asset inventory, inconsistent timestamps, unclear escalation, or logs that exist but cannot be searched quickly. Fix those, and CERT-In reporting stops being a last-minute scramble and becomes just another disciplined part of incident response.