Some systems are under maintenance.
Status of AS200462
Last updated 2025-09-26 20:44:59
The outage was caused by human error: the wrong fibers were pulled. After intervention from the NOC and the local technician, the port is back in service.
Original RFO:
The circuit is down not due to a miscommunication on the change (GIN-CHG00XXXX). Unfortunately, during another migration tonight, the local hands pulled the incorrect fibers. I spoke with the technician working on that migration, and this has been corrected at this time.
We are currently observing an outage affecting carrier NTT (AS2914). We had previously been informed of scheduled maintenance planned for tomorrow. However, after speaking with the NTT NOC, we have learned that there is also ongoing maintenance today. This maintenance was not expected to impact our service, and therefore no notification was sent.
The NTT NOC is currently investigating the issue.
Sparkle has implemented various changes that have resolved the situation. BGP has been enabled again, and the situation is stable. We will monitor the situation carefully and intervene when necessary.
The previous update still applies. The Sparkle NOC has applied various changes, so far without success. The carrier remains drained until service is restored. No customer impact is expected at this point.
After further communication, Sparkle applied another faulty configuration change, which resulted in blackholing of egress traffic. BGP is currently disabled and traffic is flowing via alternative paths to mitigate the impact.
Customers traversing the Sparkle backbone may have experienced packet loss between 10:30 UTC and 12:30 UTC.
We are in constant contact with the Sparkle NOC to address the issue, and we apologize for the inconvenience caused.
Sparkle has reverted the faulty changes and sessions have been enabled again.
We are currently observing partial null-routing of traffic via carrier Sparkle due to a faulty configuration change on Sparkle's side. BGP has been drained, and the impact is mitigated for the moment. We will get in touch with the Sparkle NOC for a permanent solution.