3rd Party Maintenance (12/Feb/2017)

We have received a maintenance notification from Cogent Communications regarding one of our IP Transit connections to the global internet.

This maintenance is due on 12th February 2017 between midnight and 7am, with an expected outage time of 45-60 minutes.

No outage to Netnorth customers is expected as we have multiple IP Transit connections to the global internet.  Our routers should gracefully re-route any affected traffic paths to our other connectivity providers.

Emergency Router Maintenance (28/Jan/2017)

Following on from the router crash experienced yesterday, we have performed an emergency upgrade of the Cisco IOS-XE software on all of our internet-facing routers.

This update should resolve the issue that was triggered yesterday.

No outage to customers should have been experienced today, but there may have been brief periods of slightly increased latency during route changes.

Cisco Router Crash (27/Jan/2017)

At 3:52pm today, one of our Cisco ASR routers experienced a crash within its routing engine.

This caused the router to stop routing instantly, and any destinations reached via that router experienced an outage.

Unfortunately, this did not just sever connectivity cleanly: it started causing “flapping” (where routes are announced and withdrawn over and over again, causing instability).  Once this flapping was identified, we severed all network connectivity to the affected router.
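
For the curious, this is the failure mode that BGP route-flap dampening is designed to contain.  A minimal Python sketch of the idea (illustrative only, with made-up thresholds – not our production tooling): each flap adds a penalty, the penalty decays over time, and the route is suppressed once it crosses a limit.

    import time

    PENALTY_PER_FLAP = 1000   # added on every withdraw/re-announce cycle
    SUPPRESS_LIMIT = 2000     # suppress the route above this penalty
    HALF_LIFE_SECONDS = 900   # penalty halves every 15 minutes

    class FlapTracker:
        def __init__(self):
            self.penalty = 0.0
            self.last_update = time.monotonic()

        def _decay(self):
            now = time.monotonic()
            self.penalty *= 0.5 ** ((now - self.last_update) / HALF_LIFE_SECONDS)
            self.last_update = now

        def record_flap(self):
            self._decay()
            self.penalty += PENALTY_PER_FLAP

        def suppressed(self):
            self._decay()
            return self.penalty >= SUPPRESS_LIMIT

    tracker = FlapTracker()
    for _ in range(3):
        tracker.record_flap()
    print(tracker.suppressed())  # True – three rapid flaps exceed the limit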

After a few minutes, BGP failover kicked in and traffic re-routed via alternative paths, as it is designed to do.  This is how a normal crash would be handled.
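
For readers unfamiliar with the mechanism, here is a much-simplified Python illustration of BGP failover (the prefix, provider names and attributes are invented, and the real decision process has many more tie-breakers): routers keep every learned path to a destination, and when the best path is withdrawn the next-best candidate is promoted automatically.

    # Candidate paths to one (documentation-range) prefix, as a router might hold them.
    paths = {
        "198.51.100.0/24": [
            {"via": "transit-a", "local_pref": 200, "as_path_len": 2},
            {"via": "transit-b", "local_pref": 100, "as_path_len": 3},
        ],
    }

    def best_path(candidates):
        # Highest local-preference wins; shorter AS path breaks ties
        # (a reduced form of the real BGP best-path algorithm).
        return max(candidates, key=lambda p: (p["local_pref"], -p["as_path_len"]))

    prefix = "198.51.100.0/24"
    print("before:", best_path(paths[prefix])["via"])  # transit-a

    # The crashed router's paths are withdrawn...
    paths[prefix] = [p for p in paths[prefix] if p["via"] != "transit-a"]
    print("after: ", best_path(paths[prefix])["via"])  # transit-b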

The router crashed in such a way that it had to be physically power cycled before control could be regained.  We then brought its routing back online in a slow and controlled fashion to prevent any further disruption to the network.

After some research, it appears that we hit CSCus82903, a known Cisco bug in the version of routing software our routers run.

This was triggered when attempting to bring our new IP connectivity provider, GTT, online this afternoon – a normally routine procedure with no impact on customer traffic.

Our routers are currently stable and operating normally; however, we need to perform some emergency maintenance to upgrade them to a patched software version provided by Cisco.

This should be able to occur without causing any additional outages, although the network routing should be considered “at risk” during the actual software upgrade.

In the meantime, our GTT connection has been kept offline to prevent the issue reappearing.  We will re-establish the connection once the software upgrades are complete.

3rd Party Maintenance (23/Feb/2017)

We have received a maintenance notification from Virgin Media regarding one of our metro fibre circuits between Bolton and Manchester.

This maintenance is due on 23rd February 2017 between midnight and 7am, with an expected outage time of 20 minutes.

No outage to Netnorth customers is expected as we have multiple metro fibre links over diverse paths from different fibre providers.  Our network will automatically re-route any traffic via our other fibres during the outage.

Level3 Transit Failure

At 11:55pm tonight, our connection to Level3 appeared to disconnect.  The symptoms are identical to the previous occasions where the Level3 router rebooted, so we assume this to be the case again tonight.

The connection is slowly coming back online, which further supports this suspicion.

As Level3 appear to be unconcerned by a router that reboots itself, we have already made plans to migrate to an alternative transit supplier.  This is due to take place in early February 2017.

Once this replacement is online, we will likely retire our Level3 connectivity entirely.

Some routes may have experienced a brief period of packet loss or increased latency during this transit fault whilst routers around the internet changed paths to our alternate transit links.  The majority of destinations, however, would have been entirely unaffected.

As this re-routing is performed by routers outside our control, we have no way to prevent this brief packet loss to some destinations.

Any paths via our peering links or other transit connectivity are unaffected.

Network Connectivity

We’re seeing packet loss on the LINX Extreme network this morning.

We’ve also got alerts coming in for our London location, which is routed entirely separately from our Bolton/Manchester locations, so it looks like the entire LINX LAN is affected.

We’ve shunted most traffic over to the LINX Juniper network, so this should die down now.  We’ve not had any official word from LINX yet; however, we will continue to monitor and migrate traffic paths where possible.
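
The “shunt” is a routing-policy change rather than anything physical.  A hedged Python sketch of the idea (session names and values are invented – not our actual router configuration): lowering the local-preference on routes learned over the Extreme LAN makes the paths learned over the Juniper LAN win best-path selection, so traffic drains away from the faulty LAN.

    # Hypothetical peering sessions, one per LINX LAN.
    sessions = [
        {"peer": "peer-a", "lan": "linx-extreme", "local_pref": 150},
        {"peer": "peer-b", "lan": "linx-juniper", "local_pref": 150},
    ]

    def deprefer_lan(sessions, lan, new_pref=50):
        """Drop local-preference on every session over the given LAN."""
        for s in sessions:
            if s["lan"] == lan:
                s["local_pref"] = new_pref
        return sessions

    for s in deprefer_lan(sessions, "linx-extreme"):
        print(s["peer"], s["lan"], s["local_pref"])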

Netnorth Support

Storms / Power – Bolton

There is currently a storm over the Bolton area (with very nice fork lightning!), which caused a brief power outage this evening at around 7pm UK local time.

Our UPS units continued to operate during the outage with no loss of power to the datacentres, and our generators started up, ready to take over the load if required.

This was not required as the mains was restored within a few seconds.

Our generators returned to their idle state after a few minutes once mains power stability was confirmed.

Should there be any further outages, the generators will re-start automatically.  The process of transferring the load to the generators in a mains-failure situation is fully automated.
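
For illustration, that automated sequence can be modelled as a tiny Python function (the timings are assumptions, not our actual transfer-switch settings): the UPS carries the load with no break, the generators run up, and the load only transfers if the mains stays down longer than the run-up time.

    GEN_START = 30   # assumed generator run-up time, in seconds
    STABILITY = 180  # assumed mains-stability window before generators stand down

    def load_source(t, outage_len):
        """Which source carries the load t seconds after a mains failure
        lasting outage_len seconds."""
        if t < outage_len:                 # mains still down
            return "UPS" if t < GEN_START else "generator"
        if t < outage_len + STABILITY:     # mains back, confirming stability
            return "mains (generators running until stability confirmed)"
        return "mains (generators idle)"

    print(load_source(t=5, outage_len=10))    # UPS – outage too short to transfer
    print(load_source(t=60, outage_len=10))   # mains, generators still running
    print(load_source(t=400, outage_len=10))  # mains (generators idle)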

LINX Extreme LAN – IGMP issue

Yesterday, at around 11:50am, we had some reports of unusual activity on our ring network.

We traced this to elevated CPU usage on our switches.  Further investigation pointed to a high level of IGMP traffic being received.

We further traced this to our LINX Extreme peering port in London and shut the port down.  This resolved the issue instantly.
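
The tracing step is essentially a rate comparison across ports.  A minimal Python sketch (the port names, figures and threshold are invented for illustration): sample per-port IGMP packet rates from whatever monitoring you have, and flag any port far above the normal baseline.

    def igmp_outliers(pps_by_port, threshold_pps=1000):
        """Return ports receiving IGMP traffic above an absolute threshold."""
        return [port for port, pps in pps_by_port.items() if pps > threshold_pps]

    samples = {
        "core1:Te0/1": 4,               # normal IGMP chatter
        "core1:Te0/2": 2,
        "edge1:linx-extreme": 48000,    # the storm
    }

    for port in igmp_outliers(samples):
        print(f"IGMP storm suspected on {port} – consider shutting the port down")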

We contacted our Connexions provider, who provide our LINX ports.  They investigated to ensure the traffic was not originating internally and then passed the query on to LINX.

LINX confirmed this morning that they had a member port injecting IGMP traffic into the peering LAN yesterday and that this has now been resolved.

We have re-enabled our LINX Extreme port and confirmed the issue no longer exists.

We will re-enable our peering connections via this LAN shortly.