3rd Party Maintenance (12/Feb/2017)

We have received a maintenance notification from Cogent Communications regarding one of our IP Transit connections to the global internet.

This maintenance is due on 12th February 2017 between midnight and 7am, with an expected outage time of 45-60 minutes.

No outage to Netnorth customers is expected as we have multiple IP Transit connections to the global internet.  Our routers should gracefully re-route any affected traffic paths to our other connectivity providers.
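
For illustration, the sketch below shows the basic idea of that failover in Python: with more than one transit session available, the best usable exit is simply re-selected when one provider's BGP session drops.  The provider names, preference values and data structures are illustrative only and are not taken from our router configuration.

```python
# Illustrative sketch only -- not our router configuration.
# Models how a multi-homed edge falls back to another transit provider
# when one BGP session goes down (e.g. during the Cogent maintenance).

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TransitPath:
    provider: str      # "Cogent" is from the notice above; "OtherTransit" is a placeholder
    local_pref: int    # higher local-preference wins in BGP best-path selection
    session_up: bool   # state of the BGP session to this provider

def best_path(paths: List[TransitPath]) -> Optional[TransitPath]:
    """Pick the preferred usable exit.  Real BGP compares many attributes;
    local-preference is the usual knob for choosing between transit exits."""
    usable = [p for p in paths if p.session_up]
    if not usable:
        return None                       # no transit left at all -> outage
    return max(usable, key=lambda p: p.local_pref)

paths = [
    TransitPath("Cogent", local_pref=200, session_up=True),
    TransitPath("OtherTransit", local_pref=100, session_up=True),
]

print(best_path(paths).provider)          # "Cogent" while its session is up

# The maintenance window starts: the Cogent session drops, its routes are
# withdrawn, and the same selection now returns the alternative provider.
paths[0].session_up = False
print(best_path(paths).provider)          # "OtherTransit"
```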

Emergency Router Maintenance (28/Jan/2017)

Following on from the router crash experienced yesterday, we have performed an emergency upgrade of the Cisco IOS-XE software on all of our internet-facing routers.

This update should resolve the issue that was triggered yesterday.

No outage to customers should have been experienced today, but there may have been brief periods of slightly increased latency during route changes.

Cisco Router Crash (27/Jan/2017)

At 3:52pm today, one of our Cisco ASR routers experienced a crash within its routing engine.

This caused the router to stop routing instantly, and any destinations reached via the router experienced an outage.

Unfortunately, this did not just sever connectivity cleanly: it started causing “flapping” (where routes are repeatedly introduced and withdrawn, causing instability).  Once this flapping was identified, we severed all network connectivity to the affected router.
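
For context, the repeated withdraw/re-announce cycles of a flapping route are normally held in check by route-flap dampening.  The sketch below illustrates the basic penalty-and-decay idea (in the style of RFC 2439); the thresholds and half-life are generic example values, not our actual router settings.

```python
# Illustrative sketch of BGP route-flap dampening (RFC 2439 style).
# Values below are generic examples, not our router configuration.

import math

PENALTY_PER_FLAP = 1000
SUPPRESS_LIMIT   = 2000   # suppress the route once the penalty exceeds this
REUSE_LIMIT      = 750    # re-advertise once the decayed penalty drops below this
HALF_LIFE_SECS   = 900    # the penalty halves every 15 minutes of stability

class DampenedRoute:
    def __init__(self) -> None:
        self.penalty = 0.0
        self.suppressed = False

    def record_flap(self) -> None:
        """Each withdraw/re-announce cycle adds to the penalty."""
        self.penalty += PENALTY_PER_FLAP
        if self.penalty >= SUPPRESS_LIMIT:
            self.suppressed = True        # stop advertising the unstable route

    def decay(self, seconds: float) -> None:
        """The penalty decays exponentially, so a quiet route is reused again."""
        self.penalty *= math.pow(0.5, seconds / HALF_LIFE_SECS)
        if self.suppressed and self.penalty < REUSE_LIMIT:
            self.suppressed = False       # route has been stable long enough

route = DampenedRoute()
for _ in range(3):                        # three rapid flaps in a row...
    route.record_flap()
print(route.suppressed)                   # True  -> withheld from neighbours

route.decay(45 * 60)                      # ...followed by 45 quiet minutes
print(route.suppressed)                   # False -> safe to re-advertise
```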

After a few minutes, BGP failover took over and traffic re-routed via alternative paths as it is designed to do.  This is how a normal crash would be handled.

The router crashed in such a way that it had to be physically power cycled to regain control afterwards.  We then brought its routing online in a slow and controlled fashion to prevent any further disruption to the network.


After some research, it appears that we hit CSCus82903, a known Cisco bug in the version of routing software we run.

The bug was triggered this afternoon when we attempted to bring our new IP connectivity provider, GTT, online; this is normally a routine procedure with no impact on customer traffic.


Our routers are currently stable and operating normally; however, we need to perform some emergency maintenance to upgrade the routers to a patched software version provided by Cisco.

This should be possible without causing any additional outages, although network routing should be considered “at risk” during the upgrade itself.

In the meantime, our GTT connection has been kept offline to prevent the issue reappearing.  We will re-establish the connection once the software upgrades are complete.

3rd Party Maintenance (23/Feb/2017)

We have received a maintenance notification from Virgin Media regarding one of our metro fibre circuits between Bolton and Manchester.

This maintenance is due on 23rd February 2017 between midnight and 7am, with an expected outage time of 20 minutes.

No outage to Netnorth customers is expected as we have multiple metro fibre links over diverse paths from diverse fibre providers.  Our network will automatically re-route any traffic via our other fibres during the outage.

Level3 Transit Failure

At 11:55pm tonight, our connection to Level3 appeared to disconnect.  The symptoms are identical to previous occasions where the Level3 router rebooted, so we assume this to be the case again tonight.

The connection is slowly coming back online, which further supports this suspicion.

As Level3 appear to be unconcerned by a router that reboots itself, we have already made plans to migrate to an alternative transit supplier.  This is due to take place in early February 2017.

Once this replacement is online, we will likely retire our Level3 connectivity entirely.

Some routes may have experienced a brief period of packet loss or increased latency during this transit fault whilst routers around the internet changed paths to our alternate transit links.  The majority of destinations, however, would have been entirely unaffected.

As this is caused by routers outside of our control, we have no way to prevent this brief packet loss to some destinations.

Any paths via our peering links or other transit connectivity are unaffected.

Network Connectivity

We’re seeing packet loss on the LINX Extreme network this morning.
We’ve also got alerts coming in for our London location; this is routed entirely separately from our Bolton/Manchester locations, so it looks like the issue is affecting the entire LINX LAN.

We’ve shunted most traffic over to the LINX Juniper network so this should die down now. We’ve not had any official word from LINX yet, however we will continue to monitor and migrate traffic paths where possible.

Netnorth Support

LINX Extreme LAN – IGMP issue

Yesterday, at around 11:50am, we had some reports of unusual activity on our ring network.

We traced this down to elevated levels of CPU usage on our switches.  Further investigation pointed to a high level of IGMP traffic being received.

We further traced this to our LINX Extreme peering port in London and shut the port down.  This resolved the issue instantly.
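
For illustration only, the sketch below shows the kind of quick per-port comparison that points to a single uplink: count the IGMP packets seen on each candidate interface over a short window.  It assumes the scapy library, root privileges and hypothetical interface names; it is not our actual monitoring tooling.

```python
# Illustrative sketch only: compare IGMP packet counts across uplinks to see
# where a flood is entering the network.  Requires scapy and root privileges;
# the interface names below are hypothetical.

from collections import Counter
from scapy.all import sniff

# Hypothetical candidate uplinks, e.g. core ring port vs. LINX peering port
INTERFACES = ["eth0", "eth1"]
WINDOW_SECONDS = 10

igmp_counts = Counter()

for iface in INTERFACES:
    # The BPF filter "igmp" matches IP protocol 2 (IGMP) only
    packets = sniff(iface=iface, filter="igmp", timeout=WINDOW_SECONDS)
    igmp_counts[iface] = len(packets)

for iface, count in igmp_counts.most_common():
    print(f"{iface}: {count} IGMP packets in {WINDOW_SECONDS}s")

# A disproportionately high count on one uplink identifies the port to shut
# down while the upstream operator investigates.
```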

We contacted our Connexions provider, who provide our LINX ports.  They investigated to ensure the traffic was not originating internally and then passed the query on to LINX.

LINX confirmed this morning that a member port was injecting IGMP traffic into the peering LAN yesterday and that this has now been resolved.


We have re-enabled our LINX Extreme port and confirmed the issue no longer exists.

We will re-enable our peering connections via this LAN shortly.

Cisco switch stack issue (DS-101)

Earlier this evening we identified one of our Cisco switch stacks misbehaving, causing a constant stream of stack reconvergences.  These repeated reconvergence events have been causing layer 2 network instability for traffic flowing via the stack of nine switches.

We have just completed a physical inspection of all stack cables, including fully reseating the cables as per Cisco’s guidelines; however, the problem still persists.

It is possible that the stack issues are caused by a software fault within Cisco IOS.

We are currently applying an upgrade to the switch stack and will reboot the full stack afterwards to activate the changes.

Cisco advise doing this as a full cold reboot by removing power from the stack members, so this reload will take longer than usual.


Any customers connected to a different switch stack will only see a momentary outage during a layer 2 reconvergence.

Customers directly connected to switch stack DS-101 will see a total outage for up to 15 minutes.

Connectivity Issues (mostly BT)

From just before 8am UK local time this morning, we have seen reports of intermittent connectivity issues with both BT and Plusnet.

We believe this is due to a power outage in one of the Telehouse datacentres in London.

(This is in addition to a reported power outage at Telecity Harbour Exchange Square in London yesterday, which also affected BT.)

This outage does not directly affect Netnorth, however it seems to be causing congestion for BT and Plusnet which means you may have trouble reaching some destinations if you use one of these providers for your internet connection.

This may also affect Virgin Media; however, we have a direct link to VM in Manchester which bypasses most congestion on their network.


As the fault lies outside our network, we are unable to take any remedial action from our side.  The issue currently lies within BT’s network.