Cisco switch stack issue (DS-101)

Earlier this evening we identified that one of our Cisco switch stacks was misbehaving, producing a constant stream of stack reconvergences. These repeated reconvergence events have been causing layer 2 network instability for traffic flowing via the stack of nine switches.

We have just completed a physical inspection of all stack cables, including a full reseat of each cable as per Cisco’s guidelines; however, the problem persists.

It is possible that the stack issues are caused by a software fault within the Cisco IOS software.

We are currently applying a software upgrade to the switch stack and will reboot the full stack afterwards to activate the changes.

Cisco advise performing this as a full cold reboot, removing power from the stack members, so this reload will take longer than usual.

Customers connected to a different switch stack will see only a momentary outage during layer 2 reconvergence.

Customers directly connected to switch stack DS-101 will see a total outage for up to 15 minutes.

Generator Tests – BOL

After a minor change in operating procedure, we briefly neglected to post the results of our weekly generator tests. For completeness, here are the intervening tests:

Unit  Time           Mode      Result

BOL1  10:12 - 10:38  Off load  Passed
BOL2  10:56 - 11:13  Off load  Passed

BOL1  10:24 - 10:42  On load   Passed
BOL2  10:53 - 11:16  On load   Passed

BOL1  10:20 - 10:49  Off load  Passed
BOL2  11:02 - 11:22  Off load  Passed

BOL1  10:15 - 10:25  Off load  Passed
BOL2  10:55 - 11:14  Off load  Passed

BOL1  10:08 - 10:32  Off load  Passed
BOL2  10:41 - 11:03  Off load  Passed

Each generator was started, ran off-load or on-load for the durations given above, then detected mains and shut down within the expected timeframe.
All measured values were within their normal ranges.

Hypervisor Restart

At 4:35pm we received alerts that one of our hypervisors in Telecity Williams House, Manchester had stopped processing disk activity.

No hardware alerts were present, so a forced reboot of the hypervisor was required.

The server is now responding normally, and all virtual machines have powered back up.

Connectivity Issues (mostly BT)

Since just before 8am UK local time this morning, we have seen reports of intermittent connectivity issues affecting both BT and Plusnet.

We believe this is due to a power outage in one of the Telehouse datacentres in London.

(This is in addition to a reported power outage in Telecity Harbour Exchange Square in London yesterday, which also affected BT.)

This outage does not directly affect Netnorth; however, it appears to be causing congestion for BT and Plusnet, which means you may have trouble reaching some destinations if you use one of these providers for your internet connection.

This may also affect Virgin Media; however, we have a direct link to Virgin Media in Manchester which bypasses most congestion on their network.

As the fault lies outside our network, we are unable to take any remedial action from our side. At this time, the issue lies within BT’s network.