The network is live, and improved!

The big switch-on
Last Friday (24th July) we switched on our network and brought the internet live to all of our connected customers. Understandably, many of you connected at once, and this surfaced some new issues we hadn't seen in testing, all relating to the high load on the network. We've therefore classed this first week as a testing week and have been working hard to improve the network. Working on a live network inevitably causes some outages, so thank you for your patience whilst we've made these improvements.

What we've learned and improved
The main speed issues were related to the configuration and design of our core network. Without getting too technical, here's what we've done to improve things:
  • Reduced the latency (ping) of our main uplink to the internet and improved its connection stability through re-alignment and reconfiguration.
  • Optimised how traffic flows over the network, so it holds up much better at times of heavy congestion.
  • Re-organised the recovery procedure so that the network comes back online much quicker after an outage.

Future plans and improvements
Despite the massive improvements outlined above, some customers are still experiencing poor performance due to a weak signal between their receiver and the mast. We need to assess each of these cases individually, because the reason for a weak signal will be different for each customer. We're able to monitor signal levels from Bliss HQ, so we'll be in touch with anyone who's affected by this.

We're also in talks with St Helens Church about the possibility of locating some extra equipment on the church tower. This would allow us to direct extra signal to weaker areas and improve the overall performance of the network.

Some more technical details
For the technically minded: we were transferring the entire BGP routing table over our wireless link, and this was why recovery from an outage took so long. We've been experimenting with removing this in favour of default routes in our local equipment at Treeton, with the full routing tables handled by our edge equipment over in the DC. The results look promising and we've just implemented this across the network.
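
If you'd like a rough picture of what that change means, the little Python sketch below shows the idea: the local kit at Treeton holds only a couple of routes and simply sends everything it doesn't recognise over the wireless link towards the DC, so there's no huge table to re-learn when the link comes back after an outage. The names and addresses here are made up purely for illustration, and our equipment of course runs proper routing software rather than Python.

  import ipaddress

  # Hypothetical sketch of "default routes in our local equipment":
  # the Treeton kit only holds a tiny routing table, because anything it
  # doesn't know about is sent over the wireless link towards the DC edge.
  # Names and addresses below are illustrative, not our real configuration.
  local_routes = {
      ipaddress.ip_network("0.0.0.0/0"): "wireless uplink to DC edge",   # the default route
      ipaddress.ip_network("192.168.0.0/16"): "local customer network",  # example local range
  }

  def next_hop(destination: str) -> str:
      # Longest-prefix match over the tiny local table.
      addr = ipaddress.ip_address(destination)
      best = max((net for net in local_routes if addr in net), key=lambda n: n.prefixlen)
      return local_routes[best]

  print(next_hop("8.8.8.8"))       # -> wireless uplink to DC edge
  print(next_hop("192.168.1.10"))  # -> local customer network

Because the local table stays this small, the full internet routing table only ever lives on the edge equipment in the DC, which is what makes recovery after an outage so much quicker.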