#2 My 1993 predictions for 2003 – hah!

March 6, 2007

The Importance of Data and Multimedia.

After looking at my 1993 forecasts for 2003 for Traditional Telephony: Advanced Services in #1 My 1993 predictions for 2003 – hah!, let’s look at the Importance of Data and Multimedia. I’ll mark the things I got right in green, things I got wrong in red, and the maybes in orange.

The public network operators have still not written down all their 64kbit/s switching networks, but all now have the capability of transporting integrated data in the form of voice, image, data, and video. At least 50% of the LAN-originated ATM packets carried by the public operators are non-voice traffic. Video traffic such as video mail, video telephone calls, and multimedia is common. Information delivered to the home, business, and individuals while on the move is managed by advanced network services integrated with customers’ equipment, whether that be a simple telephone, smart telephone, PC, or PDA.

Well it’s certainly the case that carriers have still not written off their 64kbit/s switching networks and I guess this will take several decades more to happen! Video is still not that common, but with the advent of YouTube and Joost, maybe video nirvana is just around the corner. I’m not sure that we have seen advanced network services embedded in devices either! However, on consideration maybe Wi-Fi fits the prediction rather nicely?

Most public operators are now not only transporting video but also delivering and originating information, business video, and entertainment services. Telecommunications operators have strong alliances or joint ventures with information providers (IPs), software houses, and equipment manufacturers, as it is now realised that none by themselves can succeed or can invest sufficient skills and capital to succeed alone. Telecommunications operators have developed strong software skills to support these new businesses. Many staff, who were previously working in the computer industry, have now moved to the new sunrise companies of telecommunications.

Telecommunications operators have not really developed strong software skills in the sense of developing applications themselves, but they have certainly embraced integration! The last prediction was interesting, as it could be said to pertain to the Internet bubble, when many staff moved to the telecoms industry from the computing industry. However, they were forced to leave just as rapidly when the bubble burst!

The original paper went on to list several requirements:

  • the network should be able to store the required information
  • the network should have the capability to transfer and switch multiple media instead of just voice
  • multimedia network services need to be integrated with desktop equipment to form a seamless feature-rich application
  • rapidly changing customer requirements mean that operators should be able to reduce the new product development cycle and launch products quickly and effectively, ahead of the competition; being proactive instead of reactive to competitive moves would offer a considerable edge.

Information storage on the network is pretty much a reality when you consider on-line back-up storage services, though it has hardly been pervasive. Few are generally willing to pay for on-line storage when hard disk prices have been plummeting while their capacities have been exploding.

But, what I think I had in mind was the storage of information in the network that was then held (and still is) on personal computers, and also network-based productivity tools. This has happened in a variety of ways:

  • The rise and fall of Application Service Providers in the bubble
  • The success of certain network-based services such as SalesForce.com
  • The recent interest in pushing network-based productivity tools by the large media companies such as Google and Yahoo.

The comment about converged networks is still a theme in progress, but the integration between network-based services and desktop equipment is pretty much the norm these days with the Internet.

This is a mixed bag of success really, and the language seems quite dated now. It shows just how much the Internet has transformed our view of network-based data services.


Where now frame relay?

February 28, 2007

The invite to the 2007 MPLS Roadmap conference provides some interesting and timely ‘facts’ about frame relay and its current use by service providers.

Fact #1: Carriers are rushing to move from legacy voice/data networks to an MPLS-based network for their enterprise services.

Fact #2: Frame relay, the dominant enterprise network service of the past 10 years, runs on the carriers’ legacy networks.

Fact #3: Some carriers will no longer respond to a frame relay RFP…and even those that do are including contract language that offers little – sometimes no – assurance that frame relay will be supported for the term of the agreement.

Reading this, I thought it about time I brought the story of frame relay up to date. I first wrote about frame relay (FR) services an absolute age ago in 1992, when they were the very latest innovation alongside SDH and ATM. The core innovation in the frame relay protocol was its simplicity and data efficiency compared to its complex predecessor, X.25, and it filled the gap for an efficient wide area network service protocol. Interestingly, this is the same KISS motivation that lies behind Ethernet.

Throughout the 90s, frame relay services went from strength to strength and were the principal data service that carriers pushed enterprises to use as the basis of their wide area networks (WANs). Mind you, there were mixed views from enterprises about whether they actually trusted frame relay services, and they have the same concerns about IP-VPNs today.

As FR is a packet-based protocol running on a shared carrier infrastructure (usually over ATM), many enterprises wouldn’t touch it with a bargepole and stuck to buying basic TDM E1 / T1 bandwidth services and managing the WAN themselves. It was the financial community that principally took this view, and they have probably not changed their minds even now, with the advent of MPLS IP-VPNs. Remember, the principal traffic protocol being transferred over WANs is IP!

Often things seem quite funny when looked at in hindsight. In The rise and maturity of MPLS, I talked about the Achilles heel in connectionless IP data services as exemplified by the Internet – lack of Class of Service capability. This can have a tremendous impact when the network is carrying real-time services such as voice or video traffic.

FR services, because they were run over ATM networks, actually had this capability, and carriers offered several levels of service derived from the ATM bearer, such as:

  • best effort (CIR=0, EIR=0)
  • guaranteed (CIR>0, EIR=0)
  • bandwidth on demand (CIR>0, EIR>0)

Here CIR is the Committed Information Rate and EIR the Excess Information Rate, and these were applied to what were known as Permanent Virtual Circuits (PVCs), although towards the end of the 90s temporary Switched Virtual Circuits (SVCs) became available. Voice over frame relay services were also launched around the same time.
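To make the CIR/EIR distinction concrete, here is a minimal sketch of the ingress policing idea behind those three service levels: traffic within CIR is forwarded, traffic between CIR and CIR+EIR is forwarded but marked discard-eligible (DE), and anything beyond that is dropped. This is an illustration only – real frame relay switches policed committed and excess burst sizes (Bc/Be) over a measurement interval Tc, and the rates below are invented for the example.

```python
# Sketch of frame relay ingress policing against CIR and EIR
# (illustrative; real switches used burst sizes Bc/Be over an interval Tc).

def police(bits_in_interval: int, cir_bits: int, eir_bits: int) -> str:
    """Classify the traffic a customer sent in one measurement interval."""
    if bits_in_interval <= cir_bits:
        return "forward"            # within the committed rate
    if bits_in_interval <= cir_bits + eir_bits:
        return "forward, DE=1"      # excess burst: mark discard-eligible
    return "drop"                   # beyond CIR + EIR

print(police(8_000,   cir_bits=64_000, eir_bits=64_000))  # forward
print(police(100_000, cir_bits=64_000, eir_bits=64_000))  # forward, DE=1
print(police(200_000, cir_bits=64_000, eir_bits=64_000))  # drop
```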

So frame relay did not suffer from the same lack of QoS as IP, but it still got steamrollered by the IP and MPLS bandwagon. Also, unlike IP, because frame relay was based on telecommunications standards defined by the ITU, inter-connect standards existed. Because of this, carriers did inter-connect frame relay services, enabling the deployment of multi-carrier WANs. Again, this was, and is, a weakness in MPLS as described in MPLS and the limitations of the Internet. It’s a strange world sometimes!

One of the principal reasons for frame relay’s downfall lay with its associated bearer service, ATM. In The demise of ATM I talked about how ATM was extinguished by IP, and the direct consequence of this was that the management boards of carriers were unwilling to invest any more in ATM-based infrastructure. Thus FR became tarred with the same brush and bracketed as a legacy service as well.

The second reason was the rise of MPLS and its associated wide area service, MPLS-based IP-VPNs. Carriers wanted to reduce infrastructure CAPEX and OPEX by focusing on IP and MPLS-based infrastructures only.

Traffic growth by protocol (Source: Infonetics in Feb 07 Capacity Magazine)

Where has it all ended up? Although frame relay did not have as much religion associated with it as ATM and IP (as exemplified by ‘bellheads’ and ‘netheads’), it was the main data service for WANs for pretty much a decade. When MPLS IP-VPNs came along in the early years of this century, there was tremendous resistance from the carrier frame relay data network folks, who never believed for one moment that frame relay services were doomed.


We are now pretty much up to date. Carriers still have much legacy frame relay equipment installed and frame relay services are still being supported for their customers, but frame relay can undoubtedly be classed as a legacy service.

As with ATM and Ethernet, many carriers are migrating their legacy data services to be carried over their core MPLS networks, encapsulated in ‘tunnels’. Thus frame relay could be considered to be just an access protocol in this new IP/MPLS world, supported while customers are still using it.

Any Transport over MPLS is what it is all about these days. Ethernet, frame relay, IP and even ATM services are all being carried over MPLS-based core networks.

As an access service and because of its previous popularity with enterprises, frame relay will live on but it’s been several years since there has been any significant investment in frame relay service infrastructure in the majority of carriers around the world.

Other services that did not survive the IP and MPLS onslaught in the same time-frame were Switched Multi-megabit Data Service (SMDS) and the ill-fated Novell Network Connect Services (NCS) – more on this in a future post.

One protocol that did not suffer, but actually flourished, was Ethernet, whose genesis, like IP’s, lay in local area networks. Indeed, it could be said that LAN protocols have won the battles and wars with the standards defined by the telecommunications industry.


#1 My 1993 predictions for 2003 – hah!

February 27, 2007

#1 Traditional Telephony: Advanced Services

Way back in 1993 I wrote a paper entitled Vision 2003 that dared to try and predict what the telecommunications industry would look like ten years in the future. I looked at ten core issues; telephony services was the first. I thought it might be fun to take a look at how much I got right and how much I got wrong! This is a cut-down version of the original and I’ll mark the things I got right in green, things I got wrong in red.

Caveats: Although it is stating the obvious, it should be remembered that nobody knows the future and, even though we have used as many external attributable market forecasts from reputable market research companies as possible to size the opportunities, they, in effect, know no more than ourselves. These forecasts should not be considered as being quantitative in the strictest sense of the word, but rather as qualitative in nature. Also, there is very little by way of available forecasts out to the year 2003 and certainly even fewer that talk about network revenues. You only need look back to 1983 and see whether the phenomenal changes that happened in the computer industry were forecast to see that forecasting is a dangerous game.

Well I had to protect myself didn’t I?

As far as users are concerned, the differentiation between fixed and cellular networks will have completely disappeared and both will appear as a seamless whole. Although there will still be a small percentage of the market using the classical two-part telephone, most customers will be using a portable phone for most of their calls. Data and video services, as well as POTS, will be key business and residential services. Voice and computer capability will be integrated together and delivered in a variety of forms such as interactive TVs, smart phones, PCs, and PDAs. The use of fixed networks is cheap, so portable phones will automatically switch across to cordless terminals in the home or PABXs in the office to access the broadband services that cannot be delivered by wireless.

A good call on the dominance of mobile phones (it’s quaint that I called them “portable phones”; I guess I was thinking of the house-brick-sized phones of that era). The convergence of mobile and fixed phones still eludes us even in 2007 – now that really is amazing!

Network operators have deployed intelligent networks on a network-wide basis and utilise SDH together with coherent network-wide software management to maximise quality of service and minimise cost. As all operators have employed this technology, prices are still dropping and price is still the principal differentiator on core telephony services. Most individuals have access to advanced services such as CENTREX and network based electronic secretaries that were only previously available to large organisations in the early 1990s. Because of severe competition, most services are now designed for, and delivered to, the individual rather than being targeted at the large company. All operators are in charge of their own destiny and develop service application software in-house, rather than buying it from 3rd party switch vendors.

A real mixed bag here, I think. I was certainly right about the dominance of the mobile phone but way out about operators all developing their own service application software. I rabbited on for several pages about Intelligent Networks (IN) bringing the power of software to the network. This certainly happened, but it didn’t lead to the plethora of new services that were all the rage at the time – electronic secretary services etc. What we really saw was a phenomenal rise in annoying services based on Automatic Call Distribution (ACD) – “Press 1 for…” then “Press n for…” – so loved by CFOs.

Customers, whether in the office or at home, will be sending significant amounts of data around the public networks. Simple voice data streams will have disappeared to be replaced with integrated data, signalling, and voice. Video telephony is taken for granted and a significant number of individuals use this by choice. There is no cost differential between voice and video.

All public network operators are involved in many joint ventures delivering services, products, software, information and entertainment services that were unimagined in 1993.

Tongue in cheek, I’m going to claim that I got a good hit predicting Next Generation Networks that integrate services on a single network. Wouldn’t it have been great if I had predicted that it would all be based on IP? It was a bit too early for that at the time. Wow, did I get video telephony wrong! This product sector has not even started, let alone taken off.

What I really did not see at all, because it was way too far into the future, was free VoIP telephony of course, as discussed in Are voice (profits) history?

Next: #2 Integration of Information and the Importance of Data and Multimedia


MPLS and the limitations of the Internet

February 21, 2007

In my post The rise and maturity of MPLS, I mentioned a number of challenges that face those organisations wishing to deliver or manage global Wide Area Networks (WANs). Whether it be a carrier, a systems integrator, an outsourcer or a global enterprise managing a global WAN, they are all faced with one particular issue that just cannot be ignored and is quite profound in its nature. It is also one of the least understood issues in the telecommunications and Information and Communication Technology (ICT) industries today. This issue is managing end-to-end service performance where the service is being delivered over multiple carrier networks, as exemplified in WANs.

The impact of this is extremely wide, affecting closed VoIP VPNs, Internet VPNs, layer-2 based VPNs and layer-3 IP-VPNs. This post will focus principally on Internet VPNs and MPLS-based layer-3 IP-VPNs although the concepts discussed are just as applicable to layer-2 services such as Layer 2 Tunnelling Protocol (L2TP).

The downside of the Internet

It is strange to talk about such issues in this day and age, when the Internet is all-pervasive in homes and businesses around the world. When we think of the Internet we often think of it as a single ‘cloud’, as shown on the right, with a web site on one side of the cloud and users accessing it from the other. That this is even possible is testament to the founders of the Internet and the resilience of the IP routing algorithms (an excellent book that pragmatically goes through the origins of the Internet is Where Wizards Stay Up Late – well worth a read).

However, in reality the Internet is unable to deliver many types of service that individuals and enterprises need at an acceptable level of performance. Why should this be so?

At an abstract level, this perception of the Internet as a single cloud is correct, but at the network level the reality is somewhat different.

As the name implies – inter and net – the Internet is built from tens of thousands of individual networks known as Autonomous Systems (ASes) connected together in a hierarchy. If you go to An Atlas of Cyberspace or The Internet Mapping Project you can see this drawn in quite an artistic way. The hierarchy is made up of major tier-1 carriers such as Level3, who provide the inter-continental backbones, connecting to regional or country carriers such as BT who, in turn, connect to small local ISPs. Consumers and enterprises can connect to the cloud at any level of the hierarchy, depending on their scale and how deep their pockets are.

Each carrier uses standard IP routing protocols such as Open Shortest Path First (OSPF) internally within their ‘domain’ and the Border Gateway Protocol (BGP) to inter-connect domains, thus creating a highly resilient network. In fact, providers of geographic components of the Internet come and go on a frequent basis with little disruption to the Internet as a whole (of course, this is a pain to us as individuals if we happen to be one of their customers!).

So what could possibly be wrong with this? It comes down to a question of predictable end-to-end performance, or rather the lack of it. Think of one of the red dots in the picture above as your local broadband DSL provider (e.g. ZEN), connecting to your local incumbent carrier – light blue diamond – who is providing the copper connection to your house or fibre into your business premises (e.g. BT), connecting via one of the global carriers to the USA (e.g. Level3) – large blue dots. In practice, you can end up transiting 60 to 80 separate routers and 40 different networks going to and from the web site you are accessing. This is shown below in the path from my PC to www.cisco.com on the West Coast of the USA using Ping Plotter.
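You can reproduce this yourself without Ping Plotter. Below is a minimal traceroute-style sketch in Python: it sends UDP probes with an increasing TTL and listens for the ICMP ‘time exceeded’ replies that reveal each router on the path. It assumes a Unix-like host, needs root privileges for the raw ICMP socket, and the port number is just the conventional traceroute default.

```python
import socket

def traceroute(dest_name: str, max_hops: int = 30,
               port: int = 33434, timeout: float = 2.0) -> None:
    dest_addr = socket.gethostbyname(dest_name)
    for ttl in range(1, max_hops + 1):
        # Raw socket to catch the ICMP "time exceeded" reply (needs root)
        recv_sock = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                                  socket.getprotobyname("icmp"))
        # Plain UDP socket to send the probe with a limited TTL
        send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        recv_sock.settimeout(timeout)
        recv_sock.bind(("", port))
        send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        send_sock.sendto(b"", (dest_addr, port))
        try:
            _, addr = recv_sock.recvfrom(512)  # the router that dropped the probe
            hop = addr[0]
        except socket.timeout:
            hop = "*"
        finally:
            send_sock.close()
            recv_sock.close()
        print(f"{ttl:2d}  {hop}")
        if hop == dest_addr:
            break

# traceroute("www.cisco.com")  # uncomment and run as root
```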

Getting back to technology, it may be that every one of these carriers has deployed MPLS within their own so-called private network (a bit of a misnomer, as it carries public traffic) to carry Internet traffic to and from their customers’ houses or business premises. This enables them to better manage Quality of Service while that traffic is on their own network – once packets leave their network, winding their way to and from the web site server, they have no control over them at all. On the Internet, although there are supervisory bodies looking after such things as standards (IETF) or domain registration (ICANN), there is no one in control of end-to-end performance.

If a particular path becomes congested for any reason – under-powered routers being used because a carrier is cash-strapped, insufficient bandwidth to support the number of customers signed up after a successful advertising campaign, or any number of other causes – then packets get put into queues and you can experience unpredictable performance, high latency, or even a dropped connection when the link times out.
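The effect of rising utilisation on queueing delay is dramatic and easy to illustrate. As a rough sketch (assuming the textbook M/M/1 queue, which real router queues only approximate), the mean time through a hop grows without bound as the link approaches full utilisation:

```python
# M/M/1 queue: mean time through the system T = S / (1 - rho),
# where S is the packet service time and rho the link utilisation.
def mm1_delay_ms(service_time_ms: float, utilisation: float) -> float:
    if not 0.0 <= utilisation < 1.0:
        raise ValueError("utilisation must be in [0, 1)")
    return service_time_ms / (1.0 - utilisation)

# Assume 1 ms to serialise and forward a packet at each hop
for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"utilisation {rho:.0%}: {mm1_delay_ms(1.0, rho):6.1f} ms per hop")
```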

Business use of the Internet

Many companies use IPsec-based Internet VPNs to inter-connect their office sites quite effectively, as they represent a low-cost solution. However, for larger enterprises in most situations, they provide too unpredictable and unreliable a service for use as WAN replacements.

Unpredictable performance may be acceptable for consumer browsing of the Internet and for what are known as ‘store and forward’ services such as email, but it is a real killer for real-time and other services that must have guaranteed and predictable end-to-end performance to function effectively. Here are some examples of real-time services:

Real time interactive services: Many of us have experienced the break-up of a Voice over IP call when using services such as Skype on the Internet. One minute the conversation is going well, and the next you experience weird distortion and ringing that makes the person you are talking to unintelligible. This is due to packet loss and the VoIP software trying to interpolate the gaps left by the lost packets. Would you be prepared to accept this if you were having a critical sales discussion with a potential customer or investor? Of course not. Would you ever be prepared to accept this on a fixed landline? No. If you use the Public Switched Telephone Network, there are quality standards in place to prevent this.

Ironically, we DO seem to be prepared to accept this on mobile or cell phones, where quality can sometimes be abysmal, because we can use the phone anywhere. More on this in a future post.

One other aspect that affects intelligibility and the free conversational interplay of voice calls is delay. Once delays get above about 180 ms they become very noticeable and start to make conversations awkward. Combine this with packet loss and voice conversations become very difficult.
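Where does the delay come from? Here is a back-of-the-envelope budget for a transatlantic VoIP call; every figure is an illustrative assumption, not a measurement, but it shows how quickly the components eat into the ~180 ms limit:

```python
# Illustrative one-way delay budget for a London to New York VoIP call.
# All figures are assumptions for the sketch, not measurements.
budget_ms = {
    "codec packetisation (20 ms voice frames)": 20,
    "receiver jitter buffer": 40,
    "propagation (~5,600 km of fibre at ~2/3 the speed of light)": 28,
    "router queueing and serialisation along the path": 30,
}
for component, ms in budget_ms.items():
    print(f"{component}: {ms} ms")
print(f"total one-way delay: {sum(budget_ms.values())} ms "
      "(conversations get awkward above ~180 ms)")
```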

Another service that faces the same issue, but in a compounded way, is video conferencing where both voice and video are disrupted. Poor Internet performance killed many video conferencing start-ups in the late 90s.

Streamed but non-interruptible services: The prime example of this is video streaming on the Internet, where loss of packets can be catastrophic – delay is of less importance. This may be acceptable when looking at short snippets, as exemplified by the content of YouTube, but if you want to immerse yourself in a two-hour psychological thriller and ‘suspend disbelief’, then packet loss or drop-outs are a killer.

This has been raised by many people when considering the new global video service Joost recently announced by the founders of Skype. Is the quality of today’s Internet really adequate to support this type of service? I guess we will find out.

Client server, thin client and web applications: Although client/server terminology seems a little dated and is being replaced by web-based services as exemplified by salesforce.com, they all have the same high performance needs no matter where the application server resides.

If key-press commands get ‘lost’ or edited text ‘disappears’ due to packet loss, the service will soon become unusable and the company providing it will be buried under a snowstorm of complaints.

The main takeaway from the above is that the Internet cannot really be relied upon for real-time or client-server based services where predictable performance is mandatory. This is clearly the case for the majority of business services, where end-to-end predictable and guaranteed performance is an absolute need, covered by tight Service Level Agreements (SLAs) with network or service providers.

Carriers’ private IP networks

So how do carriers provide reliable IP data services to their business customers? They use MPLS to segment traffic on their networks into two strands:

(a) Label Switched Paths (LSPs) dedicated to Internet traffic from their own customers or traffic transiting their network. Carriers want to get this traffic off their network and pass the burden to another carrier as quickly as possible to reduce costs and risk. They use a routing policy called hot potato routing to ensure this happens as quickly as possible for traffic that does not originate from their customers.

(b) LSPs dedicated to providing VoIP, video and IP-VPN services with SLAs to their business customers. This traffic is kept on their network for as long as possible – this is called cold potato routing.
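The two policies are really just different objective functions for choosing an exit point from the network. A toy sketch (the exit names and costs are invented for illustration):

```python
# Each candidate exit from our network is described as
# (name, IGP cost to reach the exit, cost from the exit to the destination).
exits = [
    ("peer-london", 10, 80),   # close to us, far from the destination
    ("peer-newyork", 70, 5),   # far from us, close to the destination
]

hot_potato = min(exits, key=lambda e: e[1])   # hand traffic off as soon as possible
cold_potato = min(exits, key=lambda e: e[2])  # keep traffic on-net as long as possible
print("hot potato exits via:", hot_potato[0])
print("cold potato exits via:", cold_potato[0])
```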

In general, carriers are only able to provide IP performance guarantees for packets on their own network, i.e. only for users and business locations directly connected to the carrier’s own network. The reason for this is that carriers are not able, or generally willing, to physically interconnect their networks with other carriers to provide seamless, performance-guaranteed services that straddle multiple networks for their customers. In general, each carrier stands alone in isolation. Why should this be so in the age of the ubiquitous Internet? Therein lies the big elephant in the room! There are three main reasons:


  1. Each carrier defines Class of Service in a different way, as this is not explicitly covered in IETF standards, so it is not easy to interconnect and maintain the same level of priority without translation. (The easy way of doing this is not to translate but to simply back-to-back two routers that terminate both carriers’ IP-VPNs. Several carriers and network integrators use this approach – a minimal translation sketch follows after this list.)
  2. Each carrier has adopted an entirely different set of Operations Support Software (OSS) tools, making interconnection with other carriers to exchange performance data exceedingly challenging. OSS systems are usually a mixture of internally developed proprietary tools and bought-in 3rd party products. (This is a major issue that affects all IP services in a big way and is not seen in the PSTN world, where inter-connect standards exist as defined by the ITU.)
  3. Carriers are generally unwilling to provide SLAs on behalf of other carriers.
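To make the Class of Service translation problem in (1) concrete, here is a minimal sketch of re-marking packets at an interconnect; the class names and DSCP code points for each carrier are purely hypothetical:

```python
# Hypothetical Class of Service schemes for two carriers; the class
# names and DSCP code points are invented for this sketch.
CARRIER_A = {"realtime": 46, "business": 26, "best_effort": 0}
CARRIER_B = {"voice": 46, "priority_data": 18, "standard": 0}

# Translation table agreed bilaterally at the interconnect point
A_TO_B = {"realtime": "voice", "business": "priority_data",
          "best_effort": "standard"}

def remap_dscp(dscp_in: int) -> int:
    """Re-mark a packet arriving from carrier A with carrier B's DSCP."""
    class_a = next(c for c, d in CARRIER_A.items() if d == dscp_in)
    return CARRIER_B[A_TO_B[class_a]]

print(remap_dscp(46))  # 46 - both carriers happen to mark voice the same way
print(remap_dscp(26))  # 18 - 'business' must be re-marked to 'priority_data'
```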

Note: I would like to make it clear that this is not always the case and there are several large carriers and integrators who have proactively followed a strategy of IP-VPN interconnect with partners to better support their customers or extend the reach of their network such as Cable and Wireless, Global Crossing and Vanco (with their MPLS Matrix) to name but three.

So, if you are a small to medium UK enterprise (SME) and you are able to connect all your offices and home workers to your national incumbent carrier, then they would be prepared to provide you with an SLA. If performance dropped below that specified in the SLA, then you would be able to claim compensation from that provider.

However, if you are a multinational enterprise with sites located in many countries, you need to work with many carriers. In general, there is no way today that any of those carriers could provide you with end-to-end performance guarantees (there is no such thing as a global carrier). So what are the alternatives for managing enterprise multi-carrier VoIP services, MPLS IP-VPNs or layer-2 VPNs in 2007?

  1. As an enterprise, manage the integration of multiple carriers yourself.
  2. Go to a systems integrator, outsourcer or carrier who will act as a prime contractor by having a single SLA with you. They will back this up with multiple SLAs with each carrier providing a geographic component of the WAN.

Round up

To address IP QoS issues on their own networks, MPLS has been rolled out by the majority of carriers around the world today, and most offer SLA-based IP-VPNs to their business customers as one of the ways they can create WANs. It is generally perceived that IP-VPNs represent a lower-cost solution than legacy services such as frame relay. Of course, enterprises can avoid reliance on carriers altogether by just buying bandwidth in the form of TDM E1 circuits and managing the IP WAN themselves, as per (1) above.

There is still no universal business Internet that mirrors the public Internet, so if a company requires end-to-end performance guarantees to support its client-server database or VoIP service, it will need to manage multiple carriers itself or go to an integrator who is willing to act as a prime contractor as described above.

If you are not a network person, this may all seem quite strange and will make you wonder why the global telecommunications industry has not got its act together to better support its enterprise customers, who all require this capability and spend hundreds of billions of dollars annually. It would seem to be one of the biggest, if not the biggest, commercial opportunities in telecoms today.

One technology company that is addressing this end-to-end solution delivery issue is Nexagent and I will talk about them in a post shortly (Note: now posted as Nexagent, an enigmatic company?). Note: I should declare a personal interest in Nexagent as a co-founder in 2000.


Amazing, from 2″ to 17″ wafer sizes in 35 years!

February 9, 2007

In January my son, Steve, popped into the Intel museum in Santa Clara, California to look at the 4004 display mentioned in a previous post.

He came back with various photos some of which showed how semiconductor wafer sizes have increased over the last 30 years. This is the most interesting one.

1969 2-inch wafer containing the 1101 static random access memory (SRAM) which stored 256 bits of data.

1972 3-inch wafer containing 2102 SRAMs which stored 1024 bits of data.

1976 4-inch wafer containing 82586 LAN co-processors.

1983 6-inch wafer containing 1.2-million-transistor 486 microprocessors.

1993 8-inch wafer containing 32Mbit flash memory.

2001 12-inch wafer containing Pentium 4 microprocessors moving to 90 nanometer line widths.

Bringing the subject of wafer sizes up to date, here is a photo of an Intel engineer holding a recent 18-inch (450mm) wafer! The photo was taken by a colleague of mine at the Materials Integrity Management Symposium – sponsored by Entegris, Stanford CA June 2006.

The wafer photo on the left is a 5″ wafer from a company that I worked for in the 1980s called European Silicon Structures (ES2). They offered low-cost ASIC (Application Specific Integrated Circuit) prototypes using electron beam lithography rather than the more ubiquitous optical lithography of the time. The technique never really caught on as it was uneconomic; however, I did come across the current use of such machines at Chalmers University of Technology in Göteborg, Sweden, if I remember rightly.

If you want to catch up with all the machinations in the semiconductor world take a look at ChipGeek.


Cingular to AT&T renaming spoof video…

February 8, 2007

After posting about C&W renaming, C&W’s UK rebranding and another post on creating company names, this spoof YouTube video about the convoluted AT&T branding history will definitely make you smile.

Each new management team that arrives on the scene has its own branding ideas as part of the ‘turn-around’ strategy, so I guess we haven’t seen the end of this in this age of consolidation in the telecommunications industry!

Enjoy!


Virgin Media near a Cable and Wireless tie-up?

February 8, 2007

The newspapers are full of rumour about C&W and Virgin Media – aren’t they always?

In this case it’s: “Virgin Media is close to announcing a strategic tie-up with Cable & Wireless (C&W) to help it provide telecom and television services to homes not served by its existing cable network.”

Virgin Media, let us not forget, is the new name for NTL, which merged with Telewest…

Let us also not forget that Mercury Communications changed its name to Cable & Wireless Communications (CWC) in 1997 to bring it closer, from a brand perspective, to its parent company C&W. Along with the CWC rebrand, Dick Brown, C&W’s CEO at the time, acquired three cable companies.

In 1999 CWC sold its consumer assets to NTL. It was these cable businesses, along with Mercury’s consumer telephony business, that were sold. The reason given at the time was that CWC wanted to focus on UK business services.

After CWC sold its consumer business, it was merged back into Cable and Wireless and renamed C&W. You can see a logo history of C&W here.

Then came the Bulldog and Energis acquisitions and the sale of the customer bit of Bulldog to Pipex in 2006 to concentrate on UK DSL wholesale deals (C&W UK kept the co-location assets).

So somewhere in this there is, perhaps, a bit of irony that C&W UK might sign a wholesale agreement to provide infrastructure to NTL – whoops, Virgin Media – who want to outsource, if that is the right term, their DSL infrastructure. Will this also include the original Mercury telephony network sold to NTL, I wonder?

No. Maybe, on reflection after a cup of coffee, I’ve got it all wrong and I’m just confused!