The new network dogma: Has the wheel turned full circle?

August 26, 2008

“An authoritative principle, belief, or statement of ideas or opinion, especially one considered to be absolutely true” – a dictionary definition of ‘dogma’.

When innovators proposed Internet Protocol (IP) as the universal protocol for carriers in the mid 90s, they met with furious resistance from the traditional telecommunications community. This post asks whether the wheel has now turned full circle with new innovative approaches often receiving the same reception.

Like many others, I have found the telecommunications industry so very interesting and stimulating over the last decade. There have been so many profound changes that it is hard to identify with the industry that existed prior to the new religion of IP that took hold in the late 90s. In those balmy days the industry was commercially and technically controlled by the robust world standards of the Public Switched Telephone Network (PSTN).

In some ways it was a gentleman’s industry where incumbent monopoly carriers ruled their own lands and had detailed inter-working agreements with other telcos to share the end-to-end revenue generated by each and every telephone call. To enable these agreements to work, the International Telecommunications Union (ITU) in Geneva spent decades defining the technical and commercial standards that greased the wheels. Life was relatively simple as there was only one standards body and one set of rules to abide by. The ITU is far from dead of course and the organisation went on to develop the highly successful GSM standard for mobile telephony and is still very active defining standards to this very day.

In those pre-IP days, the industry was believed to be at its zenith, with high revenues, similarly high profits and every company secure in its place in the universe. Technology had not significantly moved on for decades (though this does an injustice to the development of ATM and SDH/SONET) and there was quite a degree of complacency driven by a monopolistic mentality. Moreover, it was very much a closed industry in that individuals chose to spend their entire careers in telecommunications from a young age, with few outsiders migrating into it. Certainly few individuals with an information technology background joined telcos as there was a significant mismatch in technology, skills and needs. It was not until the mid 90s, when the industry started to use computers by adopting Advanced Intelligent Networks (AIN) and Operations Support Systems (OSS), that computer-literate IT engineers and programmers saw new job opportunities and jumped aboard.

In many ways the industry was quite insular and had its own strong world view of where it was going. As someone once said, “the industry drank its own bathwater” and often chose to blinker out opposing views and changing reality. It is relatively easy to see how this came about with hindsight. How could an industry that was so insular embrace disruptive technology innovation with open arms? The management dogma was all about “We understand our business, our standards and our relationships. We are in complete control and things won’t change.”

Strong dogma dominated and was never more on show than in the debate about the adoption of Asynchronous Transfer Mode (ATM) standards that were needed to upgrade the industry’s switching networks. If ATM had been developed a decade earlier there would never have been an issue, but unfortunately the timing could not have been worse as it coincided with the major uptake of IP in enterprises. When I first wrote about ATM back in 1993, IP was pretty much an unknown protocol in Europe (The demise of ATM). ATM and the telco industry lost that battle and IP has never looked back.

In reality it was not so much a battle as all-out war. It was the telecommunications industry eyeball-to-eyeball with the IT industry. The old “we know best” dogma did not triumph, and the abrupt change in industry direction led to severe trauma in all sections of the industry. Many old-style telecommunications equipment vendors, who had focused on ATM with gusto, failed to adapt, with many either writing off billions of dollars or being sold at knock-down valuations. Of course, many companies made a killing. Inside telcos, commercial and engineering management who had spent decades at the top of their profession found themselves floundering, and over a fifteen-year period a significant proportion of that generation of management ended up leaving the industry.

The IP bandwagon had started rolling and its unstoppable momentum has relentlessly driven the industry through to the current time. Interestingly, as I have covered in previous posts such as MPLS and the limitations of the Internet, not all the pre-IP technologies were dumped. This was particularly so with fundamental transmission-related network technologies such as SDH / SONET (SDH, the great survivor). These technologies were 100% defined within the telecommunications world and provided capabilities that were wholly lacking in IP. IP may have been perfect for enterprises, but many capabilities were missing that were required if it was to be used as the bedrock protocol in the telecommunications industry. Such things as:

  • Unlike telecommunications protocols, IP networks were proudly non-deterministic: packets would always find their way to the required destination even if the desired path failed. In the IP world this was seen as a positive feature. Undoubtedly it was, but it also meant that it was not possible to predict the time it would take for a packet to transit a network. Even worse, a contiguous stream of packets could arrive at a destination via different paths. This was acceptable for e-mail traffic but a killer for real-time services like voice.
  • Telecommunications networks required high reliability and resilience, so that in the event of any failure, automatic switchover to an alternative route would occur within a few milliseconds and even live telephone calls were not interrupted. In this situation IP would lackadaisically find another path to take and packets would eventually find their way to their destination (well, maybe that is a bit of an overstatement, but it does provide a good image of how IP worked!).
  • Real time services require a very high Quality of Service (QoS) in that latency, delay, jitter and drop-out of packets need to be kept to an absolute minimum. This was, and is, a mandatory requirement for delivery of demanding voice services. IP in those days did not have the control signalling mechanisms to ensure this.
  • If PSTN voice networks had one dominant characteristic, it was reliability. Telephone networks just could not go down. They were well engineered and extensively monitored, so if any fault occurred, comprehensive network management systems flagged it very quickly to enable operational staff to correct it or provide a workaround. IP networks just didn’t have this level of capability in their operational management systems.
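
The first of these gaps can be illustrated with a small sketch (mine, not from any standard; the 20 ms cadence and the per-packet path delays are made-up numbers). Packets sent at a fixed interval but forwarded non-deterministically arrive with variable spacing, and can even arrive out of order:

```python
# Illustrative sketch: why non-deterministic forwarding hurts real-time voice.
# Packets sent at a fixed 20 ms cadence take paths with differing transit
# delays, so they arrive unevenly spaced and possibly out of order.

def arrival_times(send_interval_ms, path_delays_ms):
    """Arrival time of each packet = send time + that packet's path delay."""
    return [i * send_interval_ms + d for i, d in enumerate(path_delays_ms)]

# Packets 0-4 sent every 20 ms; packet 2 detours via a slower path (+45 ms).
delays = [10, 10, 45, 10, 10]
arrivals = arrival_times(20, delays)

# Packet 2 (sent at t=40, arrives t=85) lands after packet 3 (arrives t=70):
reordered = any(arrivals[i] > arrivals[i + 1] for i in range(len(arrivals) - 1))
print(arrivals)   # [10, 30, 85, 70, 90]
print(reordered)  # True - tolerable for e-mail, a killer for voice
```

A voice decoder playing out at a fixed rate would have to either discard the late packet or buffer everything, adding delay; e-mail simply reassembles at leisure.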

These gaps in capabilities in the new IP-for-everything vision needed to be corrected pretty quickly, so a plethora of standards development was initiated through the IETF that remains in full flow to this day. I can still remember my amazement in the mid 1990s when I came across a company that had come up with the truly innovative idea of combining the deterministic ability of ATM with an IP router, bringing together the best of the old with the new, still under-powered, IP protocol (The phenomenon of Ipsilon). This was followed by Cisco’s and the IETF’s development of MPLS and all its progeny protocols (The rise and maturity of MPLS and GMPLS and common control).

Let’s be clear, without these enhancements to basic IP, all the benefits the telecommunications world gained from focusing on IP would not have been realised. The industry should be breathing a huge sigh of relief, as many of the required enhancements were not developed until after the wholesale industry adoption of IP. If IP itself had not been sufficiently adaptable, it could be conjectured that there would have been one of the biggest industry dead ends imaginable, and all the ‘Bellheads’ would have been yelling “I told you so!”.

Is this the end of story?

So, that’s it then, it’s all done. Every carrier of every description, incumbent, alternate, global, regional, mobile, and virtual has adopted IP / MPLS and everything is hunky-dory. We have the perfect set of network standards and everything works fine. The industry has a clear strategy to transport all services over IP and the Next Generation Network (NGN) architecture will last for several decades.

This may very well turn out to be the case. Certainly IP / MPLS will be the mainstream technology set for a long time to come, and I still believe that this was one of the best decisions the industry has taken in recent times. However, I cannot help asking myself whether we have gone back to many of the same closed industry attitudes that prevailed before the all-pervasive adoption of IP.

It seems to me that it is now not the ‘done thing’ to propose alternative network approaches or enhancements that do not exactly coincide with the IP way of doing things, for fear of being ‘flamed’. For me, the key consideration that should drive network architecture is simplicity, and nobody could use the term ‘simple’ when describing today’s IP carrier networks. Simplicity means less opportunity for service failure, and simplicity means lower-cost operating regimes. In these days of ruthless management cost-cutting, any innovation that promises to simplify a network and thus reduce cost must have merit and should justify extensive evaluation – even if your favourite vendor disagrees. To put it simply, simplicity cannot come from deploying more and more complex protocols that micro-manage a network’s traffic.

Interestingly, in spite of the complete domination of public network cores by MPLS, there is still one major area where the use of MPLS is being actively questioned – edge and/or metro networks. There is currently quite a vibrant discussion taking place concerning the over-complexity of MPLS for use in the metro and the possible benefits of using IP over Ethernet (Ethernet goes carrier grade with PBT / PBB-TE?). More on this later.

We should also not forget that telcos have never dropped other aspects of the pre-IP world. For example, the vast majority of telcos who own physical infrastructure still use that leading denizen of the pre-IP world, Synchronous Digital Hierarchy (SDH or SONET) (SDH, the great survivor). This friendly dinosaur of a technology still holds sway at the layer-1 network level even though most signalling and connectivity technologies that sit upon it have been brushed aside by the IP family of standards. SDH’s partner in crime, ATM, was absorbed by IP through the creation of standards that replicated its capabilities in MPLS (deterministic routing) and MPLS-TE (fast rerouting). The absorption of SDH into IP was not such a great success as many of the capabilities of SDH could not effectively be replaced by layer-3 capabilities (though not for the want of trying!).

SDH is based on time division multiplexing (TDM), the pre-IP method of sharing a defined amount of bandwidth between a number of services running over an individual wavelength on a fibre-optic cable. The real benefit of this multiplexing methodology is that it has proved to be ultra-reliable and offers the very highest level of Quality of Service available. SDH also has an in-built ability par excellence to provide restoration of an inter-city optical cable in the case of major failure. One of SDH’s limitations, however, is that it only operates at a very coarse granularity of bandwidth, so the smaller streams of traffic more appropriate to the needs of individuals and enterprises cannot be managed through SDH alone. This capability was provided by ATM and is now provided by MPLS.
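To make the granularity point concrete, here is a rough back-of-the-envelope sketch using the standard SDH rates (the rates themselves are standard figures; the snippet is purely illustrative):

```python
# SDH's coarse bandwidth granularity: line rates only scale in big fixed
# multiplexing steps, which is why sub-rate traffic needed ATM (and later
# MPLS) layered on top.

STM1_MBPS = 155.52          # basic SDH line rate (STM-1)

def stm_rate_mbps(n):
    """STM-N line rate: SDH multiplexes in fixed 155.52 Mbit/s multiples."""
    assert n in (1, 4, 16, 64, 256), "SDH defines only these STM levels"
    return n * STM1_MBPS

print(stm_rate_mbps(16))            # ~2488.32 Mbit/s - an STM-16, ~2.5 Gbit/s

# The smallest commonly used container (VC-12) still carries a whole
# 2.048 Mbit/s E1 - far too coarse for a single 64 kbit/s voice channel
# or a bursty enterprise stream, hence the need for a finer-grained layer.
print(round(63 * 2.048, 3))         # 129.024 - Mbit/s of E1 payload in one STM-1
```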

Would a moment of reflection be beneficial?

The heresy that keeps popping up in my head when I think about IP and all of its progeny protocols is that the telecommunications industry has spent fifteen years developing a highly complex and inter-dependent set of technical standards that were only needed to replace what was a ‘simple’ standard that did its job effectively at a lower layer in the network. Indeed, pre-MPLS, many of the global ISPs used ATM to provide deterministic management of their global IP networks.

Has the industry now created a highly over-engineered and over-complex reference architecture? Has a whole new generation of staff been so marinaded for a decade in deep IP knowledge, training and experience that it is hard for an individual to question technical strategy? Has the wheel turned full circle?

In my post Traffic Engineering, capacity planning and MPLS-TE, I wrote about some of the challenges facing the industry and the carriers’ need to undertake fine-grain traffic engineering to ensure that individual service streams are provided with appropriate QoS. As consumers start to use the Internet more and more for real-time isochronous services such as VoIP and video streaming, there is a major architectural concern about how this should be implemented. Do carriers really want to continue to deploy an ever increasing number of protocols that add to the complexity of live networks and hence increase risk?

It is surprising just how many carriers use only very light traffic engineering and simply rely on over-provisioning of bandwidth at a wavelength level. This may be considered expensive (but is it, if they own the infrastructure?) and architects may worry about how long they will be able to continue to use this straightforward approach, but there does seem to be a real reluctance to introduce fine-grained traffic management. I have been told several times that this is because they do not trust some of the new protocols and it would be too risky to implement them. It is common industry knowledge that a router’s operating system contains many features that are never enabled, and this is as true today as it was in the 90s.

It is clear that management of fine-grain traffic QoS is one of the top issues to be faced in coming years. However, I believe that many carriers have not even adopted the simplest of traffic engineering standards in the form of MPLS-TE that starts to address the issue. Is this because many see that adopting these standards could create a significant risk to their business or is it simply fear, uncertainty and doubt (FUD)?

Are these some of the questions we in the carrier world should be asking ourselves?

Have management goals moved on since the creation of the early MPLS standards?

When first created, MPLS was clearly focused on providing deterministic forwarding at layer-3 so that the use of ATM switching could be dropped to reduce costs. This was clearly a very successful strategy, as MPLS now dominates the core of public networks. This idea was very much in line with David Isenberg’s ideas articulated in The Rise of the Stupid Network in 1997, which we were all so familiar with at the time. However, ambitions have moved on, as they do, and the IP vision was considerably expanded. This new ambition was to create a universal network infrastructure that could provide any service using any protocol that any customer was likely to need or buy. This was called an NGN.

However, is that still a good ambition to have? The focus these days is on aggressive cost reduction, and it makes sense to ask whether an NGN approach could ever actually reduce costs compared to what it would replace. For example, there are many carriers today who wish to focus exclusively on delivering layer-2 services. For these carriers, does it make sense to deliver these services across a layer-3-based network? Maybe not.

Are networks so ‘on the edge’ that they have to be managed every second of the day?

PSTN networks that pre-date IP were fundamentally designed to be reliable and resilient and pretty much ran without intervention once up and running. They could be trusted and were predictable in performance unless a major outside event occurred such as a spade cutting a cable.

IP networks, whether enterprise or carrier, have always had a well-earned image of instability, going awry if left alone for a few hours. This has much to do with the nature of IP and the challenge of managing unpredicted traffic bursts. Even today, there are numerous occasions when a global IP network goes down due to an unpredicted event creating knock-on consequences. A workable analogy is that operating an IP network is similar to a parent having to control an errant child suffering from Attention Deficit Disorder.

Much of this has probably been brought about by the unpredictable nature of routing protocols selecting forwarding paths. These protocols have been enhanced over the years with so many bells and whistles that a carrier’s perception of the best choice of data path across the network will probably not be the same as the one selected by the router itself.

Do operational / planning architecture engineers often just want to “leave things as they are” because it’s working? Better the devil you know?

When a large IP network is running, there is a strong tendency to want to leave things well alone. Is this because there are so many inter-dependent functions in operation at any one time that it’s beyond an individual to understand them all? Is it because when things go wrong it takes such an effort to restore service, and it’s often impossible to isolate the root cause if it is not down to simple hardware failure?

Is risk minimisation actually the biggest deciding factor when deciding what technologies to adopt?

Most operational engineers running a live network want to keep things as simple as possible. They have to, because their job and sleep are on the line every day. Achieving this often means resisting the use of untried protocols (such as MPLS-TE) and replacing fine-grained traffic engineering with the much simpler strategy of over-provisioning the network (telcos see it as a no-brainer because they already own the fibre in the ground and it is relatively easy to light an additional dark wavelength).

At the end of the day, minimising commercial risk is right at the top of everyone’s agenda, though it usually sits just below operational cost reduction.

Compared to the old TDM networks they replace, are IP-based public networks getting too complex to manage when considering the ever increasing need for fine-grain service management at the edge of the network?

The spider’s web of protocols that need to perform flawlessly in unison to provide a good user experience is undoubtedly becoming more and more complex as time goes by. There is little effort to simplify things, and there is a view that it is all becoming over-engineered. Even if a new standard has been ratified and is recommended for use, this does not mean it will be implemented in live networks on a wide scale. The protocol that heads the list of under-exploited protocols is IPv6 (IPv6 to the rescue – eh?).

There is significant ongoing standards development activity in the space of path provisioning automation (Path Computation Element (PCE): IETF’s hidden jewel) and of true multilayer network management. This would include seamless control of layer-3 (IP), layer-2.5 (MPLS) and layer-1 networks (SDH) (GMPLS and common control). The big question is (at the risk of being called a Luddite): would a carrier in the near future risk deploying such complexity that it could bring down all layers of a network at once? Would the benefits outweigh the risk?

Are IP-based public networks more costly to run than legacy data networks such as Frame Relay?

This is a question I would really like to get an objective answer to as my current views are mostly based on empirical and anecdotal data. If anyone has access to definitive research, please contact me! I suspect, and I am comfortable with the opinion until proved wrong, that this is the case and could be due to the following factors:

  • There need to be more operations and support staff permanently on duty than with the old TDM voice systems, leading to higher operational costs.
  • Operational staff require a higher level of technical skill and training caused by the complex nature of IP. CCIEs are expensive!
  • Equipment is expensive as the market is dominated by only a few suppliers and there are often proprietary aspects of new protocols that will only run on a particular vendor’s equipment thus creating effective supplier lock-in. The router clone market is alive and healthy!

It should be remembered that the most important reason given to justify the convergence on IP was the cost savings resulting from collapsing layers. This has not really taken place, except for the absorption of ATM into MPLS. Today, each layer is still planned, managed and monitored by separate systems. The principal goal of a Next Generation Network (NGN) architecture is still to achieve this magic result of reduced costs. Most carriers are still sitting on the fence, waiting for evidence of this.

Is there a degradation in QoS using IP networks?

This has always been a thorny question to answer, and a ‘Google’ to find the answer does not seem to work. Of course, any answer lies in the eye of the beholder as there is no clear definition of what the term QoS encompasses. In general, the term can be used at two different levels in relation to a network’s performance: micro-QoS and macro-QoS.

Micro-QoS is concerned with individual packet issues such as order of reception of packets, number of missing packets, latency, delay and jitter. An excessive amount of any of these will severely degrade a real-time service such as VoIP or video streaming. Macro-QoS is more concerned with network wide issues such as network reliability and resilience and other areas that could affect overall performance and operational efficiency of a network.
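As a concrete example of a micro-QoS metric, the interarrival jitter estimator from RFC 3550 (the RTP specification) smooths out the variation in packet transit times. The sketch below is a simplified, illustrative implementation with made-up timestamps:

```python
# RFC 3550-style interarrival jitter: for each consecutive packet pair,
# J += (|D| - J) / 16, where D is the change in transit time.
# Inputs are matching lists of send and receive timestamps in milliseconds.

def interarrival_jitter(sent_ms, recv_ms):
    """Smoothed estimate of transit-time variation across a packet stream."""
    j = 0.0
    transit = [r - s for s, r in zip(sent_ms, recv_ms)]
    for prev, cur in zip(transit, transit[1:]):
        d = abs(cur - prev)
        j += (d - j) / 16.0
    return j

# A steady 10 ms transit time yields zero jitter...
assert interarrival_jitter([0, 20, 40], [10, 30, 50]) == 0.0
# ...while a single 40 ms delay spike registers immediately.
print(interarrival_jitter([0, 20, 40, 60], [10, 30, 90, 70]))  # 4.84375
```

The 1/16 gain makes the estimate robust to one-off blips while still tracking sustained degradation, which is exactly the behaviour a VoIP monitor wants.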

My perspective is that on a correctly managed IP / MPLS network (with all the hierarchy and management that requires), micro-QoS degradation is minimal and acceptable, and certainly no worse than IP over SDH. Indeed, many carriers deliver traditional private wire services such as E1 or T1 connectivity over an MPLS network using pseudowire tunnelling, with LAN services delivered via Virtual Private LAN Service (VPLS). However, this does significantly raise the bar in respect of the level of IP network design and network management quality required.

The important issue is the possible degradation at the macro-QoS level, where I am comfortable with the view that an IP / MPLS network will always carry a statistically higher risk of faults or problems, due to its complexity, than a simpler IP over SDH system. There is a certain irony in that the macro-QoS performance of a network could be further degraded when additional protocols are deployed to improve micro-QoS performance.

Is there still opportunity for simplification?

In an MPLS dominated world, there is still significant opportunity for simplification and cost reduction.

Carrier Ethernet deployment

I have written several posts (Ethernet goes carrier grade with PBT / PBB-TE?) about carrier Ethernet standards and the benefits their adoption might bring to public networks – in particular, the promise of simplification. To a great extent this interesting technology is a prime example of where a new (well, newish) approach that actually does make quite a lot of sense comes up against the new MPLS-for-everything-and-everywhere dogma. It is not just a question of convincing service providers of the benefit but also of overcoming the almost overwhelming pressure brought on carrier management from MPLS vendors, who have clear vested interests in which technologies their customers choose to use. This often one-sided debate definitely harks back to the early 90s no-way-IP culture. Religion is back with a vengeance.

Metro networks

Let me quote Light Reading from September 2007: “What once looked like a walkover in the metro network sector has turned into a pitched battle – to the surprise, but not the delight, of those who saw Multiprotocol Label Switching (MPLS) as the clear and obvious choice for metro transport.” MPLS has encountered several road bumps on its way to domination, and it should always be appropriate to question whether any particular technology adoption is appropriate.

To quote the column further: “The carrier Ethernet camp contends that MPLS is too complex, too expensive, and too clunky for the metro environment.” Whether ‘thin MPLS’ (PBB-TE / PBT or will it be T-MPLS?) will hold off the innovative PBB intruder remains to be seen. At the end of the day, the technology that provides simplicity and reduced operational costs will win the day.

Think the unthinkable

As discussed above, the original ambition of MPLS has ballooned over the years. It originally solved the challenge of providing a deterministic and flexible forwarding methodology for layer-3 IP packets and replacing ATM, and it achieved this objective exceptionally well. These days, however, it always seems to be assumed that some combination of Ethernet (PBB-TE) and/or MPLS-TE, and maybe even GMPLS, is the definitive, but highly complex, answer to creating that optimum, highly integrated NGN architecture that can be used to provide any service any customer might require.

Maybe it is worth considering a complementary approach that is highly focused on removing complexity. There is an interesting new field of innovation proposing that path-forwarding ‘intelligence’ and path bandwidth management be moved from layer-3, layer-2.5 and layer-2 back into layer-1, where it rightly belongs. By adding additional capability to SDH, it is possible to reduce complexity in the layers above. In particular deployment scenarios this could have a number of major benefits, most of which result in significantly lower costs.

This raises an interesting point to ponder. While revenues still derive from traditional telecom-oriented voice services, the services and applications that are really beginning to dominate and consume most bandwidth are real-time interactive and streaming services such as IPTV, TV replays, video shorts, video conferencing, tele-presence, live event broadcasting, tele-medicine, remote monitoring etc. It could be argued that all these point-to-point and broadcast services could be delivered with less cost and complexity using advanced SDH capabilities linked with Ethernet or IP / MPLS access. Is it worth thinking about bringing SDH back to the strategic forefront of the NGN, where it could deliver commercial and technical benefits?

To quote a colleague: “The datacom protocol stack of IP-over-Ethernet was designed for asynchronous file transfer, and Ethernet as a local area network packet-switching protocol, and these traditional datacom protocols do a fine job for those applications (i.e. for services that can tolerate uncertain delays, jitter and throughput, and/or limited-scope campus/LAN environments). IP-over-Ethernet was then assumed to become the basis protocol stack for NGNs in the early 2000s, due to the popularity of that basic datacom protocol stack for delivering the at-that-time prevailing services carried over Internet, which were mainly still file-transfer based non-real-time applications.”

SDH has really moved on since the days when it was only seen as a dumb transport layer. At least one company, Optimum Communications Services, offers an innovative vision whereby, instead of inter-node paths being static as is the case with the other NGN technologies discussed in this post, the network is able to dynamically determine the required inter-node bandwidth based on a fast real-time assessment of traffic demands between nodes.


So has the wheel turned full circle?

As most carriers’ architectural and commercial strategies are wholly focused on IP with the Yellow Brick Road ending with the sun rising over a fully converged NGN, how much real willingness is there to listen to possible alternate or complementary innovative ideas?

In many ways the telecommunications industry could be considered to have returned to the closed shutter mentality that dominated before IP took over in the late 1990s – I hope that this is not the case. There is no doubt that choosing to deploy IP / MPLS was a good decision, but a decision to deploy some of the derivative QoS and TE protocols is far from clear cut.

We need to keep our eyes and minds open, as innovation is alive and well and most often arises in small companies who are free to think the unthinkable. They might not always be right, but they may not be wrong either. Just cast your mind back to the high level of resistance encountered by IP in the 90s, and let’s not repeat that mistake again. There is still much scope for innovation within the IP-based carrier network world, and I suspect it has everything to do with simplifying networks, not complicating them further.

Addendum #1: Optimum Communications Services – finally a way out of the zero-sum game?


Hammerhead Systems: Enabling PBB-TE – MPLS seamless services

November 16, 2007

I haven’t quite decided whether there is a true religious war between the now ubiquitous MPLS and the more recent PBB-TE (Provider Backbone Bridging Traffic Engineering) Ethernet technologies. It certainly seems that way sometimes! However, everything has its time and place and that applies to network technologies as well.

On one hand, MPLS is now the de rigueur technology for use in the core of the world’s IP-based ‘converged’ networks. MPLS enables IP to be tamed to a degree by providing deterministic (i.e. predictable) routing and QoS. Deterministic routing forces traffic over a predetermined path so that all packets on that path will experience the same delay. This is an absolute necessity for real-time traffic such as Voice-over-IP, video conferencing and IP-TV services. MPLS also enables traffic to be categorised so that real-time services take preference over non-critical traffic such as email at busy times on the network. I’ve covered much of this in previous posts such as The rise and maturity of MPLS.
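The traffic categorisation idea can be sketched as a toy strict-priority scheduler (purely illustrative, not an MPLS implementation; the class names and priority values are my own invention):

```python
# Toy strict-priority scheduler: at a congested link, packets classed as
# real-time are always dequeued before best-effort traffic such as email.
import heapq

PRIORITY = {"voice": 0, "video": 0, "email": 2, "bulk": 2}  # hypothetical classes

def drain(queue_order):
    """Dequeue packets strictly by class priority, FIFO within a class."""
    heap = [(PRIORITY[cls], seq, cls) for seq, cls in enumerate(queue_order)]
    heapq.heapify(heap)
    return [cls for _, _, cls in (heapq.heappop(heap) for _ in range(len(heap)))]

print(drain(["email", "voice", "bulk", "video"]))
# ['voice', 'video', 'email', 'bulk'] - real-time jumps the queue
```

The arrival sequence number as a tie-breaker preserves ordering within a class, mirroring how a per-class FIFO behaves behind a priority scheduler.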

If your network strategy guys come from the ‘purist’ MPLS camp then it is clear that they will see MPLS deployed both in the core and in the metro access network. However, MPLS is now often seen as an expensive and complex technology to maintain in real environments, and this has prevented carriers from rolling out MPLS to the edge of their networks, often known as local metro networks. A carrier usually has only one core network but often has many local access or metro networks which directly connect to their customers’ buildings and private LANs. If MPLS were deployed throughout this infrastructure, costs could skyrocket.

A consequence of this is that the industry has been looking for a lower-cost alternative as the technology of preference for use in these access networks. As the transport of preference for enterprises is Ethernet, it comes as no surprise that there has been tremendous interest in using Ethernet in carriers’ access networks, as it could prove to be a lower-cost solution than MPLS. It has been conjectured that deploying PBB-TE rather than MPLS could save in excess of 40% of costs. This will be the subject of a future post.

This vision has driven a tremendous amount of standards activity that has resulted in the PBB-TE standard, whereby inappropriate features have been stripped out of Ethernet to create a transport technology that can be used in carriers’ access networks. I’ve previously written about these initiatives in my posts – Ethernet goes carrier grade with PBT / PBB-TE? and PBB-TE / PBT or will it be T-MPLS?.

If the above scenario is to pan out in practice, then carriers must be able to seamlessly and transparently deploy and manage services across both technologies, and this has been a real, if not impossible, challenge to date. This has much to do with the immaturity of PBB-TE technology and its lack of compatibility with MPLS. For example, MPLS uses pseudowire tunnels for the transport of services across a core network, while PBB-TE uses E-LINE, which has been defined by the Metro Ethernet Forum (MEF).

Earlier this week I listened to a most interesting webinar from Hammerhead Systems, a US company that has been focusing on this issue, and I would like to thank them for allowing me to use some of their graphics in this post.

It was interesting to hear a clearly articulated vision for a future network strategy based on a technology-agnostic view. The term ‘technology agnostic’ in this case means a future of hybrid networks in which MPLS and PBB-TE are able to inter-work. Of course, I’m sure many would see this as a first step to an MPLS-free future, however that could be seen as a bit extreme and I’m sure Hammerhead would never articulate this view!

One of the weaknesses of PBB-TE is the lack of a workable control plane, so Hammerhead have partnered with Soapstone in this announcement. Interestingly, Soapstone is a division of a company that I used to know quite well, Avici.

Avici came to fame with a terabit router in the late 1990s, but with the downturn in the market they decided to focus on providing software to support converged Next Generation Networks. They say they “provide an abstraction layer that decouples service from the network”. The availability of this portable abstraction layer is one of the key needs to enable seamless inter-operation between MPLS and PBB-TE.

In the webinar, Dr. Ray Mota, Chief Strategist and President of Consulting at Synergy Research Group, presented a view of PBB-TE past and PBB-TE future. As it’s nearing Christmas this reminded me of Dickens’s A Christmas Carol, but I digress…

PBB-TE (past) was profiled as being designed as a replacement for traditional point-to-point SONET/SDH trunks supporting enterprise Ethernet services. However, there are some key pieces missing and this was what the webinar was all about.

PBB-TE (future) is about a “Generalized Services Infrastructure” that is independent of the MPLS or PBB-TE transport layers. The joint announcement encompassed the following components of this Generalized Services Infrastructure, claimed to be the “first seamless support across PBB-TE metro networks and MPLS cores”, running on Hammerhead’s HSX 6000 PBB-TE Service Gateway™.

  • Multipoint-to-Multipoint (MP2MP): Hammerhead’s PBB-TE E-LAN
  • Point-to-Multipoint (P2MP): Hammerhead’s PBB-TE E-tree
  • Multicast and Multipoint applications: PBB-TE E-Tree for IPTV, IP-VPN, Multicast, and Enterprise Managed Services
  • Seamless solutions across MPLS/VPLS and PBB-TE: Hammerhead’s Service gateway for inter-working of MP2MP and P2MP PBB-TE solutions with MPLS/VPLS
  • Control Plane Provisioning: Support for MP2MP and P2MP PBB-TE solutions through the Soapstone partnership.
  • All of these services supported with MultiClass QoS

An example of a service – business multicast – that could be deployed across a mixed infrastructure is shown below.

Hammerhead make extensive use of the IETF’s Virtual Switch Instance (VSI) as a building block to enable a capability to support both pseudowire trunks across MPLS and PBB-TE trunks based on MEF E-LAN. The diagram below shows how a seamless service can be created:

One of the key services driving converged NGN networks is IPTV, and the MEF E-Tree specification provides the multicast capability these types of service require. Again, Hammerhead support this standard on both PBB-TE and MPLS.

In practice, Hammerhead’s multicast solutions for PBB-TE networks use Soapstone Networks’ Provider Network Controller (PNC) control plane which decouples the control and data planes enabling Hammerhead’s E-LAN and E-Tree services to run without the development of new protocols. Also, Hammerhead’s VPLS and MPLS E-Tree solutions use existing MPLS control protocols.

Roundup

I don’t normally make my technology posts so focused on a particular vendor’s product set, but I wanted to make an exception in this case. I certainly cannot confirm that what Hammerhead have announced is truly unique, but it does seem to be a first from my limited visibility. I have also been interested in what Soapstone are doing for some time. Perhaps this partnership is a marriage made in heaven?

We can all do without technology wars. The telecommunications industries, whether fixed or mobile, really do need to focus on providing the innovative services that their customers can use. Moreover, they need to realise that moving packets from one location to another is a commodity service that needs to be offered with exceptional reliability and high-quality customer service, but also at low cost. To me, commoditisation is a good thing, not something to be frightened of or avoided by jumping into so-called value-added services to escape the margin crush. The commoditisation of the computer market following the personal computer steamroller can hardly be seen as a bad thing, but it does mean that infrastructure costs have to come down in step with average service selling prices.

MPLS is a high cost marriage partner and carriers should be looking at alternative technologies to see if they can help reduce costs. Unfortunately it is often the case that equipment vendors are not technology agnostic (e.g. PBT could be catastrophic, says Juniper CEO) and that is very much the case with MPLS. Of course, once a technology really starts to take off – as demonstrated by IP – then every vendor jumps on the bandwagon!

Providing a solution that enables carriers to deploy the most appropriate and cost effective technologies in access and core networks AND to be able to provision and manage services seamlessly, seems to me to be a ‘no brainer’ idea which should receive much interest. It is certainly good to be able to identify one company that can help carriers achieve this goal and it will certainly help PBB-TE gain further credibility.

I certainly predict that the majority of incumbent and alternative carriers that need to connect with customer premises will, if they are not doing so today, evaluate the use of PBB-TE to ascertain whether the cost-reduction promises are real. Hammerhead’s and Soapstone’s solution could provide a key element in that evaluation. If they are truly unique with this announcement, they won’t be for long, as every other vendor will try to catch up!


The Bluetooth standards maze

October 2, 2007

This posting focuses on low-power wireless technologies that enable communication between devices that are located within a few feet of each other. This can apply to both voice communications as well as data communication.

This whole area is becoming quite complex with a whole raft of standards being worked on – ULB, UWB, Wibree, Zigbee etc. This may seem rather strange bearing in mind the wide-scale use of the key wireless technology in this space – Bluetooth.

We are all familiar with Bluetooth as it is now as ubiquitous in use as Wi-Fi but it has had a chequered history by any standard and this has negatively affected its take-up across many market sectors.

Bluetooth first saw the light of day as an ‘invention’ by Ericsson in Sweden back in 1994 and was intended as a low-power wireless standard for inter-’gadget’ communication (Ericsson actually closed its Bluetooth division in 2004). This initially meant hands-free earpieces for use with mobile phones. This is actually quite a demanding application, as there is no room for the sort of drop-outs seen on an IP network; these would be a cause of severe dissatisfaction for users.

Incidentally, I always remember the first Sony Ericsson hands-free earpiece that I bought in 2000, as everyone kept giving me weird looks when I wore it in the street – nothing much has changed, I think!

Standardisation of Bluetooth was taken over by the Bluetooth Special Interest Group (SIG) following its formation in 1998 by Ericsson, IBM, Intel, Toshiba, and Nokia. Like many new technologies, it was launched with great industry fanfare as the up-and-coming new thing. This was pretty much at the same time as WAP (covered in a previous post: WAP, GPRS, HSDPA on the move!) was being evangelised. Both of these initiatives initially failed to live up to consumer expectations following the extensive press and vendor coverage.

Bluetooth’s strength lies in its core feature set:

  • It operates in the ‘no licence’ industrial, scientific and medical (ISM) spectrum of 2.4 to 2.485 GHz (as does Wi-Fi of course)
  • It uses a spread spectrum, frequency hopping, full-duplex signal at a nominal rate of 1600 hops/sec
  • Power can be altered from 100mW (Class 1) down to 1mW (Class 3), effectively reducing the transmission range from around 100 metres to 1 metre
  • It uses adaptive frequency hopping (AFH), with the transmission hopping between 79 frequencies at 1 MHz intervals, to help reduce co-channel interference from other users of the ISM band. This is key to giving Bluetooth a high degree of interference immunity
  • Bluetooth pairing occurs when two Bluetooth devices agree to communicate with each other and establish a connection. This works because each Bluetooth device has a unique name given it by the user or as set as the default
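
To illustrate the adaptive frequency hopping idea in the list above, here is a toy Python sketch. The hop-selection and channel-remapping logic are invented for illustration and are nothing like the real Bluetooth hopping algorithm:

```python
import random

# Sketch of adaptive frequency hopping (AFH): hop pseudo-randomly across the
# 79 Bluetooth channels (2402-2480 MHz at 1 MHz spacing), remapping any hop
# that lands on a channel marked "bad" (e.g. occupied by a nearby Wi-Fi net).

ALL_CHANNELS = list(range(79))     # channel n sits at 2402 + n MHz
bad = set(range(10, 32))           # pretend Wi-Fi occupies these channels
good = [ch for ch in ALL_CHANNELS if ch not in bad]

def next_channel(rng):
    ch = rng.randrange(79)         # basic pseudo-random hop
    if ch in bad:                  # AFH: remap onto a known-good channel
        ch = good[ch % len(good)]
    return ch

rng = random.Random(42)
hops = [next_channel(rng) for _ in range(1600)]   # ~one second of hopping
assert not any(h in bad for h in hops)
print(f"hopped over {len(set(hops))} distinct good channels")
```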

Several issues beset early Bluetooth deployments:

  • A widespread lack of compatibility between devices meant that Bluetooth products from different vendors failed to work with each other. This caused quite a few problems in both the hands-free mobile world and the personal computer peripheral world, and led to several quick updates.
  • In the PC world, user interfaces were poor forcing ordinary users to become experts in finding their way around arcane set-up menus.
  • There were also a considerable number of issues in the area of security. There was much discussion of Bluejacking, where an individual could send unsolicited messages to nearby phones that were ‘discoverable’. However, people who turned off discoverability needed an extra step to receive legitimate data transfers, thus complicating ‘legitimate’ use.

Early versions of the standard were fraught with problems and the 1Mbit/s v1.0 release was rapidly updated to v1.1 which overcame many of the early problems. This was followed up by v1.2 in 2003 which helped reduce co-channel interference from non-Bluetooth wireless technologies such as Wi-Fi.

In 2004, V2.0 + Enhanced Data Rate (EDR) was announced that offered higher data rates – up to 3Mbit/s – and reduced power consumption.

To bring us up to date, V2.1 + Enhanced Data Rate (EDR) was released in August 2007 which offered a number of enhancements the major of which seems to be an improved and easier-to-use mechanism for pairing devices.

The next version of Bluetooth is v3.0 which will be based on ultra-wideband (UWB) wireless technology. This is called high speed Bluetooth while there is another proposed variant, announced in June 2007, called Ultra Low Power Bluetooth (ULB).

During this spread of updates, most of the problems that plagued Bluetooth in its early days have been addressed. However, it cannot be assumed that Bluetooth’s market share is unassailable, as there are a number of alternatives on the table; Bluetooth is viewed as not meeting all the market’s needs – especially those of the automotive market.

Low-power wireless

Ultra Low-power Bluetooth (ULB)

Before talking about ULB, we need to look at one of its antecedents, Wibree.

This must be one of the shortest-lived ‘standards’ of all time! Wibree was announced in October 2006 by Nokia, though they did indicate that they would be willing to merge its activities with other standards initiatives if that made sense.

“Nokia today introduced Wibree technology as an open industry initiative extending local connectivity to small devices… consuming only a fraction of the power compared to other such radio technologies, enabling smaller and less costly implementations and being easy to integrate with Bluetooth solutions.”

Nokia felt that there was no agreed open standard for ultra-low power communications, so it decided to develop one. One of the features that consumes power in Bluetooth is its frequency hopping capability, so Wibree would not use it. Wibree is also more tuned to data applications as it uses variable packet lengths, unlike the fixed packet length of Bluetooth. This looks similar to the major argument that took place when ATM (The demise of ATM) was first mooted. The voice community wanted short packets while the data community wanted long or variable packets – the industry ended up with a compromise that suited neither application.

More on Wibree can be found at wibree.com. According to this site:

“Wibree and Bluetooth technology are complementary technologies. Bluetooth technology is well-suited for streaming and data-intensive applications such as file transfer and Wibree is designed for applications where ultra low power consumption, small size and low cost are the critical requirements … such as watches and sports sensors”.

On June 12th 2007 Wibree merged with the Bluetooth SIG and the webcast of the event can be seen here. This will result in Wibree becoming part of the Bluetooth specification as an ultra low-power extension of Bluetooth known as ULB.

ULB is intended to complement the existing Bluetooth standard by incorporating Wibree’s original target of reducing the power consumption of devices using it – it aims to consume only a fraction of the power current Bluetooth devices consume. ULB will be designed to operate in a standalone mode or in a dual mode as a bolt-on to Bluetooth. ULB will reuse existing Bluetooth antennas and needs just a small amount of additional logic when operating in dual mode with standard Bluetooth, so it should not add too much to costs.

When announced, the Bluetooth SIG said that ULB was aimed at wireless-enabling small personal devices such as sports sensors (heart rate monitors), healthcare monitors (blood pressure monitors), watches (remote control of phones or MP3 players) and automotive devices (tyre pressure monitors).

Zigbee

The Zigbee standard is managed by the Zigbee Alliance and was developed by the IEEE as standard 802.15.4. It was ratified in 2004.

According to the Alliance site:

“ZigBee was created to address the market need for a cost-effective, standards-based wireless networking solution that supports low data-rates, low-power consumption, security, and reliability.

ZigBee is the only standards-based technology that addresses the unique needs of most remote monitoring and control and sensory network applications.”

This puts the Bluetooth ULB standard in competition with Zigbee, as it aims to be cheaper and simpler to implement than Bluetooth itself. In a similar vein to the ULB team’s announcements, Zigbee claims to use about 10% of the software and power required to run a Bluetooth node.

A good overview can be found here – ZigBee Alliance Tutorial – which talks about all the same applications as outlined in the joint Wibree / Bluetooth ULB announcement above. Zigbee’s characteristics are:

  • Low power compared to Bluetooth
  • High resilience, as it will operate in a much noisier environment than Bluetooth or Wi-Fi
  • Full mesh working between nodes
  • 250kbit/s data rate
  • Up to 65,536 nodes.

The alliance says this makes Zigbee ideal for both home automation and industrial applications.

It’s interesting to see that one of Zigbee’s standard competitors has posted an article entitled New Tests Cast Doubts on ZigBee . All’s fair in love and war I guess!

So there we have it. It looks like Bluetooth ULB is being defined to compete with Zigbee.


High-speed wireless

High Speed Bluetooth 3.0

There doesn’t seem to be too much information available on the proposed Bluetooth version 3.0. However, on the WiMedia Alliance site I found this statement by Michael Foley, Executive Director of the Bluetooth SIG. WiMedia is the organisation behind the Ultra Wide-band (UWB) wireless standards.

“Having considered the UWB technology options, the decision ultimately came down to what our members want, which is to leverage their current investments in both UWB and Bluetooth technologies and meet the high-speed demands of their customers. By working closely with the WiMedia Alliance to create the next version of Bluetooth technology, we will enable our members to do just that.”

According to a May 2007 presentation entitled High-Speed Bluetooth on the Wimedia site, the Bluetooth SIG will reference the WiMedia Alliance [UWB] specification and the solution will be branded with Bluetooth trademarks. The solution will be backwards compatible with the current 2.0 Bluetooth standard.

It also talks about a combined Bluetooth/UWB stack:

  • With high data rate mode devices containing two radios initially
  • Over time, the radios will become more tightly integrated sharing components

The specification will be completed in Q4 2007, with first silicon prototyping complete in Q3 2008. I have to say that this approach does not look either elegant or low cost to me. However, time will tell.

That completes the Bluetooth camp of wireless technologies. Let’s look at some others.


Ultra-wide Bandwidth (UWB)

As the Bluetooth SIG has adopted UWB as the base of Bluetooth 3.0, what actually is UWB? A good UWB overview presentation can be found here. Essentially, UWB is a wireless protocol that can deliver high bandwidth over short distances.

Its characteristics are:

  • UWB uses spread spectrum techniques over a very wide bandwidth in the 3.1 to 10.6GHz spectrum in the US and 6.0 to 8.5GHz in Europe
  • It uses very low power so that it can ‘co-exist’ with other services that use the same spectrum
  • It aims to deliver 480Mbit/s at distances of several metres

The following diagram from the presentation describes it well:

In theory, there should never be an instance where UWB interferes with an existing licensed service. In some ways this has similarities to BPL (The curse of BPL), though it should not be so profound in its effects. To avoid interference it uses Detect and Avoid (DAA) technology, which I guess is self-defining, so I won’t go into too much detail here.

One company that is making UWB chips is Artimi, based in Cambridge, UK.

Wireless USB (WUSB)

In the same way that the Bluetooth SIG has adopted UWB, the USB Implementers Forum has adopted WiMedia’s UWB specification as the basis of Wireless USB. According to Jeff Ravencraft, President and Chairman, USB-IF and Technology Strategist, Intel:

“Certified Wireless USB from the USB-IF, built on WiMedia’s UWB platform, is designed to usher in today’s more than 2 billion wired USB devices into the area of wireless connectivity while providing a robust wireless solution for future implementations. The WiMedia Radio Platform meets our objective of using industry standards to ensure coexistence with other WiMedia UWB connectivity protocols.”

A presentation on Wireless USB can be downloaded here.

Wireless USB will deliver around the same bandwidth as Bluetooth 3.0 – 480Mbit/s at 3 metres – because it is based on the same technology, and it will be built into Microsoft Vista™.

One is bound to ask what the difference is between Wireless USB and Bluetooth 3.0, as they are going to be based on the same standard. Well, one answer is that Wireless USB products are shipping today, as seen in the Belkin Wireless USB Adapter shown on the right.

A real benefit of both standards adopting UWB is that they will use the same underlying radio. Manufacturers can choose whichever standard they want with no need to change hardware designs. This can only help both standards’ adoption.

However, because of the wide spectrum required to run UWB – multiple GHz – different spectrum ranges are being allocated in each region. This is a very big problem, as it means that radios in each country or region will need to be different to accommodate the disparate regulatory requirements.

In the same way that Bluetooth ULB will compete with Zigbee (an available technology), Bluetooth 3.0 will compete with Wireless USB (also an available technology).

Round up

So there you have it – the relationships between Bluetooth 2.0, Bluetooth 3.0, Wibree, Bluetooth ULB, Zigbee, high-speed Bluetooth, UWB and Wireless USB. So things are clear now, right?

So what about Wi-Fi’s big brother, WiMAX? And let us not forget HSDPA (WAP, GPRS, HSDPA on the move!), the 3G answer to broadband services. At least these can be put in a category of wide-area wireless services to separate them from near-distance wireless technologies. I have to say I find all these standards very confusing, and it makes any decision that relies on a bet about which technology will win out in the long run exceedingly risky. At least Bluetooth 3.0 and Wireless USB use the same radio!

At an industry conference I attended this morning, a speaker talked about an “arms war” between telcos and technology vendors. If you add standards bodies to this mix, I really do wonder where we consumers are placed in their priorities. Can you see PC manufacturers building all these standards onto their machines?

I could also write about WiMAX, Near Field Communications, Z-Wave and RFID, but I think that is better left for another day!


EBay paid too much for Skype

October 2, 2007

I don’t normally post news, but I couldn’t resist posting this as it is so close to my heart. Ever since the deal was done, everyone has been asking whether it was worth what they paid.

The article was in the London Evening Standard today.

ONLINE auctioneer eBay today admitted it had paid too much for internet telephone service Skype in 2005.

EBay, which forked out $2.6 billion (£1.3 billion), will now take a $1.4 billion charge on the company as it fails to convert users into revenue.

Skype’s chief executive Niklas Zennström, one of Skype’s founders, will step down, but the company denies he is walking the plank.

EBay will pay some investors $530 million to settle future obligations under the disastrous Skype deal.

In a desperate bid to get the deal over the line in 2005, eBay promised an extra $1.7 billion to Skype investors if the unit met certain targets including number of users.

Now it is offering those shareholders $530 million as “an early, one-time payout”. The parent company will write down $900 million in the value of Skype.

Since eBay took over, Skype’s membership accounts have risen past 220 million, but it earned just $90 million during the second quarter of 2007, far below projections.

I wonder if this will cool some of the outrageous values being put on some of the social network services?


Do you know your ENUM?

September 24, 2007

Isn’t it funny how a new concept is often universally derided as nonsensical? There are many examples of this, but none more so than Voice over IP (VoIP) (IP here meaning Internet Protocol, not Intellectual Property).

But just look at how universal VoIP has become over the last fifteen years, despite all the early knocking and mumblings that it would not, could not, ever work. When I first started talking about VoIP in the mid 1990s, after a visit to Vocaltec in Israel, I was even banned from a particular country as my views were considered seditious. Looking at the markets of 2007, I guess they may have been right! However, trying to hold back the inevitable is never a good reaction to a possibly disruptive technology, though this is still occurring on a wide scale in today’s telecommunications world. [Picture credit: Enum.at]

Earlier this year I wrote about the challenges of what I called islands of isolation in a posting entitled Islands of communication or isolation?. I consider this to be one of the main challenges any new communications technology or service needs to face if it is going to achieve worldwide penetration. Sometimes just an accepted standard can tip a new technology into global acclaim. Good examples of this are Wi-Fi and ADSL. Because of the nature of these technologies, equipment based on these standards can be used even by a single individual, so a market can be grown from even a small installed base when it is reinforced by a multiplicity of vendors jumping on the bandwagon once they think the market is big enough.

However, many communication technologies or services require something more before they can become truly ubiquitous, and VoIP is just one of those services. Of course, many of these additional needs can be successfully bypassed by ‘putting up the proverbial finger’ to the existing approach and developing completely stand-alone services based on proprietary technologies, as so successfully demonstrated by Skype in the VoIP world. The reason Skype became so successful at such an early stage was that the service ran independently of the existing circuit-switched Public Switched Telephone Network (PSTN). This was quite a deliberate and wholly successful strategy. What was the issue that Skype was trying to circumvent (putting to one side their views on the perceived monopolistic characteristics of the telco industry)? Telephone numbers.

Numbering was the one important feature that made the traditional telephone industry so successful. Unfortunately, it is also the lack of this one feature that has held back the rollout of VoIP services more than any other. Every user of a traditional telephone had their own unique telephone number (backed up by agreed standards drafted by the ITU). As long as you knew an individual’s number you could call them wherever they were located. In the case of VoIP, you may not be able to find out their address if they use a different VoIP operator to yourself, leading to multiple islands of VoIP users who are unable to directly communicate with each other.

If a user chooses a VoIP-based telephone service they still expect to be able to talk to anyone, no matter what service provider they have chosen to use, whether that be another user of the VoIP service or a colleague not using VoIP but an ordinary telephone.

One of the key issues cluttering the path to achieving this is that VoIP runs on an IP network, which uses a completely different way of identifying users from traditional PSTN or mobile networks. IP networks use IP addresses as dictated by the IPv4 standard (IPv6 to the rescue – eh?), while public telephone networks use the E.164 standard as maintained by the ITU in Geneva. So if a VoIP user wants to make a call to an individual’s desk or mobile phone, or vice versa, a cross-network directory look-up is needed before a physical connection can be made.

This is where the concept of Telephone Number Mapping (ENUM) comes into its own as one of the key elements required to achieve the vision of converged VoIP and PSTN services. The key goal of ENUM is to enable calls to be made between the two worlds of VoIP and PSTN as easily as between PSTN users. This must be achieved if VoIP services are to become truly ubiquitous.

In reality, no individual really cares whether a call is being completed on a VoIP network or not, as long as the quality is adequate. They certainly do care about the cost of a call, and this turned out to be one of the main drivers of the rise of VoIP services, as they are used to bypass the traditional financial settlement regimes that exist in the PSTN world (Revector, detecting the dark side of VoIP).

How does ENUM work?

There are three aspects that need to be considered:

  1. How an individual is identified on the IP network or Internet (an IP network can be a closed IP network used by a carrier, where a guaranteed quality of service is implemented, unlike the Internet).
  2. How the individual is identified on the PSTN network segment from an addressing or telephone number basis.
  3. How these two segments inter-work.

The IP network segment: We are all familiar with the concept of a URL or Uniform Resource Locator that is used to identify a web site. For example, the URL of this blog is http://technologyinside.com. In fact a URL is a subset of a Uniform Resource Identifier (URI), along with a Uniform Resource Name (URN). A URL refers to the domain, e.g. a company name, while a URI operates at a finer granularity and can identify an individual within that company, as with an email address. For VoIP calls, as an individual rather than the company is the recipient of a call, URIs are used as the address. The same concept is used with SIP services, as explained in sip, Sip, SIP – Gulp! The IETF standard that covers E.164 and DNS mapping is RFC 2916.

URIs can be used to specify the destination device of a real-time session e.g.

  1. IM: sip: xxx@yyy.com (Windows Messenger uses SIP)
  2. Phone: sip: 1234 1234 1234@yyy.com; user=phone
  3. FAX: sip: 1234 1234 1235@yyy.com; user=fax
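
As a rough illustration of how the user part, domain and parameters are carried in SIP URIs like those above, here is a minimal Python sketch (illustrative parsing only, nothing like a full RFC 3261 parser):

```python
# Split a SIP URI into its user part, domain, and trailing parameters.
# A deliberately naive sketch: no escaping, ports, or multi-value params.

def parse_sip_uri(uri):
    assert uri.startswith("sip:"), "only sip: URIs handled in this sketch"
    rest = uri[len("sip:"):].strip()             # tolerate "sip: xxx@..."
    addr, _, param_str = rest.partition(";")     # params follow the first ';'
    user, _, domain = addr.partition("@")
    params = dict(p.split("=", 1) for p in param_str.split(";") if p)
    return {"user": user, "domain": domain, "params": params}

print(parse_sip_uri("sip:1234@yyy.com;user=phone"))
# {'user': '1234', 'domain': 'yyy.com', 'params': {'user': 'phone'}}
```

The `user=phone` parameter is what marks the digits as a telephone number rather than an ordinary SIP user name.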

On the PSTN segment: A user is identified by their E.164 telephone number used by both fixed and mobile / cell phones. I guess there is no need to explain the format of these as they are an example of an ITU standard that is truly global!

Mapping of the IP and PSTN worlds:

There are two types of VoIP call: those that are carried end-to-end on an IP network, and those that start on a VoIP network but end on a PSTN network, or vice versa. For the second type of call, mapping is required.

Mapping between the two worlds is in essence managed by an online directory that can be accessed by either party – a VoIP operator wishing to complete a call on a traditional telephone, or a PSTN operator wishing to complete a call on a VoIP network. These directories are maintained by ENUM registrars. Individual user records therefore contain both the E.164 number AND the VoIP identifier for an individual.

The registrar’s function is to manage both the database and the security issues surrounding the maintenance of a public database, i.e. only the individual or company (in the case of private dial plans) concerned with a record is able to change its contents.

The translation procedure: When a call between a VoIP user and a PSTN user is initiated, four steps are involved. Of course, the user must be ENUM-enabled by having an ENUM record with an ENUM registrar.

  1. The VoIP user’s software, or their company’s PBX (i.e. their User Agent), translates the E.164 number into ENUM format as described in RFC 3761. To convert an E.164 number to an ENUM domain the following steps are required:
    1. +44 1050 56416 (The E.164 telephone number)
    2. 44105056416 (Removal of all characters except digits)
    3. 61465050144 (Reversal of the digit order)
    4. 6.1.4.6.5.0.5.0.1.4.4 (Insertion of dots between the digits)
    5. 6.1.4.6.5.0.5.0.1.4.4.e164.arpa (Adding the global ENUM domain)
  2. A request is sent to the Domain Name System (DNS) to look up the ENUM domain requested.
  3. A query in a format specified by RFC 3403 is sent to the ENUM registrar’s domain, which returns either the PSTN number or the URI of the called party – whichever is requested.
  4. The call is now initiated and completed.
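
The number-conversion part of the procedure can be sketched as a small Python function that builds the ENUM domain handed to the DNS (the subsequent RFC 3403 NAPTR look-up is omitted, and the telephone number is illustrative):

```python
# Convert an E.164 telephone number into its ENUM (e164.arpa) domain name:
# strip non-digits, reverse the digits, dot-separate them, append the suffix.

def e164_to_enum(number, suffix="e164.arpa"):
    digits = "".join(ch for ch in number if ch.isdigit())  # drop '+', spaces
    reversed_digits = digits[::-1]                         # reverse the order
    dotted = ".".join(reversed_digits)                     # dots between digits
    return f"{dotted}.{suffix}"                            # add ENUM domain

print(e164_to_enum("+44 1050 56416"))
# 6.1.4.6.5.0.5.0.1.4.4.e164.arpa
```

A resolver would then query that domain for NAPTR records to discover the corresponding SIP URI or telephone number.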

For this process to work universally, every user of both VoIP and PSTN services needs to have an ENUM record. That is a problem today, as it is simply not the case.

ENUM Registrars

In a number of countries, top-level public ENUM registrars have been set up, driven by the ITU. For example, the ENUM registrar in Austria is http://www.enum.at. It then holds the DNS pointers to other ENUM registrars in Austria. Another example is Ireland’s ENUM registry.

However, in the USA, ENUM services are in the hands of private registrars.

If you sign up for a VoIP service that provides you with an E.164 telephone number, your VoIP provider will act as a registrar and hence your details will be automatically registered for look-up through a DNS call. If you do not use one of these services, it is possible to register yourself with an independent registrar.

Local Number Portability (LNP)

During the early days of VoIP services, many ENUM registrars were operated by 3rd party clearing houses acting on a federated basis, who were quick to jump on an unaddressed need. Of course, these registrars charge for look-up services. Other third-party companies offer “trusted and neutral” number database services, such as Neustar, e164 and Nominum in the USA, who not only offer ENUM services but also Local Number Portability services. To quote Neustar:

“LNP is the ability of a phone service customer in North America to retain their local phone number and access to advanced calling features when they switch their local phone service to another local service provider. LNP helps ensure successful local telephone competition, since without LNP, subscribers might be unwilling to switch service providers.”

However, as we start to see more and more VoIP service providers, and more and more traditional voice carriers offering VoIP service to their customers, we will see more carriers offering ENUM numbering capabilities. Moreover, they could also use ENUM technology to reduce the cost of supporting Local Number Portability by managing the translation / mapping databases themselves rather than paying a 3rd party for the capability. To quote an article in Telephony Online:

Not all service providers are rushing to do their own ENUM implementations, said Lynda Starr, a senior analyst with Frost & Sullivan who specializes in IP communications. “Some say it’s not worth doing yet because VoIP traffic is still small.” Eventually, however, Starr estimates that service providers could save about 20% of the cost of a call by implementing ENUM – even more if they exchange traffic with one another as peers.

The ITU is planning a committee to look at service-provider hosted ENUM databases, but the view is that it will be slow to be implemented, as is usually the case with ITU standards.

Round up

If every PSTN network had an ENUM-compliant gateway and database, then truly converged voice services could be created, and users’ preferences concerning which device they would like to take calls on could be accommodated. Today, as far as I am aware, even the neutral 3rd party ENUM registrars do not share their records with other parties, further exacerbating the numbering-islands issue. This means you need to know which Registrar to go to before a call can be set up.

It is early days yet, but we will undoubtedly start to see more and more carriers implementing ENUM capabilities rather than some of the proprietary number translation solutions that started with the concept of Intelligent Networks in the 1980s. In the meantime the industry will carry on in a sub-optimal way, hoping against hope that something will happen to sort it all out soon. The real issue is that ENUM registries are the keystone capability needed to make VoIP services globally ubiquitous, but they can hardly be considered a major opportunity to make money on a standalone basis. Rather, they are an embedded capability in VoIP or PSTN service providers or neutral Internet exchanges, so there is little incentive to pour vast amounts of money into the capability, which will lead to continuing snail-like growth.

As is so often the case with standards, even though most would agree that using E.164 numbering is the way forward, there is another proposal, called SRV or service record, that proposes to use email addresses as the identifier rather than telephone numbers. The logic is that this would be driven by IT directors riding on the back of disappearing PBXs who are swapping over to Asterisk open-source systems. That is a story for another time however.

Addendum #1: sip, Sip, SIP – Gulp!


How to Be a Disruptor

September 11, 2007

An excellent article from Sandhill.com on running a software business along disruptive lines. Written by the CEO of MySQL, it looks like it needs a lot of traditional common sense!

These are the key issues he talks about:

Follow No Model
Get Rich Slow
Make Adoption Easy
Run a Distributed Workforce
Foster a Culture of Experimentation
Develop Openly
Leverage the Ecosystem
Make Everyone Listen to Customers
Run Sales as a Science
Fraternize with the Enemy

Take a read: How to Be a Disruptor


WAP, GPRS, HSDPA on the move!

September 4, 2007

Over the last few months I have written many posts about Internet technologies, but they have been pretty much focussed on terrestrial rather than wireless networks (other than dabbling in Wi-Fi with my overview of The Cloud – The Cloud hotspotting the planet). This exercise was rather interesting as I needed to go back to the beginning and look at how the technologies evolved, starting with The demise of ATM.

Back in 1994 a colleague of mine, Gavin Thomas, wrote about Mobile Data protocols and it’s interesting to glance back to see how ‘crude’ mobile data services were at the time. Of course, you would expect that to be the case as GSM Digital Cellular Radio was a pretty new concept at the time as well. In that post I ended with the statement that “GSM has a bright future”. Maybe it should have read “the future is Orange”! No one foresaw in those days the coming explosive growth of GSM and mobile phone usage. Certainly no one predicted the surge in use of SMS.

Acronym hell has extended itself to mobile services over the last few years and the market has become littered with three, four and even five letter acronyms. In particular, wireless Internet started with a three letter acronym back in the late 1990s – WAP (Wireless Application Protocol) – progressing through four letter acronyms, GPRS (General Packet Radio Service) and EDGE (Enhanced Data GSM Environment), and is now moving to a five letter broadband 3G acronym – HSDPA (High-Speed Downlink Packet Access). Phew!

The history of mobile data services has been littered with undelivered hype over the years that still lives on today. However, that hype led to the development of services that really do work unlike some of the early initiatives like WAP.

Ah, WAP, now that was interesting. I would probably put this at the top of my list of over-hyped protocols of all time. At least when ATM was hyped this only took place within the telecommunications community whereas WAP was hyped to the world’s consumers which created much more visibility of ‘egg on the face’ for mobile operators and manufacturers.

So what was WAP?

In the late 1990s the world was agog with the Internet, which was accessed using personal computers via LANs or dial-up modems. There was clearly an opportunity (whether right or wrong) to bring the ‘Internet’ to the mobile or cell phone. I have put quotation marks around ‘Internet’ as the mobile industry has never seen the Internet in the same light as PC users – more on this later.

The WAP initiative was aimed at achieving this goal and at least it can be credited with a concept that lives on to this day - Mobile Internet. Data facilities on mobile phones were really quite crude at the time. Displays were monochrome with a very limited resolution. Moreover, the data rates that were achievable at the time over the air were really very low so this necessitated WAP content standards to take this into account.

There were several aspects that needed standardising under the WAP banner:

  • Transmission protocols: WAP defined how packets were handled on a 2G wireless network and consisted of wireless versions of TCP and UDP as seen on the Internet, and also used WTP (Wireless Transaction Protocol) to control communications between the mobile phone and the base station. WTP itself contained an error correction capability to help cope with the unreliable wireless bearer.
  • Mobile HTML: It was immediately recognised that, due to the limited screen size and the low data rates achievable on a mobile phone, a very simplified version of HTML was required for use with mobile web sites. This led to the development of WML (Wireless Markup Language). This was a VERY cut-down version of HTML with very little capability, and any graphics used had to be tiny as well. Later, WAP 2.0 was defined, which improved things somewhat and was based on a cut-down version of XHTML.

WAP clearly did not live up to its promise of a mobile version of the Internet, with its crude and constrained user interface, high latency, the need to struggle with arcane menu structures (has anything changed here in ten years?) and the exceedingly slow data rates experienced on the mobile networks of the day.

However, this did not stop mobile service operators from over-hyping WAP services, with endless hoardings and TV adverts extolling Internet access from mobiles. At times it looked as if mobile operator advertising departments never talked to their engineering departments and were living in a world of their own that bore little relation to reality.

It all had to crash, and it did, along with the ‘Internet bubble’ in 2001. Many mobile operators sold their WAP service as an ‘open’ service similar to the Internet. In reality, they were walled-garden services that forced users to visit the operator’s portal as their first port of call, making it well-nigh impossible for small application developers to get their services in front of users. One could ask how much this has changed by 2007?

I should not forget to mention that the cost of using WAP services was very high, based as it was on bits transmitted. This led to shockingly high bills and low usage, and provided one of the great motivators behind the ‘unforeseen’ growth of SMS services.

I believe that much of this still lives on in the conscious and unconscious memory of consumers and held back major usage of mobile data services for many years.

Along comes the ‘always-on’ GPRS service

After licking the WAP wounds for several years, it was clearly recognised that something better was required if data services were to take off for mobile operators. One of the big issues with WAP was the poor data transmission speed achieved, so GPRS (General Packet Radio Service) was born.

GPRS is an IPv4-based packet-switched protocol where data users share the same data channel in a cell. The increased data rate in GPRS derives from the knitting together of multiple TDMA time slots, where each individual GSM time slot can manage between 9.6 and 21.4kbit/s. Linking together slots can deliver greater than 40kbit/s (up to 80kbit/s) depending on the configuration implemented.
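The slot-bundling arithmetic can be illustrated with a small sketch; the per-slot figures below are the commonly quoted nominal rates for the GPRS coding schemes CS-1 to CS-4, used here purely for illustration:

```python
# Nominal per-slot rates (kbit/s) for the four GPRS coding schemes.
# These are the commonly quoted figures; real-world rates vary.
CODING_SCHEME_KBPS = {"CS-1": 9.05, "CS-2": 13.4, "CS-3": 15.6, "CS-4": 21.4}

def gprs_throughput_kbps(slots: int, scheme: str) -> float:
    """Aggregate rate for a given number of bundled TDMA time slots."""
    return slots * CODING_SCHEME_KBPS[scheme]

print(gprs_throughput_kbps(4, "CS-2"))  # 4 slots at CS-2 → 53.6 kbit/s
```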

GPRS users are connected all the time and have access to the maximum bandwidth available if no other users in their cell are receiving data at the same time.

The improved data rate (which is in the range of an old dial-up modem) and the improved reliability experienced when using GPRS have definitely led to wider use of data services on the Internet. Incidentally, a shared packet service should mean lower cost, but as users are still billed on a kilobits-transmitted basis, GPRS bills are still shockingly high if the service is used a lot.

GPRS services are so reliable that there is widespread availability of GPRS routers, as shown in the picture above (Linksys), which are often used for LAN back-up capabilities.

GPRS was definitely a step in the right direction.

Gaining an EDGE

EDGE (Enhanced Data rates for GSM Evolution) is an upgrade to GPRS that has gained some popularity in the USA and Europe and is known as a 2.5G service (although it derives from 3G standards).

EDGE can be deployed by any carrier who offers GPRS services and represents an upgrade to GPRS by requiring a swap-out to an EDGE compatible transceiver and base station subsystem.

By using an 8PSK (8 phase shift keying) modulation scheme on each time slot it is possible to increase the data rate within a single time slot to 48kbit/s. Thus, in theory, it would be possible, by combining all 8 time slots, to deliver an aggregate 384kbit/s data service. In practice this would not be possible as there would be no spare bandwidth available for voice services!

All in all EDGE achieves what it set out to achieve – higher data rates without an upgrade to full 3G capability and has been widely deployed.

The promise of the HSDA family

Following on from WAP, GPRS and EDGE have been the dominant protocols used for mobile data access for a number of years now. Achieved data rates are still slow by ADSL standards, and this has put off many users after they have played with these services for a bit.

With the tens of billions of dollars spent on 3G licences at the end of the last century, one would have imagined that we would all have access to megabit data rates on our mobile or cell phones by now, but that has just not been the case. 3G has been slow to be deployed and presented many operational issues that needed to be resolved.

The Universal Mobile Telecommunications System (UMTS), known as 3GSM, uses W-CDMA spread spectrum technology as its air interface and delivers its data services under the standards known as HSDPA (High-Speed Downlink Packet Access) and HSUPA (High-Speed Uplink Packet Access), known collectively as HSDA (High-Speed Data Access).

Unlike the TDMA technology used in GSM, W-CDMA is a spread spectrum technology where all users transmit ‘on top’ of each other over a wide spectrum, in this case 5MHz radio channels. The equipment identifies individual users in the aggregate stream of data through the use of unique user codes that can be detected. (I explained how spread spectrum radio works in 1992 in Spread Spectrum Radio.) The air interface adopted makes a 3G service incompatible with GSM.

In theory, W-CDMA is able to support data rates of up to 14Mbit/s, but in reality offered rates are in the 384kbit/s to 3.6Mbit/s range. The service is delivered using a dedicated downlink channel called the HS-DSCH (High-Speed Downlink Shared Channel), which allows higher bit rate transmission than ordinary channels; control functions are carried on sister channels. The HS-DSCH channel is shared between all users in a cell, so in practice it would not be possible to deliver the ceiling data rate to more than a single subscriber, which makes me wonder how the industry is going to support lots of mobile TV users on a single cell. More on this issue in a future post.

Standardisation of HSDPA is carried out by the 3rd Generation Partnership Project (3GPP).

Inevitably, because of the ultra-slow roll-out of UMTS 3G networks, HSDPA will take a long time to get to your front door, although this is happening in quite a few countries. Here in the UK, the 3 network is currently launching (August 2007) its HSDPA data service, which will be followed by an HSUPA capability at a later date. Initially it will only offer HSDPA data cards for PCs.

Interestingly, The Register reports that 3 will offer 2.8Mbit/s and that the tariff will start at £10 a month for the Broadband Lite service providing 1Gbyte of data, rising to £25 for 7Gbytes with the Broadband Max service.

You can pre-order a broadband modem now as shown on the right.

Incidentally, Vodafone’s UK HSDPA service can be found here and their 7.2Mbit/s service here.

The future is LTE

Another project within 3GPP is the Long Term Evolution (LTE) activity, part of Release 8. The core focus of the LTE team is, as you would expect, on increasing available bandwidth, but there are a number of other concerns they are working on.

  • Reduction of latency: Latency is not an issue for streamed services but is a prime concern for interactive services. There is no point, post-WAP, launching advanced interactive services if users have to wait around as in the early days of the Internet. Users have been there before.
  • Cost reduction: This is pretty self-evident, but the activity is focussed on reducing operators’ deployment costs, not reducing consumer charge rates!
  • QoS capability: The ubiquitous need for policy and QoS capability, which I’ve explored in depth on fixed networks.

The System Architecture Evolution (SAE) is another project that is running in parallel with, but behind, the LTE. It comes as little surprise that the SAE is looking at creating a flat all-IP network core, which will (supposedly) be the key mechanism by which operators reduce their operating costs. This is still debatable to my mind.

Details of this new architecture can be found under the auspices of the Telecoms & Internet Services & Protocols for Advanced Networks, or TISPAN (a six letter acronym!), which is a joint activity between ETSI and 3GPP. To quote from the web site:

Building upon the work already done by 3GPP in creating the SIP-based IMS (IP Multimedia Subsystem), TISPAN and 3GPP are now working together to define a harmonized IMS-centric core for both wireless and wireline networks.

This harmonized ALL IP network has the potential to provide a completely new telecom business model for both fixed and mobile network operators. Access independent IMS will be a key enabler for fixed/mobile convergence, reducing network installation and maintenance costs, and allowing new services to be rapidly developed and deployed to satisfy new market demands.

Based as it is on IMS (which I wrote about in IP Multimedia Subsystem or bust!) this could turn out to be a project and a half. Saying that the “devil is in the detail” would seem to be a bit of an understatement when considering TISPAN.

A recent informative PowerPoint presentation about the benefits of NGN, convergence and TISPAN can be found here.

Roundup

We seem to have come a long way since the early days of WAP, with HSDA now starting to deliver the speed of fixed-line ADSL to the mobile world. Transfer rates are indeed important, but high latency can be every bit as frustrating when using interactive services, so it is important to focus on its reduction. The challenge with 3G is its limited coverage and this could cause slowness of uptake – and uptake will also depend on flat rate access charges being the norm, NOT the per-megabit charging we have seen in the past. And boy, I bet the inter-operator roaming charges will be high!

However, bandwidth and service accessibility are not the only issues that need addressing for the mobile Internet market to sky-rocket. The platform itself is still a fundamental challenge – limited screen size and arcane menus, to name but two problems. The challenge of writing applications that are able to run on the majority of phones is definitely one of the other major issues (I touched on this in Mobile apps: Java just doesn’t cut the mustard?).

I reviewed a book earlier this year entitled Mobile Web 2.0! that talks extensively about the walled-garden and protectionist attitudes still exhibited by many of the mobile operators. This has to change and there are definite signs that this is beginning to happen with fully open Internet access now being offered by the more enlightened operators.

Maybe, just maybe, if it all comes together over the next decade, then the prediction in the above book – “The mobile phone network is the computer. Of course, when we say ‘phone network’ we do not mean the ‘mobile operator network’. Rather we mean an open, Web driven application…” – could just come about.


IPv6 to the rescue – eh?

June 21, 2007

To me, IPv6 is one of the Internet’s real enigmas: the supposed replacement for the Internet’s ubiquitous IPv4. We all know this has not happened.

The Internet Protocol (IPv4) is the principal protocol that lies behind the Internet, and it originated before the Internet itself. In the late 1960s a number of US universities needed to exchange data, and there was an interest in developing the new network technologies, switching capabilities and protocols required to achieve this.

The result of this was the formation of the Advanced Research Projects Agency (ARPA), a US government body, which started developing a private network called ARPANET; the agency later metamorphosed into the Defense Advanced Research Projects Agency (DARPA). The initial contract to develop the network was won by Bolt, Beranek and Newman (BBN), which was eventually bought by Verizon and sold to two private equity companies in 2004 to be renamed BBN Technologies.

The early services required by the university consortium were file transfer, email and the ability to remotely log onto university computers. The first version of the protocol was called the Network Control Protocol (NCP) and saw the light of day in 1971.

In 1973, Vint Cerf, who worked on NCP (and is now Chief Internet Evangelist at Google), and Robert Kahn (who previously worked on the Interface Message Processor [IMP]) kicked off a program to design a next-generation networking protocol for the ARPANET. This activity resulted in the standardisation, through ARPANET Requests For Comments (RFCs), of TCP/IPv4 in 1981 (now IETF RFC 791).

IPv4 uses a 32-bit address structure which we see most commonly written in dot-decimal notation such as aaa.bbb.ccc.ddd representing a total of 4,294,967,296 unique addresses. Not all of these are available for public use as many addresses are reserved.
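The dot-decimal notation is just a human-readable rendering of the underlying 32-bit integer the protocol actually carries. A minimal sketch (the function name is mine, not from any library):

```python
def ipv4_to_int(addr: str) -> int:
    """Pack a dotted-quad IPv4 address into its 32-bit integer form."""
    a, b, c, d = (int(x) for x in addr.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d  # each octet is 8 bits

print(ipv4_to_int("192.168.0.1"))      # → 3232235521
print(ipv4_to_int("255.255.255.255"))  # → 4294967295, the top of the space
```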

An excellent book that pragmatically and engagingly goes through the origins of the Internet in much detail is Where Wizards Stay Up Late – it’s well worth a read.

The perceived need for upgrading

The whole aim of the development of IPv4 was to provide a schema to enable global computing by ensuring that computers could uniquely identify themselves through a common addressing scheme and communicate in a standardised way.

No matter how you look at it, IPv4 must be one of the most successful standardisation efforts to have ever taken place if measured by its success and ubiquity today. Just how many servers, routers, switches, computers, phones, and fridges are there that contain an IPv4 protocol stack? I’m not too sure, but it’s certainly a big, big number!

In the early 1990s, as the Internet really started ‘taking off’ outside of university networks, it was generally thought that the IPv4 specification was beginning to run out of steam and would not be able to cope with the scale of the Internet as the visionaries foresaw. Although there were a number of deficiencies, the prime mover for a replacement to IPv4 came from the view that the address space of 32 bits was too restrictive and would completely run out within a few years. This was foreseen because it was envisioned, probably not wrongly, that nearly every future electronic device would need its own unique IP address and if this came to fruition the addressing space of IPv4 would be woefully inadequate.

Thus the IPv6 standardisation project was born. IPv6 packaged together a number of IPv4 enhancements that would enable the IP protocol to be serviceable for the 21st century.

Work started in 1992/3, and by 1996 a number of RFCs had been released, starting with RFC 1883 (later obsoleted by RFC 2460). One of the most important RFCs to be released was RFC 1933, which specifically looked at the transition mechanisms for converting IPv4 networks to IPv6. This covered the ability of routers to run IPv4 and IPv6 stacks concurrently – “dual stack” – and the pragmatic ability to tunnel the IPv6 protocol over ‘legacy’ IPv4-based networks such as the Internet.

To quote RFC 1933:

This document specifies IPv4 compatibility mechanisms that can be implemented by IPv6 hosts and routers. These mechanisms include providing complete implementations of both versions of the Internet Protocol (IPv4 and IPv6), and tunnelling IPv6 packets over IPv4 routing infrastructures. They are designed to allow IPv6 nodes to maintain complete compatibility with IPv4, which should greatly simplify the deployment of IPv6 in the Internet, and facilitate the eventual transition of the entire Internet to IPv6.

The IPv6 specification contained a number of areas of enhancement:

Address space: Back in the early 1990s there was a great deal of concern about the lack of availability of public IP addresses. With the widespread uptake of IP rather than ATM as the basis of enterprise private networks, as discussed in a previous post, The demise of ATM, most enterprises had gone ahead and implemented their networks with any old IP address they cared to use. This didn’t matter at the time because those networks were not connected to the public Internet, so it didn’t matter whether other computers or routers had selected the same addresses.

It first became a serious problem when two divisions of a company tried to interconnect their private networks and found that both divisions had selected the same default IP addresses and could not connect. This was further compounded when those companies wanted to connect to the Internet and found that their privately selected IP addresses could not be used in the public space as they had been allocated to other companies.

The answer to this problem was to increase the IP protocol’s addressing space to accommodate all the private networks coming onto the public network. Combined with the vision that every electronic device could contain an IP stack, IPv6 increased the address space to 128 bits from IPv4’s 32 bits.
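The scale of that change is easy to state as arithmetic:

```python
# Address-space comparison: IPv6's 128-bit space versus IPv4's 32 bits.
ipv4_space = 2 ** 32   # 4,294,967,296 addresses
ipv6_space = 2 ** 128  # roughly 3.4e38 addresses

print(ipv4_space)               # 4294967296
print(ipv6_space // ipv4_space) # 2**96: the factor by which the space grew
```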

Headers: Headers in IPv4 (headers precede the data in the packet flow and contain routing and other information about the data) were already becoming unwieldy, so the extra data necessitated by IPv6’s much larger addresses threatened to bloat things further. IPv6 simplifies matters by enabling extension headers to be chained together and only used when needed: the IPv6 fixed header has only 8 fields and no options, compared with IPv4’s dozen or so fields plus options.

Configuration: Managing an IP network is pretty much a manual exercise, with few tools to automate the activity beyond the likes of DHCP (the automatic allocation of IP addresses to computers). Network administrators seem to spend most of the day manually entering IP addresses into fields in network management interfaces, which really does not make much use of their skills.

IPv6 incorporates enhancements to enable a ‘fully automatic’ mode where the protocol can assign an address to itself without human intervention. The IPv6 protocol sends out a request to enquire whether any other device has the same address. If it receives a positive reply it adds a random offset and asks again until it receives no reply. IPv6 can also identify nearby routers and automatically detect whether a local DHCP server is available.
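The probe-and-retry behaviour described above can be sketched as follows; this is a toy model of the idea, not the actual IPv6 auto-configuration algorithm, and `address_in_use` stands in for the real duplicate-address probe:

```python
import random

def autoconfigure(candidate: int, address_in_use) -> int:
    """Propose an address; if the network reports it taken, add a
    random offset and probe again until no conflict is reported."""
    while address_in_use(candidate):
        candidate += random.randint(1, 255)  # random offset, then re-probe
    return candidate

taken = {100, 101}
chosen = autoconfigure(100, lambda a: a in taken)
print(chosen)  # some address not in `taken`
```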

Quality of Service: IPv6 has embedded enhancements to enable the prioritisation of certain classes of traffic by assigning a value to the packet’s Traffic Class field (originally called Priority).

Security: IPv6 incorporates IPsec to provide authentication and encryption, improving the security of packet transmission; encryption is handled by the Encapsulating Security Payload (ESP).

Multicast: Multicast addresses are group addresses, so that packets can be sent to a group rather than an individual. IPv4 handles this rather inefficiently, while IPv6 has built the concept of a multicast address into its core.

So why aren’t we all using IPv6?

The short answer to this question is that IPv4 is a victim of its own success. The task of migrating the Internet to IPv6, even taking into account the available migration options of dual-stack hosting and tunnelling, is just too challenging.

As we all know, the Internet is made up of thousands of independently managed networks, each looking to commercially thrive or often just to survive. There is no body overseeing how the Internet is run except for specific technical aspects such as Domain Name System (DNS) management or the standards body, the IETF. (Picture credit: The logo of the Linux IPv6 Development Project)

No matter how much individual evangelists push for the upgrade, getting the world to do so is pretty much an impossible task unless everyone sees that there is a distinct commercial and technical benefit for them to do so.

This is the core issue, and the benefits of upgrading to IPv6 have been seriously eroded by the advent of other standards efforts that address each of the IPv6 enhancements on a stand-alone basis. The two principal ones are NAT and MPLS.

Network address translation (NAT): To overcome the limitation in the number of available public addresses, NAT was implemented. This means that many users / computers in a private network are able to access the public Internet using a single public IP address. Each user is assigned a transient dynamic session address when they access the Internet, and the NAT software manages the translation between the public IP address and the dynamic address used within the private network.

NAT effectively addressed the concern that the Internet might run out of address space. It could be argued, though, that NAT is just a short-term solution that came at a big cost to users. The principal downside is that external parties are unable to set up long-term relationships with an individual user or computer behind a NAT wall, as that user has not been assigned their own unique public IP address, and the internally assigned dynamic IP addresses can change at any time.
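How a single public address is shared can be sketched with a toy port-mapping table; the class name, addresses and port range are all illustrative, and a real NAT (strictly, NAPT) would also track protocol, inbound traffic and mapping expiry:

```python
class Nat:
    """Toy NAPT: many private hosts share one public IP address,
    distinguished by the translated source port."""
    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.next_port = 40000
        self.table = {}  # (private_ip, private_port) -> public_port

    def outbound(self, private_ip: str, private_port: int):
        key = (private_ip, private_port)
        if key not in self.table:          # allocate a port on first use
            self.table[key] = self.next_port
            self.next_port += 1
        return (self.public_ip, self.table[key])

nat = Nat("203.0.113.7")
print(nat.outbound("192.168.0.10", 5060))  # → ('203.0.113.7', 40000)
print(nat.outbound("192.168.0.11", 5060))  # → ('203.0.113.7', 40001)
```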

This particularly affects applications that contain addresses so that traffic can always be sent to a specific individual or computer – VoIP is probably the main victim.

It’s interesting to note that the capability to uniquely identify individual computers was the main principle behind the development of IPv4, so it is quite easy to see why strong views are often expressed about NAT!

MPLS and related QoS standards: The advent of MPLS, covered in The rise and maturity of MPLS and MPLS and the limitations of the Internet, addressed many of the needs of the IP community concerning Quality of Service by separating high-priority service traffic from low-priority traffic.

Round up

Don’t break what works. IP networks take a considerable amount of skill and hard work to keep alive. They always seem to be ‘living on the edge’ and break down the moment a network administrator gets distracted. ‘Leave well alone’ is the mantra of many operational groups.

The benefits of upgrading to IPv6 have been considerably eroded by the advent of NAT and MPLS. Combine this with the lack of an overall management body that could force through a universal upgrade, and with the innate inertia of carriers and ISPs, and IPv6 will probably never achieve as dominant a position as its progenitor IPv4.

According to one overview of IPv6, which gets to the heart of the subject, “Although IPv6 is taking its sweet time to conquer the world, it’s now showing up in more and more places, so you may actually run into it one of these days.”

This is not to say that IPv6 is dead; rather, it is being marginalised by only being run in closed networks (albeit some rather large ones). There would be real benefit in the Internet being upgraded to IPv6, as every individual and every device connected to it could be assigned its own unique address, as envisioned by the founders of the Internet. The inability to do this severely constrains services and applications, which are not able to clearly identify an individual on an on-going basis in the way that is inherent in a telephone number. This clearly reflects badly on the Internet.

IPv6 is a victim of the success of the Internet and the ubiquity of IPv4, and will probably never replace IPv4 in the Internet in the foreseeable future (maybe I should never say never!). I was once asked by a Cisco Fellow how IPv6 could be rolled out; after shrugging my shoulders and laughing, I suggested that it needed a Bill Gates of the Internet to force through the change. That suggestion did not go down too well. Funnily enough, now that IPv6 is incorporated into Vista, we could see the day when this happens. The only fly in the ointment is that Vista has the same problem as IPv6 in replacing XP – users are finally tiring of never-ending upgrades with little practical benefit.

Interesting times.


sip, Sip, SIP – Gulp!

May 22, 2007

Session Initiation Protocol, or ‘SIP’ as it is known, has become a major signalling protocol in the IP world, as it lies at the heart of Voice over IP (VoIP). It’s a term you can hardly miss, as it is supported by every vendor of phones on the planet (Picture credit: Avaya: an Avaya SIP phone).

Many open software groups have taken SIP to the heart of their initiatives and an example of this is IP Multimedia Subsystem (IMS) which I recently touched upon in IP Multimedia Subsystem or bust!

SIP is a real-time IP application-layer protocol that sits alongside HTTP, FTP, RTP and other well-known protocols used to move data through the Internet. However, it is an extremely important one because it enables SIP devices to discover, negotiate, connect and establish communication sessions with other SIP-enabled devices.

SIP was co-authored in 1996 by Jonathan Rosenberg, who is now a Cisco Fellow; Henning Schulzrinne, Professor and Chair of the Dept. of Computer Science at Columbia University; and Mark Handley, Professor of Networked Systems at UCL. SIP then became the charge of an IETF SIP Working Group, which still maintains the RFC 3261 standard. SIP was originally used on the US experimental multicast network commonly known as the Mbone. This makes SIP an IT/IP standard rather than one developed by the communications industry.

Prior to SIP, voice signalling protocols such as SS7 (C7 in the UK) were essentially proprietary, aimed at use by the big telecommunications companies on their Public Switched Telephone Network (PSTN) voice networks. With the advent of the Internet and the ‘invention’ of Voice over IP, it soon became clear that a new signalling protocol was required – peer-to-peer, scalable, open, extensible, lightweight and simple in operation – that could be used by a whole new generation of real-time communications devices and services running over the Internet.

SIP itself is based on earlier IETF / Internet standards, principally the Hypertext Transfer Protocol (HTTP), which is the core protocol behind the World Wide Web.

Key features of SIP

The SIP signalling standard has many key features:

Communications device identification: SIP supports a concept known as the Address of Record (AOR), which represents a user’s unique address in the world of SIP communications. An example of an AOR is sip:xxx@yyy.com. To enable a user to have multiple communications devices or services, SIP uses a mechanism called the Uniform Resource Identifier (URI). A URI is like the Uniform Resource Locator (URL) used to identify servers on the World Wide Web. URIs can be used to specify the destination device of a real-time session, e.g.

  • IM: sip:xxx@yyy.com (Windows Messenger uses SIP)
  • Phone: sip:123412341234@yyy.com;user=phone
  • FAX: sip:123412341235@yyy.com;user=fax

A SIP URI can use both traditional PSTN numbering schemes AND alphabetic schemes as used on the Internet.
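To make the URI structure concrete, here is a minimal Python sketch that pulls the user part, the host and the optional user= parameter out of URIs like the examples above. It is deliberately incomplete – real RFC 3261 URIs carry many more parameters – so treat it as an illustration, not a conformant parser:

```python
import re

# Illustrative pattern: user@host with an optional ;user=phone|fax parameter.
SIP_URI = re.compile(
    r"^sip:(?P<user>[^@;]+)@(?P<host>[^;]+?)"
    r"(?:;user=(?P<kind>phone|fax))?$"
)

def parse_sip_uri(uri: str) -> dict:
    """Return the recognised components of a (simplified) SIP URI."""
    m = SIP_URI.match(uri.replace(" ", ""))
    if not m:
        raise ValueError(f"not a SIP URI: {uri!r}")
    return {k: v for k, v in m.groupdict().items() if v is not None}

print(parse_sip_uri("sip:xxx@yyy.com"))
print(parse_sip_uri("sip:123412341234@yyy.com;user=phone"))
```

The same address-of-record can thus resolve to an IM client, a phone or a fax machine depending on the trailing parameter – which is precisely what makes the URI scheme so flexible.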

Focussed function: SIP only manages the set-up and tear-down of real-time communication sessions; it does not manage the actual transport of media data. Other protocols undertake this task.

Presence support: SIP is used in a variety of applications but has found a strong home in applications such as VoIP and Instant Messaging (IM). What makes SIP interesting is that it is not only capable of setting up and tearing down real-time communications sessions but also supports and tracks a user’s availability through its presence capability. (The open Jabber standard, XMPP, provides similar presence features through its own protocol.) I wrote about presence in – The magic of ‘presence’.

Presence is supported through a key SIP extension: SIP for Instant Messaging and Presence Leveraging Extensions (SIMPLE) [a really contrived acronym!]. This allows users to state their status, as seen in most of the common IM systems. AOL Instant Messenger is shown in the picture on the left.

SIMPLE means that the concept of Presence can be used transparently on other communications devices such as mobile phones, SIP phones, email clients and PBX systems.

User preference: SIP user-preference functionality enables a user to control how a call is handled in accordance with their preferences. For example:

  • Time of day: A user can take all calls during office hours but direct them to a voice mail box in the evenings.
  • Buddy lists: Give priority to certain individuals according to a status associated with each contact in an address book.
  • Multi-device management: Determine which device / service is used to respond to a call from particular individuals.

PSTN mapping: SIP can manage the translation, or mapping, of conventional PSTN numbers to SIP URIs and vice versa. This capability allows SIP sessions to inter-work transparently with the PSTN. Initiatives such as ENUM provide the appropriate database capabilities. To quote ENUM’s home page:

“ENUM unifies traditional telephony and next-generation IP networks, and provides a critical framework for mapping and processing diverse network addresses. It transforms the telephone number—the most basic and commonly-used communications address—into a universal identifier that can be used across many different devices and applications (voice, fax, mobile, email, text messaging, location-based services and the Internet).”
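The core ENUM mapping itself is a simple, mechanical transformation (as defined in the ENUM RFCs): strip the E.164 number down to its digits, reverse them, separate them with dots, and hang the result under the e164.arpa DNS domain, where NAPTR records can then point at SIP URIs. The example number below is illustrative, not a real allocation:

```python
def e164_to_enum(number: str, suffix: str = "e164.arpa") -> str:
    """Map an E.164 phone number to its ENUM DNS domain:
    keep only the digits, reverse them, join with dots."""
    digits = [c for c in number if c.isdigit()]
    if not digits:
        raise ValueError(f"no digits in {number!r}")
    return ".".join(reversed(digits)) + "." + suffix

# The DNS zone for this domain would hold NAPTR records mapping
# the number onto, say, a SIP URI.
print(e164_to_enum("+44 20 7946 0123"))
# -> 3.2.1.0.6.4.9.7.0.2.4.4.e164.arpa
```

Because the result is an ordinary domain name, any resolver on the Internet can look it up – which is what makes the telephone number a “universal identifier” in ENUM’s sense.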

SIP trunking: SIP trunks enable enterprises to carry inter-site calls over a pure IP network. This could use an IP-VPN over an MPLS-based network with a guaranteed Quality of Service. Using SIP trunks can lead to significant cost savings compared with traditional E1 or T1 leased lines.

Inter-island communications: In a recent post, Islands of communication or isolation? I wrote about the challenges of communication between islands of standards or users. The adoption of SIP-based services could enable a degree of integration with other companies to extend the reach of what, to date, have been internal services.

Of course, the partner companies need to have adopted SIP as well and to have appropriate security measures in place. This is where the challenge would lie in achieving this level of open communications! (Picture credit: Zultys: a Wi-Fi SIP phone)

SIP servers

SIP servers provide the centralised capability that manages the establishment of communications sessions by users. Although there are many types of server, they are essentially just software processes and could all run on a single processor or device. There are several types of SIP server:

Registrar Server: The registrar server authenticates and registers users as soon as they come on-line. It stores identities and the list of devices in use by each user.

Location Server: The location server keeps track of users’ locations as they roam and provides this data to other SIP servers as required.

Redirect Server: When users are roaming, the Redirect Server maps session requests to a server closer to the user or an alternate device.

Proxy Server: SIP proxy servers forward SIP requests to servers located either downstream or upstream.

Presence Server: SIP presence servers enable users (presentities) to publish their status to other users who would like to see it (watchers).
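The registrar’s core job – binding an AOR to the set of devices currently registered for it – can be sketched in a few lines of Python. This is a toy in-memory model of the idea only; a real registrar adds authentication, registration expiry timers and much more:

```python
from collections import defaultdict

# Toy registrar binding table: AOR -> set of registered device contacts.
bindings = defaultdict(set)

def register(aor: str, contact: str) -> None:
    """Record that a device (contact address) is on-line for this AOR."""
    bindings[aor].add(contact)

def lookup(aor: str) -> set:
    """A proxy server would fork an incoming INVITE to each contact found."""
    return bindings.get(aor, set())

register("sip:xxx@yyy.com", "sip:xxx@192.0.2.10:5060")    # desk phone
register("sip:xxx@yyy.com", "sip:xxx@198.51.100.7:5060")  # softphone
print(len(lookup("sip:xxx@yyy.com")))  # two devices to try
```

Seen this way, the registrar, location and proxy servers are really three views of the same table: one writes bindings, one reads them, and one acts on them.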

Call setup Flow

The diagram below shows the initiation of a call from the PSTN (section A), connection (section B) and disconnection (section C). The flow is quite easy to understand. One of the downsides is that a complex session set-up can easily run to 40–50+ separate transactions, which could lead to unacceptable set-up times – especially if the SIP session is being negotiated across the best-effort Internet.

(Picture source: NMS Communications)
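Each of those transactions is a plain-text SIP message. As an illustration, the INVITE that opens section A looks roughly like the skeleton below; the header values are invented placeholders, not taken from a real trace:

```python
# Skeleton SIP INVITE request; RFC 3261 requires CRLF line endings
# and a blank line separating the headers from the (here empty) body.
invite = "\r\n".join([
    "INVITE sip:bob@yyy.com SIP/2.0",
    "Via: SIP/2.0/UDP client.xxx.com:5060;branch=z9hG4bKnashds8",
    "Max-Forwards: 70",
    "From: Alice <sip:alice@xxx.com>;tag=1928301774",
    "To: Bob <sip:bob@yyy.com>",
    "Call-ID: a84b4c76e66710@client.xxx.com",
    "CSeq: 1 INVITE",
    "Contact: <sip:alice@client.xxx.com>",
    "Content-Length: 0",
    "",  # end of headers
    "",
])
print(invite.split("\r\n")[0])  # request line: method, request-URI, version
```

Being human-readable (SIP’s inheritance from HTTP) makes the protocol easy to debug, but it also means every one of those 40–50 transactions carries a fair amount of text across the network.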

Round-up

As a standard, SIP has had a profound impact on our daily lives and sits comfortably alongside those other protocol acronyms that have fallen into the daily vernacular, such as IP, HTTP, www and TCP. Protocols that operate at the application level seem so much more relevant to our daily lives than those buried in the network, such as MPLS and ATM.

There is still much to achieve by building capability on top of SIP, such as federated services and, more importantly, interoperability. Bodies working on interoperability include SIPcenter, SIP Forum, SIPfoundry, SIPit and the IETF’s SPEERMINT working group. More fundamental areas under evaluation are authentication and billing.

More in-depth information about SIP can be found at http://www.tech-invite.com, a portal devoted to SIP and related technologies.

Next time you buy a SIP Wi-Fi phone from your local shop, install it, find that it works first time AND saves you money, just think about all the work that has gone into creating this software wonder. Sometimes standards and open software hit a home run. SIP is just that.

Addendum #1: Do you know your ENUM?


IP Multimedia Subsystem or bust!

May 10, 2007

I have never felt so uncomfortable about writing about a subject as I am now while contemplating IP Multimedia Subsystem (IMS). Why this should be I’m not quite sure.

Maybe it’s because one of the thoughts it triggers is the subject of Intelligent Networks (IN) that I wrote about many years ago – The Magic of Intelligent Networks. I wrote at the time:

“Looking at Intelligent Networks from an Information Technology (IT) perspective can simplify the understanding of IN concepts. Telecommunications standards bodies such as CCITT and ETSI have created a lot of acronyms which can sometimes obfuscate what in reality is straightforward.”

This was an initiative to bring computers and software to the world’s voice switches, enabling carriers to develop advanced consumer services on their voice switches and SS7 signalling networks. To quote an old article:

“Because IN systems can interface seamlessly between the worlds of information technology and telecommunications equipment, they open the door to a wide range of new, value added services which can be sold as add-ons to basic voice service. Many operators are already offering a wide range of IN-based services such as non-geographic numbers (for example, freephone services) and switch-based features like call barring, call forwarding, caller ID, and complex call re-routing that redirects calls to user-defined locations.”

Now there was absolutely nothing wrong with that vision, and the core technology was relatively straightforward (database look-up number translation). The problem in my eyes was that it was presented as a grand take-over-the-world strategy and a be-all-and-end-all vision when in reality it was a relatively simple idea. I wouldn’t say IN died a death; it just fizzled out. It didn’t really disappear as such, as most IN-related concepts became reality over time as computing and telephony started to merge. I would say it morphed into IP telephony.

Moreover, what lay at the heart of IN was the view that intelligence should be based in the network, not in applications or customer equipment. The argument about dumb networks versus intelligent networks goes right back to the early 1990s and is still raging today – or at least simmering.

Put bluntly, carriers laudably want intelligence based in the network so that they are able to provide, manage and control applications and derive revenue to compensate for plummeting Plain Old Telephone Service (POTS) revenues. Most IT and Internet people do not share this vision, as they believe it holds back service innovation, which generally comes from small companies. There is a certain amount of truth in this view, as there are clear examples of it happening today in both the fixed and mobile industries.

Maybe I feel uncomfortable with the concept of IMS as it looks like the grandchild of IN. It certainly seems to suffer from the same strengths and weaknesses that affected its progenitor. Or, maybe it’s because I do not understand it well enough?

What is IP Multimedia Subsystem (IMS)?

IMS is an architectural framework, or reference architecture – not a standard – that provides a common method for IP multiple-media (I prefer this term to multimedia) services to be delivered over existing terrestrial or wireless networks. In the IT world – and the communications world, come to that – a good part of this activity could be encompassed by the term middleware. Middleware is an interface (abstraction) layer that sits between networks and applications / services and provides a common Application Programming Interface (API).

The commercial justification of IMS is to enable the development of advanced multimedia applications whose revenue would compensate for dropping telephony revenues and reduce customer churn.

The technical vision of IMS is about delivering seamless services, where customers are able to access any type of service, from any device they want to use, with single sign-on, common contacts and fluidity between wireline and wireless services. IMS has ambitions of delivering:

  • Common user interfaces for any service
  • Open application server architecture to enable a ‘rich’ service set
  • Separate user data from services for cross service access
  • Standardised session control
  • Inherent service mobility
  • Network independence
  • Inter-working with legacy IN applications

One of the comments I came across on the Internet from a major telecomms equipment vendor was that IMS was about the “Need to create better end-user experience than free-riding Skype, Ebay, Vonage, etc.”. This, in my opinion, is an ambition too far as innovative services such as those mentioned generally do not come out of the carrier world.

Traditionally, each application or service offered by carriers sits alone in its own silo, calling on all the resources it needs, using proprietary signalling protocols, and running in complete isolation from other services. In many ways this reflects the same situation that motivated the development of a common control plane for data services called GMPLS. Vertical service silos will be replaced with horizontal service, control and transport layers.


Removal of service silos
Source: Business Communications Review, May 2006

As with GMPLS, most large equipment vendors are committed to IMS and supply IMS compliant products. As stated in the above article:

“Many vendors and carriers now tout IMS as the single most significant technology change of the decade… IMS promises to accelerate convergence in many dimensions (technical, business-model, vendor and access network) and make ‘anything over IP and IP over everything’ a reality.”

Maybe a more realistic view is that IMS is just an upgrade to the softswitch VoIP architecture outlined in the 90s – albeit a trifle more complex. This is the view of Bob Bellman in an article entitled From Softswitching To IMS: Are We There Yet? Many of the core elements of a softswitch architecture are to be found in the IMS architecture, including the separation of the control and data planes.

VoIP SoftSwitch Architecture
Source: Business Communications Review, April 2006

Another associated reference architecture that is aligned with IMS, and is being popularly pushed by software and equipment vendors in the enterprise world, is Service Oriented Architecture (SOA), an architecture that focuses on services as the core design principle.

IMS has been developed by an industry consortium and originated in the mobile world in an attempt to define an infrastructure that could be used to standardise the delivery of new UMTS or 3G services. The original work was driven by 3GPP and subsequently extended by 3GPP2 and TISPAN. Nowadays, just about every standards body seems to be involved, including the Open Mobile Alliance, ANSI, the ITU, the IETF, the Parlay Group and the Liberty Alliance – fourteen in total.

Like all new initiatives, IMS has developed its own mega-set of T/F/FLAs (three-, four- and five-letter acronyms), which makes getting to grips with the architectural elements hard going without a glossary. I won’t go into this much here as there are much better Internet resources available. The reference architecture focuses on a three-layer model:

#1 Applications layer:

The application layer contains Application Servers (AS), which host each individual service. Each AS communicates with the control plane using the Session Initiation Protocol (SIP). Like GSM, an AS can interrogate a database of users to check authorisation. The database is called the Home Subscriber Server (HSS) – or an HSS in a 3rd-party network if the user is roaming (in GSM this is called the Home Location Register, HLR).

(Source: Lucent Technologies)

The application layer also contains Media Servers for storing and playing announcements and other generic applications not delivered by individual ASs, such as media conversion.

Breakout Gateways provide routing information based on telephone-number look-ups for services accessing a PSTN. This is similar functionality to that found in the IN systems discussed earlier.

PSTN gateways are used to interface to PSTN networks and include signalling and media gateways.

#2 Control layer:

The control plane hosts the HSS, which is the master database of user identities and of the individual calls or service sessions currently in use by each user. There are several roles that a SIP call / session controller can undertake:

  • P-CSCF (Proxy-CSCF): provides similar functionality to a proxy server in an intranet
  • S-CSCF (Serving-CSCF): the core SIP server, always located in the home network
  • I-CSCF (Interrogating-CSCF): a SIP server located at a network’s edge, whose address can be found in DNS by 3rd-party SIP servers.

#3 Transport layer:

IMS encompasses any service that uses IP / MPLS as transport, and pretty much all of the fixed and mobile access technologies, including ADSL, cable-modem DOCSIS, Ethernet, Wi-Fi, WiMAX and CDMA wireless. It has little choice in this matter: if IMS is to be used, it needs to incorporate all of the currently deployed access technologies. Interestingly, as we saw in the DOCSIS post – The tale of DOCSIS and cable operators – IMS is also focusing on the use of IPv6, with IPv4 ‘only’ being supported in the near term.

Roundup

IMS represents a tremendous amount of work spread over six years and uses as many existing standards as possible such as SIP and Parlay. IMS is work in progress and much still needs to be done – security and seamless inter-working of services are but two.

All the major telecommunications software, middleware and integration companies are involved, and just thinking about the scale of the task needed to put common control in place for a whole raft of services makes me wonder how practical the implementation of IMS actually is. Don’t get me wrong, I am a real supporter of these initiatives, because it is hard to come up with an alternative vision that makes sense, but boy, I’m glad I’m not in charge of a carrier IMS project!

The upsides of using IMS in the long term are pretty clear and centre on lower costs, quicker time to market, integration of services and, hopefully, single log-in.

It’s some of the downsides that particularly concern me:

  • Non-migration of existing services: As we saw in the early days of 3G, there are many services that would need to come under the umbrella of an IMS infrastructure, such as instant conferencing, messaging, gaming, personal information management, presence, location-based services, IP Centrex, voice self-service, IPTV, VoIP and many more. But, in reality, how do you commercially justify migrating existing services in the short term onto a brand-new infrastructure – especially when that infrastructure is based on an incomplete reference architecture?

    IMS is a long-term project that will be redefined many times as technology changes over the years. It is clearly an architecture that represents a vision for the future and can be used to guide and converge new developments, but it will be many years before carriers are running seamless IMS-based services – if they ever will.

  • Single vendor lock-in: As with all complicated software systems, most IMS implementations will be dominated by a single equipment supplier or integrator. “Because vendors won’t cut up the IMS architecture the same way, multi-vendor solutions won’t happen. Moreover, that single supplier is likely to be an incumbent vendor.” This was a quote from Keith Nissen of In-Stat in a BCR article.
  • No launch delays: No product manager would delay the launch of a new service on the promise of jam tomorrow. While the IMS architecture remains incomplete, services will continue to be rolled out without IMS, further inflaming the non-migration issue raised above.
  • Too ambitious: Is the vision of IMS just too ambitious? Integration of nearly every aspect of service delivery will be a challenge and a half for any carrier to undertake. It could be argued that while IT staff are internally focused getting IMS integration sorted they should be working on externally focused services. Without these services, customers will churn no matter how elegant a carrier’s internal architecture may be. Is IMS, Intelligent Networks reborn to suffer the same fate?
  • OSS integration: Any IMS system will need to integrate with carriers’ often proprietary OSS systems. This compounds the challenge of implementing even a limited IMS trial.
  • Source of innovation: It is often said that carriers are not the breeding ground of new, innovative services. That role lies with the small companies on the Internet creating Web 2.0 services that today utilise technologies such as presence, VoIP and AJAX. Will any of these companies care whether a carrier has an IMS infrastructure in place?
  • Closed shops – another walled garden?: How easy will it be for external companies to come up with a good idea for a new service and be able to integrate with a particular carrier’s semi-proprietary IMS infrastructure?
  • Money sink: Large integration projects like IMS often develop a life of their own once started and can often absorb vast amounts of money that could be better spent elsewhere.

I said at the beginning of the post that I felt uncomfortable writing about IMS, and now that I have finished I am even more uncomfortable. I like the vision – how could I not? It’s just that I have to question how useful it will be at the end of the day, and whether it diverts effort, money and limited resources away from where they should be applied – creating interesting services and gaining market share. Only time will tell.

Addendum: In a previous post I wrote about the IETF’s Path Computation Element working group, and it was interesting to come across a discussion of IMS’s Resource and Admission Control Function (RACF), which seems to define a ‘similar’ function. The RACF includes a policy-decision capability and a transport-resource-control capability. A discussion can be found here, starting at slide 10. Does RACF compete with PCE, or could PCE be a part of RACF?

