Content transcoding hits mobiles

October 18, 2007

Content adaptation and transcoding are high on the agenda of many small mobile content or services companies at the moment and are causing more bad language and angst than anything else I can remember in the industry in recent times. Before I delve into that issue, what is content adaptation?

The need for content translation on the Internet is as old as the invention of the browser and is caused by standards, or I should say by the interpretation of them. Although HTML, the language of the web page, transformed the nature of the Internet by enabling anyone to publish and access information through the World Wide Web, there were many areas of the specification that left a sufficient degree of fogginess for browser developers to ‘fill in’ with their own interpretation of how content should be displayed.

In the early days, most of us engaged with the WWW through the Netscape Navigator browser. Indeed, Netscape epitomised all the early enthusiasm for the Internet and their IPO on August 9, 1995 set in motion the fabulously exciting ‘bubble’ of the late 1990s. The Netscape browser held over 90% market share in the years following the IPO.

This inherent market monopoly made it very easy for early web page developers to develop content as it only needed to run on one browser. However, that did not make life particularly easy because the Netscape Navigator browser had so many problems in how it arbitrarily interpreted HTML standards. In practice, a browser is only an interpreter after all and, like human interpreters, is prone to misinterpretation when there are gaps in the standards.

Browser market shares. Source: Wikipedia

Content Adaptation

Sometimes the drafted HTML displayed fine in Navigator but at other times it didn’t. This led to whole swathes of work-arounds that made the task of developing interesting content a rather hit-and-miss affair. A good example of this is the HTML standard that says that the TABLE tag should support a CELLSPACING attribute to define the space between parts of the table. But the standard doesn’t define a default value for that attribute, so unless you explicitly define CELLSPACING when building your page, two browsers may use different amounts of white space in your table.
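
The only safe option for the page author was to set the attribute explicitly rather than trust the browser’s default. Below is a minimal sketch of doing just that from script; the table id “products” and the spacing value are illustrative assumptions of mine, not taken from any particular site.

// Illustrative only: set CELLSPACING explicitly so no browser falls back on its own default.
var table = document.getElementById("products"); // "products" is a hypothetical table id
if (table)
{
 table.cellSpacing = "2"; // every browser now leaves the same 2-pixel gap between cells
}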

(Credit: NetMechanic) This type of problem was further complicated by the adoption of browser-specific extensions. The original HTML specifications were rather basic and it was quite easy to envision and implement extensions that enabled better presentation of content. Netscape did this with abandon and even invented a web page scripting language that is universal today – JavaScript (this has nothing to do with Sun’s Java language).

Early JavaScript was riddled with problems and, from my limited experience of writing in the language, most of the time was spent trying to understand why code that looked correct according to the rule book failed to work in practice!

Around this time I remember attending a Microsoft presentation in Reston where Bill Gates spent an hour talking about why Microsoft were not in favour of the Internet and why they were not going to create a browser themselves. Oh how times change: within a year Bill Gates announced that the whole company was going to focus on the Internet and that their browser would be given away free to “kill Netscape”.

In fact, I personally lauded Internet Explorer when it hit the market because, in my opinion, it actually worked very well. It was faster than Navigator but, more importantly, when you wrote the HTML or JavaScript the code worked as you expected it to. This made life so much easier. The problem was that you now had to write pages that would run on both browsers or you risked alienating a significant sector of your users. As is still the case today, there were many users who flatly refused to change from Navigator to IE because of their emotional dislike of Microsoft.

From that point on it was downhill for a decade as you had to include browser detection on your web site so that appropriately coded browser-specific, and even worse version-specific, content could be sent to users. Without this, it was just not possible to guarantee that users would be able to see your content. Below is the typical code you had to use:

// navigator.appName reports which browser is running the page
var browserName=navigator.appName;
if (browserName=="Netscape")
{
 alert("Hi Netscape User!");
}
else
{
 if (browserName=="Microsoft Internet Explorer")
 {
  alert("Hi, Explorer User!");
 }
}

If we now fast forward to 2007, the world of browsers has changed tremendously but the problem has not gone away. Although it is less common to detect browser types and send browser-specific code, considerable problems still exist in making content display in the same way on all browsers. I can say from practical experience that making an HTML page with extensive style sheets display correctly on Firefox, IE 6 and IE 7 is not a particularly easy task, and it is definitely a frustrating one!

The need to adapt content to a particular browser was the first example of what is now called content adaptation. Another technology in this space is called content transcoding.

Content transcoding

I first came across true content transcoding when I was working with the first real implementation of a Video on Demand service in Hong Kong Telecom in the mid 1990s. This was based on proprietary technology and a colleague and I were of the opinion that it should be based on IP technologies to be future-proof. Although we lost that battle we did manage to get Mercury in the UK to base its VoD developments on IP. Mercury went on to sell its consumer assets to NTL so I’m pleased that the two of us managed to get IP as the basis of broadband video services in the UK at the time.

Around this time, Netscape were keen to move Navigator into the consumer market but it was too bloated to be able to run on a set top box, so Netscape created a new division called Navio which developed a cut-down browser for the set top box consumer market. Their main aim, however, was to create a range of non-PC Internet access platforms.

This was all part of the anti-PC / Microsoft community that then existed (exists?) in Silicon Valley. Navio morphed into Network Computer Inc. owned by Oracle and went on to build another icon of the time – the network computer. NCI changed its name to Liberate when it IPOed in 1999. Sadly, Liberate went into receivership in the early 2000s but lives on today in the form of SeaChange who bought their assets.

Anyway, sorry for the sidetrack, but it was through Navio that I first came across the need to transcode content as a normal web page just looked awful on a TV set. The TV Navigator browser also transcoded HTML seamlessly into MPEG. The main problems in presenting a web page on a TV were:

Fonts: Text that could be read easily on a PC could often not be read on a TV because the font size was too small or the font was too complex. So, fonts were increased in size and simplified.

Images: Another issue was that the small amount of memory on an STB meant that the browser needed to be cut down in size to run. One way of achieving this was to cut down the number of content types that could be supported. For example, instead of the browser being able to display all picture formats e.g. BMP, GIF, JPG etc it would only render JPG pictures. This meant that pictures taken off the web needed to be converted to JPG at the server or head-end before being sent to the STB.

Rendering and resizing: Liberate automatically resized content to fit on the television screen.

Correcting content: For example, horizontal scrolling is not considered a ‘TV-like’ property, so content was scaled to fit the horizontal screen dimensions. If more space was needed, vertical scrolling was enabled to allow the viewer to navigate the page. The transcoder would also automatically wrap text that extended outside a given frame’s area. In the case of tables, the transcoder would ignore widths specified in HTML if the cell or the table was too wide to fit within the screen dimensions.
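
To give a feel for the sort of per-element rules a transcoder applied, here is a small illustrative sketch; the element fields, formats and thresholds are my own invention rather than Liberate’s actual rules.

// Illustrative sketch of per-element transcoding rules for a TV browser.
function transcodeForTv(element, screenWidth)
{
 if (element.type == "image" && element.format != "JPG")
 {
  element.format = "JPG"; // the cut-down STB browser only renders JPG
 }
 if (element.type == "text" && element.fontSize < 18)
 {
  element.fontSize = 18; // enlarge and simplify small fonts for TV viewing
 }
 if (element.type == "table" && element.width > screenWidth)
 {
  element.width = screenWidth; // ignore widths too wide for the screen
 }
 return element;
}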

In practice, most VoD or IPTV services only offered closed walled-garden services at the time, so most of the content was specifically developed for an operator’s VoD service.

WAP and the ‘Mobile Internet’ comes along

Content adaptation and transcoding trundled along quite happily in the background as a requirement for displaying content on non-PC platforms for many years until 2007 and the belated advent of open internet access on mobile or cell phones.

In the late 1990s the world was agog with the Internet, which was accessed using personal computers via LANs or dial-up modems. There was clearly an opportunity to bring the ‘Internet’ to the mobile or cell phone. I have put quotation marks around ‘Internet’ as the mobile industry has never seen the Internet in the same light as PC users.

The WAP initiative was aimed at achieving this goal and at least it can be credited with a concept that lives on to this day – the Mobile Internet (WAP, GPRS, HSDPA on the move!). Data facilities on mobile phones were really quite crude at the time. Displays were monochrome with a very limited resolution. Moreover, the data rates that were achievable over the air were really very low, so WAP content standards had to take this into account.

WAP was in essence simplified HTML and if a content provider wanted to create a service that could be accessed from a mobile phone then they needed to write it in WAP. Services were very simple, as shown in the picture above, and could quite easily be navigated using a thumb.

The main point was that it was quite natural for developers to specifically create a web site that could be easily used on a mobile phone. Content adaptation took place in the authoring itself and there was no need for automated transcoding of content. If you accessed a WAP site, it may have been a little slow because of the reliance on GPRS, but services were quite easy and intuitive to use. WAP was extremely basic so it was updated to XHTML, which provided improved look and feel features that could be displayed on the quickly improving mobile phones.

In 2007 we are beginning to see phones with full-capability browsers, backed up by broadband 3G bearers, making Internet access a reality on phones today. Now you may think this is just great, but in practice phones are not PCs by a long chalk. Specifically, we are back to browsers interpreting pages differently and, more importantly, the screen sizes on mobile phones are too small to display standard web pages in a way that allows a user to navigate them with ease (things are changing quite rapidly with Apple’s iPhone technology).

Today, as in the early days of WAP, most companies who seriously offer mobile phone content will create a site specifically developed for mobile phone users. Often these sites will have URLs such as m.xxxx.com or xxxx.mobi so that a user can tell that the site is intended for use on a mobile phone.

Although there was a lot of frustration about phones’ capabilities, everything at the mobile phone party was generally OK.

Mobile phone operators have been under a lot of criticism for as long as anyone can remember about their lack of understanding of the Internet and their focus on providing closed walled-garden services, but that seems to be changing at long last. They have recognised that their phones are now capable of being a reasonable platform for accessing the WWW. They have also opened their eyes and realised that there is real revenue to be derived from allowing their users to access the web – albeit in a controlled manner.

When they opened their browsers to the WWW, they realised that this was not without its challenges. In particular, there are very few web sites that have developed versions that can be browsed on a mobile phone. Even more challenging is that the mobile phone content industry can be called embryonic at best, with few service providers that are well known. Customers naturally wanted to use the web services and visit the web sites that they use on their PCs. Of course, most of these look dreadful on a mobile phone and cannot be used in practice. Although many of the bigger companies are now beginning to adapt their sites to the mobile, Google and MySpace to name but two, 99.9999% (as many 9s as you wish) of sites are designed for a PC only.

This has made mobile phone operators turn to using content transcoding to keep their users using their data services and hence keep their revenues growing. The transcoder is placed in the network and intercepts users’ traffic. If a web page needs to be modified so that it will display ‘correctly’ on a particular mobile phone, the transcoder will automatically change the web page’s content to a layout that it thinks will display correctly. Two of the largest transcoding companies in this space are Openwave and Novarra.

This issue came to the fore recently (September 2007) in a post by Luca Passani on learning that Vodafone had implemented content transcoding by intercepting and modifying the User Agent dialogue that takes place between mobile phone browsers and web sites. From Luca’s page, this dialogue is along the lines of:

  • I am a Nokia 6288,
  • I can run Java apps MIDP2-CDLC 1,
  • I support MP3 ringtones
  • …and so on

His concern, quite rightly, is that this is a standard dialogue that goes on across the whole of the WWW and that enables a web site to adapt and provide appropriate content to the device requesting it. Without it, content providers are unable to ensure that their users will get a consistent experience no matter what phone they are using. Incidentally, Luca provides an open-source XML file called WURFL that contains the capability profile of most mobile phones. This is used by content providers, following a user agent dialogue, to ensure that the content they send to a phone will run – it contains the core information needed to enable content adaptation.
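
To make the point concrete, here is a minimal sketch of the kind of server-side check a content provider might make; the function, the capability table and the redirect targets are illustrative assumptions of mine rather than anything Vodafone, WURFL or any real site actually uses.

// Illustrative sketch: adapt content based on the User-Agent string the phone sends.
// The capability table below is a tiny stand-in for a real device database such as WURFL.
var deviceCapabilities = {
 "Nokia6288": { maxWidth: 240, supportsMp3: true }
};
function chooseContent(userAgent)
{
 for (var device in deviceCapabilities)
 {
  if (userAgent.indexOf(device) != -1)
  {
   // A known phone: serve the mobile-specific site, sized to its screen.
   return { site: "http://m.xxxx.com", width: deviceCapabilities[device].maxWidth };
  }
 }
 // Unknown device: fall back to the standard PC site.
 return { site: "http://www.xxxx.com", width: null };
}

If a transcoder in the network replaces or rewrites that User-Agent header before it reaches the site, the look-up above fails and the content provider can no longer tell which device it is serving – which is exactly the objection being raised.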

It is conjectured that, if every mobile operator in the world uses transcoders – and it looks like this is going to be the case – then this will add another layer of confusion to the already high challenge of providing content to mobile phones. Not only will content providers have to understand the capabilities of each phone, they will also need to understand when and how each operator uses transcoding.

Personally I am against transcoding in this market and the reason why can be seen in this excellent posting by Nigel Choi and Luca Passani. In most cases, no automatic transcoding of a standard WWW web page can be better than a dedicated page written specifically for a mobile phone. Yes, there is a benefit for mobile operators in that no matter what page a user selects, something will always be displayed. But will that page be usable?

Of course, transcoders should pass through untouched any web site that is tagged with an m.xxxx or xxxx.mobi URL, as that site should be capable of working on any mobile phone, but in these early days of transcoding implementation this does not always seem to be happening.

Moreover, the mobile operators say that this situation can be avoided by 3rd party content providers applying to be on the operators’ white list of approved services. If this turns out to be a universal practice then content providers would need to gain approval and get on the lists of every mobile operator in the world – wow! Imagine an equivalent situation on the PC if content providers needed to get approval from all ISPs. Well, you can’t, can you?

This move represents another aspect of how the control culture of the mobile phone industry comes to the fore in placing their needs before those of 3rd party content providers. This can only damage the 3rd party mobile content and service industry and further hold back the coming of an effective mobile internet. A sad day indeed. Surely, it would be better to play a long game and encourage web sites to create mobile versions of their services?


EBay paid too much for Skype

October 2, 2007

I don’t normally post news, but I couldn’t resist posting this as it is so close to my heart. Ever since the deal was done everyone has been asking whether it was worth what they paid.

The article was in the London Evening Standard today.

ONLINE auctioneer eBay today admitted it had paid too much for internet telephone service Skype in 2005.

EBay, which forked out $2.6 billion (£1.3 billion), will now take a $1.4 billion charge on the company as it fails to convert users into revenue.

Skype’s chief executive Niklas Zennström, one of Skype’s founders, will step down, but the company denies he is walking the plank.

EBay will pay some investors $530 million to settle future obligations under the disastrous Skype deal.

In a desperate bid to get the deal over the line in 2005, eBay promised an extra $1.7 billion to Skype investors if the unit met certain targets including number of users.

Now it is offering those shareholders $530 million as “an early, one-time payout”. The parent company will write down $900 million in the value of Skype.

Since eBay took over, Skype’s membership accounts have risen past 220 million, but it earned just $90 million during the second quarter of 2007, far below projections.

I wonder if this will cool some of the outrageous values being put on some of the social network services?


Do you know your ENUM?

September 24, 2007

Isn’t it funny how a new concept is often universally derided as nonsensical? There are many examples of this but none more so than Voice over IP (VoIP) (I mean Internet Protocol not Intellectual Property).

But just look at how universal VoIP has become over the last fifteen years despite all the early knocking and mumblings that it would not, could not, ever work. When I first started talking about VoIP in the mid 1990s, after a visit to Vocaltec in Israel, I was even banned from a particular country as my views were considered seditious. Looking at the markets of 2007, I guess they may have been right! However, trying to hold back the inevitable is never a good reaction to a possibly disruptive technology, though this is still occurring on a wide scale in today’s telecommunications world. [Picture credit: Enum.at]

Earlier this year I wrote about the challenges of what I called islands of isolation in a posting entitled Islands of communication or isolation?. I consider this to be one of the main challenges any new communications technology or service needs to face up to if it is going to achieve world-wide penetration. Sometimes just an accepted standard can tip a new technology into global acclaim. Good examples of this are Wi-Fi and ADSL. Because of the nature of these technologies, equipment based on these standards can be used even by a single individual, so a market can be grown from a small installed base, reinforced by a multiplicity of vendors jumping on the bandwagon once they think the market is big enough.

However, many communication technologies or services require something more before they can become truly ubiquitous and VoIP is just one of those services. Of course many of these additional needs can be successfully bypassed by ‘putting up the proverbial finger’ to the existing approach and developing completely stand-alone services based on proprietary technologies, as so successfully demonstrated by Skype in the VoIP world. The reason Skype became so successful at such an early stage was that the service was run independently of the existing circuit-switched Public Switched Telephone Network (PSTN). This was quite a deliberate and wholly successful strategy. What was the issue that Skype was trying to circumvent (putting their views of the perceived monopolistic characteristics of the telco industry to one side)? Telephone numbers.

Numbering was the one important feature that made the traditional telephone industry so successful. Unfortunately, it is also the lack of this one feature that has held back the rollout of VoIP services more than any other. The issue is that every user of a traditional telephone had their own unique telephone number (backed up by agreed standards drafted by the ITU). As long as you knew an individual’s number you could call them wherever they were located. In the case of VoIP, you may not be able to find out their address if they use a different VoIP operator to yourself, leading to multiple islands of VoIP users who are unable to directly communicate with each other.

If the user chooses to use a VoIP-based telephone service they still expect to be able to talk to anyone no matter what service provider they have chosen to use, whether that be another user of the VoIP service or a colleague not using VoIP but an ordinary telephone.

One of the key issues cluttering the path to achieving this is that VoIP runs on an IP network that uses a completely different way of identifying users than traditional PSTN or mobile networks. IP networks use IP addresses as dictated by the IPv4 standard (IPv6 to the rescue – eh?) while public telephone networks use the E.164 standard as maintained by the ITU in Geneva. So if a VoIP user wants to make a call to an individual’s desk or mobile phone, or vice versa, a cross-network directory look-up is needed before a physical connection can be made.

This is where the concept of Telephone Number Mapping (ENUM) comes into its own as one of the key elements required to achieve the vision of converged VoIP and PSTN services. The key goal of ENUM is to make calls between the two worlds of VoIP and PSTN as easy as calls between PSTN users. This must be achieved if VoIP services are to become truly ubiquitous.

In reality no individual really cares whether a call is being completed on a VoIP network or not as long as the quality is adequate. They certainly do care about the cost of a call and this turned out to be one of the main drivers behind the rise of VoIP services, as they are used to bypass the traditional financial settlement regimes that exist in the PSTN world (Revector, detecting the dark side of VoIP).

How does ENUM work?

There are three aspects that need to be considered:

  1. How an individual is identified on the IP network or Internet (an IP network can be a closed IP network used by a carrier where a guaranteed quality of service is implemented, unlike the Internet).
  2. How the individual is identified on the PSTN network segment from an addressing or telephone number basis.
  3. How these two segments inter-work.

The IP network segment: We are all familiar with the concept of a URL or Uniform Resource Locator that is used to identify a web site. For example, the URL of this blog is http://technologyinside.com. In fact a URL is a subset of a Uniform Resource Identifier (URI), along with a Uniform Resource Name (URN). A URL refers to the domain, e.g. a company name, while a URI operates at a finer granularity and can identify an individual within that company, much like an email address. For VoIP calls, as an individual is the recipient of a call rather than the company, URIs are used as the address. The same concept is used with SIP services as explained in sip, Sip, SIP – Gulp! The IETF standard that talks about E.164 and DNS mapping is RFC 2916.

URIs can be used to specify the destination device of a real-time session e.g.

  1. IM: sip: xxx@yyy.com (Windows Messenger uses SIP)
  2. Phone: sip: 1234 1234 1234@yyy.com; user=phone
  3. FAX: sip: 1234 1234 1235@yyy.com; user=fax

On the PSTN segment: A user is identified by their E.164 telephone number used by both fixed and mobile / cell phones. I guess there is no need to explain the format of these as they are an example of an ITU standard that is truly global!

Mapping of the IP and PSTN worlds:

There are two types of VoIP call: those that are carried end-to-end on an IP network, and those that start on a VoIP network but end on a PSTN network or vice versa. For the second type of call, mapping is required.

Mapping between the two worlds is in essence managed by an on-line directory that can be accessed by either party – the VoIP operator wishing to complete a call on a traditional telephone or a PSTN operator wishing to complete a call on a VoIP network. These directories are maintained by ENUM registrars. Individual user records therefore contain both the E.164 number AND the VoIP identifier for an individual.

The registrar’s function is to manage both the database and the security issues surrounding the maintenance of a public database, i.e. only the individual or company (in the case of private dial plans) concerned with a record is able to change its contents.

The translation procedure: When a call between a VoIP user and a PSTN user is initiated, four steps are involved. Of course, the user must be ENUM-enabled by having an ENUM record with an ENUM registrar.

  1. The VoIP user’s software, or their company’s PBX, i.e. their User Agent, translates the E.164 number into ENUM format as described in RFC 3761. To convert an E.164 number to an ENUM domain the following steps are required (a worked example in code follows this list):
    1. +44 1050 56416 (The E.164 telephone number)
    2. 44105056416 (Removal of all characters except numbers)
    3. 61465050144 (Reversal of the number order)
    4. 6.1.4.6.5.0.5.0.1.4.4 (Insertion of dots between the numbers)
    5. 6.1.4.6.5.0.5.0.1.4.4.e164.arpa (Adding the global ENUM domain)
  2. A request is sent to the Domain Name System (DNS) to look up the ENUM domain requested.
  3. A query in the format specified by RFC 3403 (NAPTR records) is sent to the ENUM registrar’s domain, which returns either the PSTN number or the SIP URI of the called party – whichever is requested.
  4. The call is now initiated and completed.
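
As a worked sketch of step 1, here is the conversion expressed in code; the function name is my own, and a real implementation would of course go on to issue the DNS NAPTR query described in steps 2 and 3.

// Illustrative sketch: convert an E.164 number into its ENUM domain (RFC 3761).
function e164ToEnumDomain(e164Number)
{
 var digits = e164Number.replace(/[^0-9]/g, "");      // 1. strip everything except numbers
 var reversed = digits.split("").reverse().join("");  // 2. reverse the digit order
 var dotted = reversed.split("").join(".");           // 3. insert dots between the digits
 return dotted + ".e164.arpa";                        // 4. add the global ENUM domain
}
e164ToEnumDomain("+44 1050 56416"); // returns "6.1.4.6.5.0.5.0.1.4.4.e164.arpa"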

For this process to work universally, every user that uses both VoIP and PSTN services needs to have an ENUM record. That is a problem today as this is just not the case.

ENUM Registrars

In a number of countries, top-level public ENUM registrars have been set up, driven by the ITU. For example, this is the ENUM registrar in Austria – http://www.enum.at. It holds the DNS pointers to the other ENUM registrars in Austria. Another example is Ireland’s ENUM registry.

However, in the USA, ENUM services are in the hands of private registrars.

If you sign up for a VoIP service that provides you with an E.164 telephone number, your VoIP provider will act as a registrar and hence your details will be automatically registered for look-up through a DNS call. If you do not use one of these services, it is possible to register yourself with an independent registrar.

Local Number Portability (LNP)

During the early days of VoIP services, many ENUM registrars were operated by 3rd party clearing houses acting on a federated basis, who were quick to jump on an unaddressed need. Of course, these registrars charge for look-up services. Other third party companies provide “trusted and neutral” number database services, such as Neustar, e164 and Nominum in the USA, who not only offer ENUM services but also Local Number Portability services. To quote Neustar:

“LNP is the ability of a phone service customer in North America to retain their local phone number and access to advanced calling features when they switch their local phone service to another local service provider. LNP helps ensure successful local telephone competition, since without LNP, subscribers might be unwilling to switch service providers.”

However, as we start to see more and more VoIP service providers, and more and more traditional voice carriers offering VoIP services to their customers, we will see more carriers offering ENUM numbering capabilities. Moreover, they could also use ENUM technology to help reduce the cost of supporting Local Number Portability by managing the translation / mapping databases themselves rather than paying a 3rd party for the capability. To quote an article in Telephony Online:

Not all service providers are rushing to do their own ENUM implementations, said Lynda Starr, a senior analyst with Frost & Sullivan who specializes in IP communications. “Some say it’s not worth doing yet because VoIP traffic is still small.” Eventually, however, Starr estimates that service providers could save about 20% of the cost of a call by implementing ENUM – even more if they exchange traffic with one another as peers.

An ITU committee is being planned to look at service-provider-hosted ENUM databases, but the view is that it will be slow to deliver results, as is usually the case with ITU standards.

Round up

If every PSTN network had an ENUM-compliant gateway and database, then truly converged voice services could be created and users’ preferences concerning which device they would like to take calls on could be accommodated. Today, as far as I am aware, even the neutral 3rd party ENUM registrars do not currently share their records with other parties, further exacerbating the numbering islands issue. This means you need to know which registrar to go to before a call can be set up.

It is early days yet but we will undoubtedly start to see more and more carriers implementing ENUM capabilities rather than some of the proprietary number translation solutions that started with the concept of Intelligent Networks in the 1980s. In the meantime the industry will carry on in a sub-optimal way, hoping against hope that something will happen to sort it all out soon. The real issue is that ENUM registries are the keystone capability needed to make VoIP services globally ubiquitous, but they can hardly be considered a major opportunity to make money on a standalone basis. Rather, they are an embedded capability in VoIP or PSTN service providers or neutral Internet exchanges, so there is little incentive to pour vast amounts of money into the capability, which will lead to continuing snail-like growth.

As is the case with standards, even though most would agree that using E.164 numbering is the way forward, there is another proposal, called SRV or service record, that proposes to use email addresses as the identifier rather than telephone numbers. The logic of this is that it would be driven by IT directors riding on the back of disappearing PBXs who are swapping over to Asterisk open-source systems. That is a story for another time however.

Addendum #1: sip, Sip, SIP – Gulp!


How to Be a Disruptor

September 11, 2007

An excellent article from Sandhill.com on running a software business along disruptive lines. Written by the CEO of MySQL, it looks like it needs a lot of traditional common sense!

These are the key issues he talks about:

Follow No Model
Get Rich Slow
Make Adoption Easy
Run a Distributed Workforce
Foster a Culture of Experimentation
Develop Openly
Leverage the Ecosystem
Make Everyone Listen to Customers
Run Sales as a Science
Fraternize with the Enemy

Take a read: How to Be a Disruptor


sip, Sip, SIP – Gulp!

May 22, 2007

Session Initiation Protocol, or ‘SIP’ as it is known, has become a major signalling protocol in the IP world as it lies at the heart of Voice-over-IP (VoIP). It’s a term you can hardly miss as it is supported by every vendor of phones on the planet (Picture credit: Avaya: An Avaya SIP phone).

Many open standards and software initiatives have taken SIP to the heart of their work and an example of this is the IP Multimedia Subsystem (IMS) which I recently touched upon in IP Multimedia Subsystem or bust!

SIP is a real-time IP application-layer protocol that sits alongside HTTP, FTP, RTP and other well known protocols used to move data through the Internet. However, it is an extremely important one because it enables SIP devices to discover, negotiate, connect and establish communication sessions with other SIP-enabled devices.

SIP was co-authored in 1996 by Jonathan Rosenberg, who is now a Cisco Fellow, Henning Schulzrinne, who is Professor and Chair in the Dept. of Computer Science at Columbia University, and Mark Handley, who is Professor of Networked Systems at UCL. Work on SIP moved into a dedicated IETF SIP Working Group, which still maintains the RFC 3261 standard. SIP was originally used on the US experimental multicast network commonly known as the Mbone. This makes SIP an IT / IP standard rather than one developed by the communications industry.

Prior to SIP, voice signalling protocols such as SS7 (C7 in the UK) were essentially proprietary, aimed at use by the big telecommunications companies on their Public Switched Telephone Network (PSTN) voice networks. With the advent of the Internet and the ‘invention’ of Voice over IP, it soon became clear that a new signalling protocol was required – one that was peer-to-peer, scalable, open, extensible, lightweight and simple in operation, and that could be used on a whole new generation of real-time communications devices and services running over the Internet.

SIP itself is based on earlier IETF / Internet standards, principally the Hypertext Transfer Protocol (HTTP), which is the core protocol behind the World Wide Web.

Key features of SIP

The SIP signalling standard has many key features:

Communications device identification: SIP supports a concept known as the Address of Record (AOR) which represents a user’s unique address in the world of SIP communications. An example of an AOR is sip: xxx@yyy.com. To enable a user to have multiple communications devices or services, SIP has a mechanism called a Uniform Resource Identifier (URI). A URI is like the Uniform Resource Locator (URL) used to identify servers on the World Wide Web. URIs can be used to specify the destination device of a real-time session e.g.

  • IM: sip: xxx@yyy.com (Windows Messenger uses SIP)
  • Phone: sip: 1234 1234 1234@yyy.com; user=phone
  • FAX: sip: 1234 1234 1235@yyy.com; user=fax

A SIP URI can use both traditional PSTN numbering schemes AND alphabetic schemes as used on the Internet.
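
As a small illustration of the structure of these addresses, the sketch below splits a SIP URI into its parts; the regular expression and the field names are my own and not those of any actual SIP stack.

// Illustrative sketch: break a SIP URI into user, host and parameters.
function parseSipUri(uri)
{
 var match = /^sip:([^@;]+)@([^;]+)(?:;(.*))?$/.exec(uri);
 if (!match) return null;
 return {
  user: match[1],          // a telephone number or an alphabetic name
  host: match[2],          // the domain, e.g. yyy.com
  params: match[3] || ""   // e.g. "user=phone" or "user=fax"
 };
}
parseSipUri("sip:1234@yyy.com;user=phone"); // { user: "1234", host: "yyy.com", params: "user=phone" }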

Focussed function: SIP only manages the set-up and tear-down of real-time communication sessions; it does not manage the actual transport of media data. Other protocols undertake this task.

Presence support: SIP is used in a variety of applications but has found a strong home in applications such as VoIP and Instant Messaging (IM). What makes SIP interesting is that it is not only capable of setting up and tearing down real time communications sessions but also supports and tracks a user’s availability through the Presence capability. (The open presence standard Jabber addresses the same ground with its own protocol, XMPP.) I wrote about presence in – The magic of ‘presence’.

Presence is supported through a key SIP extension: SIP for Instant Messaging and Presence Leveraging Extensions (SIMPLE) [a really contrived acronym!]. This allows a user to state their status, as seen in most of the common IM systems. AOL Instant Messenger is shown in the picture on the left.

SIMPLE means that the concept of Presence can be used transparently on other communications devices such as mobile phones, SIP phones, email clients and PBX systems.

User preference: SIP user preference functionality enables a user to control how a call is handled in accordance with their preferences, for example (a small sketch of this kind of logic follows the list below):

  • Time of day: A user can take all calls during office hours but direct them to a voice mail box in the evenings.
  • Buddy lists: Give priority to certain individuals according to a status associated with each contact in an address book.
  • Multi-device management: Determine which device / service is used to respond to a call from particular individuals.
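
A minimal sketch of this kind of preference logic is shown below; the rules, names and device labels are invented purely for illustration and bear no relation to any actual SIP implementation.

// Illustrative sketch: decide where to send an incoming call based on user preferences.
function routeCall(caller, hourOfDay, buddyList)
{
 if (buddyList.indexOf(caller) != -1)
 {
  return "mobile";     // priority contacts always reach the user directly
 }
 if (hourOfDay >= 9 && hourOfDay < 18)
 {
  return "deskPhone";  // office hours: take the call at the desk
 }
 return "voicemail";   // evenings and weekends: straight to voice mail
}
routeCall("sip:xxx@yyy.com", 20, ["sip:boss@yyy.com"]); // returns "voicemail"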

PSTN mapping: SIP can manage the translation or mapping of conventional PSTN numbers to SIP URIs and vice versa. This capability allows SIP sessions to transparently inter-work with the PSTN. There are organisations, such as the ENUM registries, that provide appropriate database capabilities. To quote the ENUM home page:

“ENUM unifies traditional telephony and next-generation IP networks, and provides a critical framework for mapping and processing diverse network addresses. It transforms the telephone number—the most basic and commonly-used communications address—into a universal identifier that can be used across many different devices and applications (voice, fax, mobile, email, text messaging, location-based services and the Internet).”

SIP trunking: SIP trunks enable enterprises to carry inter-site calls over a pure IP network. This could use an IP-VPN over an MPLS-based network with a guaranteed Quality of Service. Using SIP trunks could lead to significant cost savings when compared to using traditional E1 or T1 leased lines.

Inter-island communications: In a recent post, Islands of communication or isolation? I wrote about the challenges of communication between islands of standards or users. The adoption of SIP-based services could enable a degree of integration with other companies to extend the reach of what, to date, have been internal services.

Of course, the partner companies need to have adopted SIP as well and have appropriate security measures in place. This is where the challenge would lie in achieving this level of open communications! (Picture credit: Zultys: a Wi-Fi SIP phone)

SIP servers

SIP servers are the centralised capability that manages the establishment of communications sessions by users. Although there are many types of server, they are essentially only software processes and could be run on a single processor or device. There are several types of SIP server:

Registrar Server: The registrar server authenticates and registers users as soon as they come on-line. It stores identities and the list of devices in use by each user.

Location Server: The location server keeps track of users’ locations as they roam and provides this data to other SIP servers as required.

Redirect Server: When users are roaming, the Redirect Server maps session requests to a server closer to the user or an alternate device.

Proxy Server: SIP proxy servers pass on SIP requests to other servers located either downstream or upstream.

Presence Server: SIP presence servers enable users to publish their status (as ‘presentities’) to other users who would like to see it (‘watchers’).

Call setup Flow

The diagram below shows the initiation of a call from the PSTN network (section A), connection (section B) and disconnect (section C). The flow is quite easy to understand. One of the downsides is that if a complex session is being set up it’s quite easy to end up with 40 to 50+ separate transactions, which could lead to unacceptable set-up times being experienced – especially if the SIP session is being negotiated across the best-effort Internet.

(Picture source: NMS Communications)

Round-up

As a standard, SIP has had a profound impact on our daily lives and sits well alongside those other protocol acronyms that have fallen into the daily vernacular such as IP, HTTP, www and TCP. Protocols that operate at the application level seem to be so much more relevant to our daily lives than those that are buried in the network such as MPLS and ATM.

There is still much to achieve by building capability on top of SIP such as federated services and more importantly interoperability. Bodies working on interoperability are SIPcenter, SIP Forum, SIPfoundry, SIP’it and IETF’s SPEERMINT working group. More fundamental areas under evaluation are authentication and billing.

More in-depth information about SIP can be found at http://www.tech-invite.com, a portal devoted to SIP and surrounding technologies.

Next time you buy a SIP Wi-Fi phone from your local shop, install it, and find that it works first time AND saves you money, just think about all the work that has gone into creating this software wonder. Sometimes standards and open software hit a home run. SIP is just that.

Addendum #1: Do you know your ENUM?


IP Multimedia Subsystem or bust!

May 10, 2007

I have never felt so uncomfortable about writing about a subject as I am now while contemplating IP Multimedia Subsystem (IMS). Why this should be I’m not quite sure.

Maybe it’s because one of the thoughts it triggers is the subject of Intelligent Networks (IN) that I wrote about many years ago – The Magic of Intelligent Networks. I wrote at the time:

“Looking at Intelligent Networks from an Information Technology (IT) perspective can simplify the understanding of IN concepts. Telecommunications standards bodies such as CCITT and ETSI have created a lot of acronyms which can sometimes obfuscate what in reality is straightforward.”

This was an initiative to bring computers and software to the world of voice switches, enabling carriers to develop advanced consumer services on their voice switches and SS7 signalling networks. To quote an old article:

“Because IN systems can interface seamlessly between the worlds of information technology and telecommunications equipment, they open the door to a wide range of new, value added services which can be sold as add-ons to basic voice service. Many operators are already offering a wide range of IN-based services such as non-geographic numbers (for example, freephone services) and switch-based features like call barring, call forwarding, caller ID, and complex call re-routing that redirects calls to user-defined locations.”

Now there was absolutely nothing wrong with that vision and the core technology was relatively straightforward (database look-up number translation). The problem in my eyes was that it was presented as a grand take-over-the-world strategy and a be-all-and-end-all vision when in reality it was a relatively simple idea. I wouldn’t say IN died a death, it just fizzled out. It didn’t really disappear as such, as most of the IN-related concepts became reality over time as computing and telephony started to merge. I would say it morphed into IP telephony.

Moreover, what lay at the heart of IN was the view that intelligence should be based in the network, not in applications or customer equipment. The argument about dumb networks versus Intelligent networks goes right back to the early 1990s and is still raging today – well at least simmering.

Put bluntly, carriers laudably want intelligence to be based in the network so they are able to provide, manage and control applications and derive revenue that will compensate for plummeting Plain Old Telephony Service (POTS) revenues. Most IT and Internet people, on the other hand, do not share this vision as they believe it holds back service innovation, which generally comes from small companies. There is a certain amount of truth in this view as there are clear examples of where this is happening today if we look at the fixed and mobile industries.

Maybe I feel uncomfortable with the concept of IMS as it looks like the grandchild of IN. It certainly seems to suffer from the same strengths and weaknesses that affected its progenitor. Or, maybe it’s because I do not understand it well enough?

What is IP Multimedia Subsystem (IMS)?

IMS is an architectural framework or reference architecture – not a standard – that provides a common method for IP multiple media (I prefer this term to multimedia) services to be delivered over existing terrestrial or wireless networks. In the IT world – and the communications world come to that – a good part of this activity could be encompassed by the term middleware. Middleware is an interface (abstraction) layer that sits between the networks and the applications / services and provides a common Application Programming Interface (API).

The commercial justification of IMS is to enable the development of advanced multimedia applications whose revenue would compensate for dropping telephony revenues and reduce customer churn.

The technical vision of IMS is about delivering seamless services where customers are able to access any type of service, from any device they want to use, with single sign-on, common contacts and fluidity between wireline and wireless services. IMS has ambitions to deliver:

  • Common user interfaces for any service
  • Open application server architecture to enable a ‘rich’ service set
  • Separate user data from services for cross service access
  • Standardised session control
  • Inherent service mobility
  • Network independence
  • Inter-working with legacy IN applications

One of the comments I came across on the Internet from a major telecomms equipment vendor was that IMS was about the “Need to create better end-user experience than free-riding Skype, Ebay, Vonage, etc.”. This, in my opinion, is an ambition too far as innovative services such as those mentioned generally do not come out of the carrier world.

Traditionally, each application or service offered by carriers sits alone in its own silo, calling on all the resources it needs, using proprietary signalling protocols, and running in complete isolation from the other services, each of which sits in its own silo. In many ways this reflects the same situation that provided the motivation to develop a common control plane for data services called GMPLS. Vertical service silos will be replaced with horizontal service, control and transport layers.


Removal of service silos
Source: Business Communications Review, May 2006

As with GMPLS, most large equipment vendors are committed to IMS and supply IMS compliant products. As stated in the above article:

“Many vendors and carriers now tout IMS as the single most significant technology change of the decade… IMS promises to accelerate convergence in many dimensions (technical, business-model, vendor and access network) and make “anything over IP and IP over everything” a reality.”

Maybe a more realistic view is that IMS is just an upgrade to the softswitch VoIP architecture outlined in the 90s – albeit a trifle more complex. This is the view of Bob Bellman, in an article entitled From Softswitching To IMS: Are We There Yet? Many of the core elements of a softswitch architecture are to be found in the IMS architecture, including the separation of the control and data planes.

VoIP SoftSwitch Architecture
Source: Business Communications Review, April 2006

Another associated reference architecture that is aligned with IMS and is being popularly pushed by software and equipment vendors in the enterprise world is Service Oriented Architecture (SOA), an architecture that focuses on services as the core design principle.

IMS has been developed by an industry consortium and originated in the mobile world in an attempt to define an infrastructure that could be used to standardise the delivery of new UMTS or 3G services. The original work was driven by 3GPP and later extended by 3GPP2 and TISPAN. Nowadays, just about every standards body seems to be involved including the Open Mobile Alliance, ANSI, ITU, IETF, Parlay Group and Liberty Alliance – fourteen in total.

Like all new initiatives, IMS has developed its own mega-set of T/F/FLAs (Three, four and five letter acronyms) which makes getting to grips with the architectural elements hard going without a glossary. I won’t go into this much here as there are much better Internet resources available. The reference architecture focuses on a three-layer model:

#1 Applications layer:

The application layer contains Application Servers (AS) which host each individual service. Each AS communicates with the control plane using the Session Initiation Protocol (SIP). Like GSM, an AS can interrogate a database of users to check authorisation. The database is called the Home Subscriber Server (HSS), or an HSS in a 3rd party network if the user is roaming (in GSM this is called the Home Location Register (HLR)).

(Source: Lucent Technologies)

The application layer also contains Media Servers for storing and playing announcements and other generic applications not delivered by individual ASs, such as media conversion.

Breakout Gateways provide routing information based on telephone number look-ups for services accessing a PSTN. This is similar functionality to that found in the IN systems discussed earlier.

PSTN gateways are used to interface to PSTN networks and include signalling and media gateways.

#2 Control layer:

The control plane hosts the HSS, which is the master database of user identities and of the individual calls or service sessions currently being used by each user. There are several roles that a SIP call / session controller (Call Session Control Function, CSCF) can undertake:

  • P-CSCF (Proxy-CSCF) This provides similar functionality to a proxy server in an intranet
  • S-CSCF (Serving-CSCF) This is the core SIP server, always located in the home network
  • I-CSCF (Interrogating-CSCF) This is a SIP server located at a network’s edge and its address can be found in DNS servers by 3rd party SIP servers.

#3 Transport layer:

IMS encompasses any service that uses IP / MPLS as transport and pretty much all of the fixed and mobile access technologies including ADSL, cable modem DOCSIS, Ethernet, Wi-Fi, WiMAX and CDMA wireless. It has little choice in this matter as, if IMS is to be used, it needs to incorporate all of the currently deployed access technologies. Interestingly, as we saw in the DOCSIS post – The tale of DOCSIS and cable operators – IMS is also focusing on the use of IPv6, with IPv4 ‘only’ being supported in the near term.

Roundup

IMS represents a tremendous amount of work spread over six years and uses as many existing standards as possible, such as SIP and Parlay. IMS is work in progress and much still needs to be done – security and seamless inter-working of services are but two examples.

All the major telecommunications software, middleware and integration companies are involved and just thinking about the scale of the task needed to put in place common control for a whole raft of services makes me wonder just how practical the implementation of IMS actually is. Don’t get me wrong, I am a real supporter of these initiatives because it is hard to come up with an alternative vision that makes sense, but boy am I glad that I’m not in charge of a carrier IMS project!

The upsides of using IMS in the long term are pretty clear and focus around lowering costs, quicker time to market, integration of services and, hopefully, single log-in.

It’s some of the downsides that particularly concern me:

  • Non-migration of existing services: As we saw in the early days of 3G, there are many services that would need to come under the umbrella of an IMS infrastructure such as instant conferencing, messaging, gaming, personal information management, presence, location-based services, IP Centrex, voice self-service, IPTV, VoIP and many more. But, in reality, how do you commercially justify migrating existing services in the short term onto a brand new infrastructure – especially when that infrastructure is based on an incomplete reference architecture?

    IMS is a long term project that will be redefined many times as technology changes over the years. It is clearly an architecture that represents a vision for the future that can be used to guide and converge new developments, but it will be many years before carriers are running seamless IMS-based services – if they ever will.

  • Single vendor lock-in: As with all complicated software systems, most IMS implementations will be dominated by a single equipment supplier or integrator. “Because vendors won’t cut up the IMS architecture the same way, multi-vendor solutions won’t happen. Moreover, that single supplier is likely to be an incumbent vendor.” This was a quote from Keith Nissen of In-Stat in a BCR article.
  • No launch delays: No product manager would delay the launch of a new service on the promise of jam tomorrow. While the IMS architecture is incomplete, services will continue to be rolled out without IMS, further inflaming the non-migration of existing services issue raised above.
  • Too ambitious: Is the vision of IMS just too ambitious? Integration of nearly every aspect of service delivery will be a challenge and a half for any carrier to undertake. It could be argued that while IT staff are internally focused on getting IMS integration sorted, they should be working on externally focused services. Without these services, customers will churn no matter how elegant a carrier’s internal architecture may be. Is IMS Intelligent Networks reborn, destined to suffer the same fate?
  • OSS integration: Any IMS system will need to integrate with carriers’ often proprietary OSS systems. This compounds the challenge of implementing even a limited IMS trial.
  • Source of innovation: It is often said that carriers are not the breeding ground of new, innovative services. That ground lies with the small companies on the Internet creating Web 2.0 services that utilise technologies such as presence, VoIP and AJAX today. Will any of these companies care whether a carrier has an IMS infrastructure in place?
  • Closed shops – another walled garden?: How easy will it be for external companies to come up with a good idea for a new service and be able to integrate with a particular carrier’s semi-proprietary IMS infrastructure?
  • Money sink: Large integration projects like IMS often develop a life of their own once started and can often absorb vast amounts of money that could be better spent elsewhere.

I said at the beginning of the post that I felt uncomfortable about writing about IMS and now that I’m finished I am even more uncomfortable. I like the vision – how could I not? It’s just that I have to question how useful it will be at the end of the day and whether it diverts effort, money and limited resources away from where they should be applied – on creating interesting services and gaining market share. Only time will tell.

Addendum:  In a previous post, I wrote about the IETF’s Path Computation Element Working Group and it was interesting to come across a discussion about IMS’s Resource and Admission Control Function (RACF) which seems to define a ‘similar’ function. The RACF includes a Policy Decision capability and a Transport Resource Control capability. A discussion can be found here starting at slide 10. Does RACF compete with PCE or could PCE be a part of RACF?


Aria Networks shows the optimal path

April 26, 2007

In a couple of previous posts, Path Computation Element (PCE): IETF’s hidden jewel and MPLS-TE and network traffic engineering I mentioned a company called Aria Networks who are working in the technology space discussed in those posts. I would like to take this opportunity to write a little about them.

Aria Networks are a small UK company that have been going for around eighteen months. The company is commercially led by Tony Fallows; the core technology has been developed by Dr Jay Perrett, Chief Science Officer and Head of R&D, and Daniel King, Chief Operating Officer; and their CTO is Adrian Farrel. Adrian currently co-chairs the IETF Common Control and Measurement Plane (CCAMP) working group that is responsible for GMPLS and also co-chairs the IETF Path Computation Element (PCE) working group.

The team at Aria have brought some very innovative software technology to the products they supply to network operators and network equipment vendors. Their raison d’etre, as articulated by Daniel King, is “to fundamentally change the way complex, converged networks are designed, planned and operated”. This is an ambitious goal, so let’s take a look at how Aria plan to achieve this.

Aria currently supplies software that addresses the complex task of computing constraint-based packet paths across an IP or an MPLS network and optimising that network holistically and in parallel. Holistic is a key word in respect of understanding Aria products. It means that when an additional path needs to be computed in a network, the whole network and all the services that are running over it are recalculated and optimised in a single calculation. A simple example of why this is so important is shown here.

This ability to compute holistically rather than on a piecemeal basis requires some very slick software as it is a computationally intensive (‘hard’) calculation that could easily take many hours using other systems. Parallel is the other key word. When an additional link is added to a network there could be a knock-on effect on any other link in the network, therefore re-computing all the paths in parallel – both existing and new – is the only way to ensure a reliable and optimal result is achieved.

Traffic engineering of IP, MPLS or Ethernet networks could quite easily be dismissed by the non-technical management of a network operator as an arcane activity but, as anyone with experience of operating networks can vouch, good traffic engineering brings pronounced benefits that directly reduce costs while improving customers’ experience of using services. Of course, a lack of appropriate traffic engineering activity has the opposite effect. Only one thing could be put above traffic engineering in achieving a good brand image and that is good customer service. The irony is that if money is not spent on good traffic engineering, ten times the amount would need to be spent on call centre facilities papering over the cracks!

One quite common view held by a number of engineers is that traffic engineering is not required because, they say, “we throw bandwidth at our network”. If a network has an abundance of bandwidth then in theory there will never be delays caused by an inadvertent overload of a particular link. This may be true, but it is an expensive and short-sighted solution, and one that could turn out to be risky as new customers come on board. Combine it with the slow provisioning times often associated with adding optical links and you have the makings of major network problems. The challenge of planning and optimising is significantly increased in a Next Generation Network (NGN), where traffic is actively segmented into different classes such as real-time VoIP and best-effort Internet access. Traffic engineering tools will become even more indispensable than they have been in the past.

It’s interesting to note that even if a protocol like MPLS-TE, PBT, PBB-TE or T-MPLS has all the traffic engineering bells and whistles any carrier may ever desire, it does not mean they can actually be used. TE extensions such as Fast ReRoute (FRR) need sophisticated tools or they quickly become unmanageable in a real network.

Aria’s product family is called intelligent Virtual Network Topologies (iVNT). Current products are aimed at network operators that operate IP and / or MPLS-TE based networks.

iVNT MPLS-TE enables network operators to design, model and optimise MPLS Traffic Engineered (MPLS-TE) networks that use constraint-based point-to-point Label Switched Paths (LSPs), constraint-based point-to-multipoint LSPs and Fast-Reroute (FRR) bypass tunnels. One of its real strengths is that it goes to town on supporting any type of constraint that could be placed on a link – delay, hop count, cost, required bandwidth, link-layer protection, path disjointness, bi-directionality, etc. Indeed, it is quite straightforward to add any additional constraints that an individual carrier may need.
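For readers who have not met constraint-based path selection before, here is a minimal sketch of the general idea (often called CSPF): prune any link that fails the constraints, then run an ordinary shortest-path calculation over what is left. The topology, the constraint names and the cost figures are all invented for the example – this is emphatically not Aria’s algorithm, which has to cope with far richer constraints than these.

```python
# A minimal constrained-shortest-path (CSPF-style) sketch in plain Python.
# Link entry: (neighbour, igp_cost, available_bw_mbps, delay_ms)

import heapq

TOPOLOGY = {
    "A": [("B", 10, 400, 5), ("C", 5, 100, 2)],
    "B": [("A", 10, 400, 5), ("D", 10, 300, 3)],
    "C": [("A", 5, 100, 2), ("D", 5, 1000, 12)],
    "D": [("B", 10, 300, 3), ("C", 5, 1000, 12)],
}

def cspf(topology, src, dst, min_bw, max_link_delay):
    """Prune links that fail the constraints, then run Dijkstra on the rest."""
    pruned = {
        node: [(nbr, cost) for nbr, cost, bw, delay in links
               if bw >= min_bw and delay <= max_link_delay]
        for node, links in topology.items()
    }
    dist, prev, queue = {src: 0}, {}, [(0, src)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue                        # stale queue entry
        for nbr, cost in pruned[node]:
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(queue, (nd, nbr))
    if dst not in dist:
        return None                         # no path satisfies the constraints
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# An LSP needing 200 Mbit/s cannot use the cheapest route A-C-D because the
# A-C link only has 100 Mbit/s free, so it is steered A -> B -> D instead.
print(cspf(TOPOLOGY, "A", "D", min_bw=200, max_link_delay=10))  # ['A', 'B', 'D']
```

The crunch comes when many LSPs with interacting constraints have to be placed at once – which is where the holistic, parallel approach described above takes over from a simple prune-and-Dijkstra pass.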

iVNT IP enables network operators to design, model and optimise IP and Label Distribution Protocol (LDP) networks based on the metrics used in the Open Shortest Path First (OSPF) and Intermediate System-to-Intermediate System (IS-IS) Interior Gateway Protocols (IGPs), to ensure traffic flows are correctly balanced across the network. Although not using more advanced traffic engineering capabilities is clearly not the way to go in the future, many carriers still stick with ‘simple’ IP solutions – however, these are far from simple in practice and can be an operational nightmare to manage.

What makes Aria software so interesting?

To cut to the chase, it’s the use of Artificial Intelligence (AI) applied to path computation and network optimisation. Conventional algorithms used in network optimisation software are linear in nature and usually deterministic, in that they produce the same answer for the same set of variables every time they are run. They are usually ‘tuned’ to a single service type and are often very slow to produce results when faced with a very large network that uses many paths and carries many services. Aria’s software may produce different, but equally correct, results each time it is run and is able to handle multiple services that are inherently and significantly different from a topology perspective, e.g. point-to-point (P2P) and point-to-multipoint (P2MP) services, mesh-like IP-VPNs, etc.

Aria uses evolutionary and genetic techniques, which are good at learning new problems, and runs multiple algorithms in parallel. The software then selects whichever algorithm is better at solving the particular problem it is challenged with. The model evolves multiple times and quickly converges on the optimal solution. Importantly, the technology is very amenable to parallel computing to speed up the processing of complex problems such as holistic network optimisation.

It generally does not make sense to use the same algorithm to solve the path optimisation needs of every different service – iVNT runs many in parallel and self-selection shows which is best suited to the problem in hand.
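The ‘run several candidates and keep the best answer’ idea can be sketched in a few lines of Python. To be clear, this is not DANI and says nothing about how Aria’s software actually works: the three heuristics, the scoring and the toy demand figures are all my own invention, purely to show the shape of the approach.

```python
# Run several placement heuristics concurrently and keep the best result.
# (Threads are fine for this toy; a real engine would spread the work across
# many cores or machines.)

import random
from concurrent.futures import ThreadPoolExecutor

PATH_CAPACITY = {"p1": 10, "p2": 10, "p3": 10}
DEMANDS = [6, 2, 7, 3, 5, 4]             # unsplittable demands to be placed

def first_fit(demands):
    """Place each demand on the least-loaded path with room;
    return (demands placed, peak path utilisation)."""
    used = {p: 0 for p in PATH_CAPACITY}
    placed = 0
    for d in demands:
        for p, cap in sorted(PATH_CAPACITY.items(), key=lambda kv: used[kv[0]]):
            if used[p] + d <= cap:
                used[p] += d
                placed += 1
                break
    return placed, max(used[p] / PATH_CAPACITY[p] for p in PATH_CAPACITY)

def heuristic_as_given():
    return first_fit(DEMANDS)

def heuristic_largest_first():
    return first_fit(sorted(DEMANDS, reverse=True))

def heuristic_random_restart(tries=200, seed=1):
    rng = random.Random(seed)
    best = (0, 1.0)
    for _ in range(tries):
        order = DEMANDS[:]
        rng.shuffle(order)
        best = max(best, first_fit(order), key=lambda t: (t[0], -t[1]))
    return best

with ThreadPoolExecutor() as pool:
    futures = {
        pool.submit(h): h.__name__
        for h in (heuristic_as_given, heuristic_largest_first,
                  heuristic_random_restart)
    }
    results = {name: f.result() for f, name in futures.items()}

# Self-selection: most demands placed wins, ties broken by lower peak utilisation.
winner = max(results, key=lambda n: (results[n][0], -results[n][1]))
print(results)
print("selected heuristic:", winner)
```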

Aria’s core technology is called DANI (Distributed Artificial Neural Intelligence) and is a “flexible, stable, proven, scalable and distributed computation platform”. DANI was developed by two of Aria’s founders, Jay Perrett and Daniel King, and has had a long proving ground in the pharmaceutical industry for pre-clinical drug discovery, which requires the analysis of millions of individual pieces of data to isolate interesting combinations. The company that addresses the pharmaceutical industry is Applied Insilico.

Because of the use of AI, iVNT is able to compute a solution for a complex network containing thousands of different constraint-based links, hundreds of nodes and multiple services such as P2P LSPs, Fast Reroute (FRR) links, P2MP links (IPTV broadcast) and meshed IP-VPN services in just a few minutes on one of today’s notebooks.

What’s the future direction for Aria’s products?

Step to multi-layer path computation: As discussed in the posts mentioned above, Aria is very firmly supportive of the need to provide automatic multi-layer path computation. This means that the addition of a new customer’s IP service will be passed as a bandwidth demand to the MPLS network and downwards to the GMPLS-controlled ASTN optical network, as discussed in GMPLS and common control.

Path Computation Element (PCE): Aria are at the heart of the development of on-line path computation so if this is a subject of interest to you then give Aria a call.

Two product variants address this opportunity:

iVNT Inside is aimed at Network Management System (NMS) vendors, Operational Support System (OSS) vendors and Path Computation Element (PCE) vendors that have a need to provide advanced path computation capabilities embedded in their products.

iVNT Element is for network equipment vendors that have a need to embed advanced path computation capabilities in their IP/MPLS routers or optical switches.

Roundup

Aria Networks could be considered a rare company in the world of start-ups. It has a well-tried technology whose inherent characteristics are admirably matched to the markets and the technical problems it is addressing. Its management team are actively involved in developing the standards that their products are, or will be, able to support. There is no better basis for getting their products right.

It is early days for carriers turning in their droves to NGNs, and it is even earlier days for them to adopt on-line PCE in their networks, but Aria’s timing is on the nose as most carriers are actively thinking about these issues and actively looking for tools today.

Aria could be well positioned to benefit from the explosion of NGN convergence as it seems – to me at least – that fully converged networks will be very challenging to design, optimise and operate without the new approach and tools offered by companies such as Aria.

Note: I need to declare an interest as I worked with them for a short time in 2006.


Path Computation Element (PCE): IETF’s hidden jewel

April 10, 2007

In a previous post, MPLS-TE and network traffic engineering, I talked about the challenges of communication network traffic engineering and capacity planning and their relation to MPLS-TE (or MPLSTE). Interestingly, I realised that I did not mention that all of the engineering planning, design and optimisation activities that form the core of network management usually take place off-line. What I mean by this is that a team of engineers sit down, either on an ad hoc basis driven by new network or customer acquisitions or as part of an annual planning cycle, to produce an upgrade or migration plan that can be used to extend their existing network to meet the needs of the additional traffic. This work does not impact live networks until the OPEX and CAPEX plans have been agreed and signed off by management teams and then implemented. A significant proportion of the data that drives this activity is obtained from product marketing and/or sales teams, who are supposed to know how much additional business, hence additional traffic, will be imposed on the network in the time period covered by the planning activities.

This long-term method of planning network growth has been used since the dawn of time and the process should put in place the checks and balances (that were thrown to the wind in the late 1990s) to ensure that neither too much nor too little investment is made in network expansion.

What is a Path Computation Element (PCE)?

What is a path through the network? I’ve covered this extensively in my previous posts about MPLS’s ability to guide traffic through a complex network and force particular packet streams to follow a constraint-based and pre-determined path from network ingress to network egress. This deterministic path or tunnel enables the improved QoS management of real-time services such as Voice over IP or IPTV.

Generally, paths are calculated and managed off-line as part of the overall traffic engineering activity. When a new customer is signed up, their traffic requirements are determined and the most appropriate paths for that traffic are superimposed on the current network topology to best meet the customer’s needs and balance traffic distribution across the network. If new physical assets are required, these are provisioned and deployed as necessary.

Traditional planning cycles are focused on medium- to long-term needs and cannot really be applied to shorter planning horizons. Such short-term needs could derive from a number of requirements, for example:

  • Changing network configurations dependent on the time of day; for example, there is usually a considerable difference in traffic profiles between office hours, evening hours and night time. The possibility of dynamically moving traffic dependent on busy hours (time being the new constraint) could provide significant cost benefits.
  • Dynamic or temporary path creation based on customers’ transitory needs.
  • Improved busy hour management through auto-rerouting of traffic.
  • Dynamic balancing of network load to reduce congestion.
  • Improved restoration when faults occur.

To be able to undertake these tasks a carrier would need to move away from off-line path computation to on-line path computation, and this is where the IETF’s Path Computation Element (PCE) Working Group comes to the rescue.

In essence, on-line PCE software acts very much along the same lines as a graphics chip handling off-loaded calculations for the main CPU in a personal computer. For example, a service requires that a new path be generated through the network, and that request, together with the constraint requirements for the path such as bandwidth, delay etc., is passed to the attached PCE computer. The PCE has a complete picture of the flows and paths in the network at that precise moment, derived from other Operational Support Software (OSS) programmes, so it can calculate in real time the optimal path through the network that will satisfy the request. This path is then used to automatically update router configurations and the traffic engineering database.
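The shape of that interaction can be modelled in a few lines of Python. This is only my sketch of the request/reply idea described above – it is not the PCEP protocol, and the class names, field names, topology and bandwidth figures are invented for illustration.

```python
# A bare-bones model of the PCE off-load idea: a constrained path request is
# handed to a PCE holding the traffic engineering database (TED); the reply is
# an explicit hop-by-hop route and the TED is updated with the reservation.

from collections import deque
from dataclasses import dataclass

@dataclass
class PathRequest:
    src: str
    dst: str
    bandwidth: int            # Mbit/s to reserve along the whole path

class ToyPCE:
    def __init__(self, ted):
        # TED entry: (node_a, node_b) -> available bandwidth in Mbit/s
        self.ted = dict(ted)

    def _neighbours(self, node, bw):
        for (a, b), avail in self.ted.items():
            if avail >= bw:
                if a == node:
                    yield b
                elif b == node:
                    yield a

    def compute(self, req):
        """Shortest-hop path over links with enough spare bandwidth (BFS)."""
        prev = {req.src: None}
        queue = deque([req.src])
        while queue:
            node = queue.popleft()
            if node == req.dst:
                break
            for nbr in self._neighbours(node, req.bandwidth):
                if nbr not in prev:
                    prev[nbr] = node
                    queue.append(nbr)
        if req.dst not in prev:
            return None                       # "no path" reply
        hops, node = [], req.dst
        while node is not None:
            hops.append(node)
            node = prev[node]
        hops.reverse()
        # Reserve the bandwidth so the TED reflects the new LSP immediately.
        for a, b in zip(hops, hops[1:]):
            key = (a, b) if (a, b) in self.ted else (b, a)
            self.ted[key] -= req.bandwidth
        return hops

pce = ToyPCE({("PE1", "P1"): 600, ("P1", "PE2"): 400, ("PE1", "PE2"): 100})
print(pce.compute(PathRequest("PE1", "PE2", bandwidth=300)))  # ['PE1', 'P1', 'PE2']
print(pce.ted)    # remaining bandwidth after the reservation
```

The point to notice is the last step: the reservation is written straight back into the TED, so the next request is computed against an up-to-date picture of the network.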

In practice, the PCE architecture calls for each Autonomous System (AS) domain to have its own PCE, and if a multi-domain path is required the affected PCEs co-operate to calculate the required path, with the requirements co-ordinated by a ‘master’ PCE. The standard supports any combination, number or location of PCEs.
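Again purely as an illustration of the cooperation idea – not the working group’s actual inter-PCE procedures – the stitching of per-domain segments looks something like this; the domain names, border node and segment tables are all made up.

```python
# Each domain's PCE only knows its own topology, so a 'master' asks each one
# for a segment and joins the segments at the agreed border node.

DOMAIN_PCES = {
    # domain -> precomputed best segments (a stand-in for a real computation)
    "AS64500": {("CE-A", "ASBR-1"): ["CE-A", "P-11", "ASBR-1"]},
    "AS64501": {("ASBR-1", "CE-B"): ["ASBR-1", "P-22", "CE-B"]},
}

def multi_domain_path(src, dst, border):
    head = DOMAIN_PCES["AS64500"][(src, border)]
    tail = DOMAIN_PCES["AS64501"][(border, dst)]
    return head + tail[1:]            # don't repeat the border node

print(multi_domain_path("CE-A", "CE-B", border="ASBR-1"))
# ['CE-A', 'P-11', 'ASBR-1', 'P-22', 'CE-B']
```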

Why a separate PCE?

There are a number of reasons why a separate PCE is being proposed:

  • Path computation of any form is not an easy or simple task. Even with appropriate software, computing all the primary, back-up and service paths on a complex network will strain computing techniques to the extreme. A number of companies that provide software capable of undertaking this task were mentioned in the post referenced above.
  • The PCE will need to undertake computationally intensive calculations, so it is unlikely (to me) that a PCE capability would ever be embedded in a router or switch, as they generally do not have the power to undertake path calculations in a complex network.
  • If path calculations are to be undertaken in a real-time environment then, unlike off-line software which can take hours for an answer to pop out, a PCE would need to provide an acceptable solution in just a few minutes or seconds.
  • Most MPLS routers calculate a path on the basis of a single constraint e.g. the shortest path. Calculating paths based on multiple constraints such as bandwidth, latency, cost or QoS significantly increases the computing power required to reach a solution.
  • Routers route and have limited or partial visibility of the complete network, domain and service mix and thus are not able to undertake the holistic calculations required in a modern converged network.
  • In a large network the traffic engineering database (TED) can become very large, creating a significant computational overhead for a core router. Moving TED calculations to a dedicated PCE server could be beneficial in lowering path-request response times.
  • In a traditional IP network there may be many legacy devices that do not have an appropriate control plane thus creating visibility ‘holes’.
  • A PCE could be used to provide alternative restorative routing of traffic in an emergency. As a PCE would have a holistic view of the network, restoration using a PCE could reduce potential knock-on effects of a reroute.

The key aspect of multi-layer support

One of the most interesting architectural aspects of the PCE is that it addresses a very significant issue faced by all carriers today – multi-layer support. All carriers utilise multiple layers to transport traffic – these could include IP-VPN, IP, Ethernet, TDM, MPLS, SDH and optical networks in several possible combinations. The issue is that a path computation at the highest layer inevitably has a knock-on effect down the hierarchy to the physical optical layer. Today, each of these layers and protocols is generally managed, planned and optimised as a separate entity, so it would make sense that when a new path is calculated, its requirements are passed down the hierarchy so that knock-on effects can be better managed. The addition of a small new IP link could force the need to add an additional fibre.

Clearly, providing flow-through and visibility of new services to all layers and managing path computation on a multi-layer basis would be a real boon for network optimisation and cost reduction. However, let’s bear in mind that this represents a nirvana solution for planning engineers!
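To illustrate the knock-on effect in the crudest possible terms, here is a toy sketch of an IP demand being mapped onto an MPLS layer that, when full, has to ask the optical layer for another wavelength. The layer model, the 10 Gbit/s lambda size and the class names are assumptions made purely for the example.

```python
# A crude sketch of the multi-layer knock-on effect: an IP-layer demand that
# the MPLS layer cannot absorb triggers a request for a new optical lambda.

LAMBDA_SIZE_GBPS = 10

class OpticalLayer:
    def __init__(self, lit_lambdas=1):
        self.lit_lambdas = lit_lambdas

    def add_lambda(self):
        self.lit_lambdas += 1     # in reality: provision and light a new wavelength
        return LAMBDA_SIZE_GBPS

class MplsLayer:
    def __init__(self, optical):
        self.optical = optical
        self.capacity_gbps = optical.lit_lambdas * LAMBDA_SIZE_GBPS
        self.reserved_gbps = 0.0

    def add_ip_demand(self, gbps):
        if self.reserved_gbps + gbps > self.capacity_gbps:
            # Knock-on effect: the IP-layer request forces an optical upgrade.
            self.capacity_gbps += self.optical.add_lambda()
        self.reserved_gbps += gbps
        return self.capacity_gbps, self.reserved_gbps

mpls = MplsLayer(OpticalLayer(lit_lambdas=1))
print(mpls.add_ip_demand(6))    # fits on the existing 10G lambda
print(mpls.add_ip_demand(6))    # forces a second lambda to be lit
```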

A Multi-layer path

The PCE specification is being defined to provide this cross-layer, or multi-layer, capability. Note that a PCE is not a solution aimed at use across the whole Internet – clearly that would be a step too far, along the lines of the whole Internet upgrading to IPv6!

I will not plunge into the depths of the PCE architecture here, but a complete overview can be found in A Path Computation Element (PCE)-Based Architecture (RFC 4655). At the highest level, the PCE talks to a signalling engine that takes in requests for a new path calculation and passes any consequential requests on to other PCEs that might be needed for an inter-domain path. The PCE also interacts with the traffic engineering database to update it automatically if and as required (picture source: this paper).

Another interesting requirements document is Path Computation Element Communication Protocol (PCECP) Requirements.

Round up

It is very early days for the PCE project, but it would seem to provide one of the key elements required to enable carriers to effectively manage a fully converged Next Generation Network. However, I would imagine that the operational management in many carriers would be aghast at putting control of even transient path computation on-line, considering the risk and the consequences for customer experience if it went wrong.

Clearly the PCE architecture has to be based on powerful computing engines and software that can holistically monitor the network and calculate new paths in seconds, and most importantly it has to be a truly resilient network element. Phew!

Note: One of the few commercial companies working on PCE software is Aria Networks, who are based in the UK and whose CTO, Adrian Farrel, also co-chairs the PCE Working Group. I do declare an interest as I undertook some work for Aria Networks in 2006.

Addendum #1: GMPLS and common control

Addendum #2: Aria Networks shows the optimal path

Addendum #3: It was interesting to come across a discussion about IMS’s Resource and Admission Control Function (RACF) which seems to define a ‘similar’ function. The RACF includes a Policy Decision capability and a Transport Resource Control capability. A discussion can be found here starting at slide 10. Does RACF compete with PCE or could PCE be a part of RACF?

Addendum #4: New web site focusing on PCE: http://pathcomputationelement.com


Islands of communication or isolation?

March 23, 2007

One of the fundamental tenets of the communication industry is that you need 100% compatibility between devices and services if you want to communicate. This was clearly understood when the Public Switched Telephone Network (PSTN) was dominated by local monopolies in the form of incumbent telcos. Together with the ITU, they put considerable effort into standardising all the commercial and technical aspects of running a national voice telco.

For example, the commercial settlement standards enabled telcos to share the revenue from each and every call that made use of their fixed or wireless infrastructure no matter whether the call originated, terminated or transited their geography. Technical standards included everything from compression through to transmission standards such as Synchronous Digital Hierarchy (SDH) and the basis of European mobile telephony, GSM. The IETF’s standardisation of the Internet has brought a vast portion of the world’s population on line and transformed our personal and business lives.

However, standardisation in this new century is often driven as much by commercial businesses and industry consortiums as by the traditional standards bodies, which often leads to competing solutions and standards slugging it out in the market place (e.g. PBB-TE and T-MPLS). I guess this is as it should be if you believe in free trade and enterprise. But, as mere individuals in this world of giants, these issues can cause us users real pain.

In particular, the current plethora of what I term islands of isolation means that we are often unable to communicate in the ways we wish to. In the ideal world, as exemplified by the PSTN, you are able to talk to every person in the world who owns a phone, as long as you know their number. In contrast, many, if not most, of the new media communications services we choose to use to interact with friends and colleagues are in effect closed communities that are unable to interconnect.

What are the causes of these so-called islands of isolation? Here are a few examples.

Communities: There are many Internet communities including free PC-to-PC VoIP services, instant messaging services, social or business networking services or even virtual worlds. Most of these focus on building up their own 100% isolated communities. Of course, if one achieves global domination, then that becomes the de facto standard by default. But, of course, that is the objective of every Internet social network start-up!

Enterprise software: Most purveyors of proprietary enterprise software thrive on developing products that are incompatible. The Lotus Notes and Outlook email systems were but one example. This is often still the case today, when vendors bolt advanced features onto the basic product that are not available to anyone not using that software – presence springs to mind. This creates vendor communities of users.

Private networks: Most enterprises are rightly concerned about security and build strong protective firewalls around their employees to protect themselves from malicious activities. This means that employees of the company have full access to their own services, but these are not available to anyone outside the firewall for use on an inter-company basis. Combine this with the deployment of the vendor-specific enterprise software described above and you create lots of isolated enterprise communities!

Fixed network operators: It’s a very competitive world out there and telcos just love offering value-added features and services that are only offered to their customer base. Free proprietary PC-PC calls come to mind and more recently, video telephones.

Mobile operators: A classic example with wireless operators was the unwillingness to provide open Internet access and only provide what was euphemistically called ‘walled garden’ services – which are effectively closed communities.

Service incompatibilities: A perfect example of this was MMS, the supposed upgrade to SMS. Although there was a multitude of issues behind the failure of MMS, the inability to send an MMS to a friend who used another mobile network was one of the principal ones. Although this was belatedly corrected, it came too late to help.

Closed garden mentality: This idea is alive and well amongst mobile operators striving to survive. They believe that only offering approved services to their users is in their best interests. Well, no it isn’t!

Equipment vendors: Whenever a standards body defines a basic standard, equipment vendors nearly always enhance the standard feature set with ‘rich’ extensions. Of course, anyone using an extension cannot interwork with someone who is not! The word ‘rich’ covers a multiplicity of sins.

Competitive standards: Users groups who adopt different standards become isolated from each other – the consumer and music worlds are riven by such issues.

Privacy: This is seen as such an important issue these days that many companies will not provide phone numbers or even email addresses to a caller. If you don’t know who you want, they won’t tell you! A perfect definition of a closed community!

Proprietary development: In the absence of standards, companies will develop pre-standard technologies and slug it out in the market. Other companies couldn’t care less about standards and follow a proprietary path just because they can and have the monopolistic muscle to do so. I bet you can name one or two of those!

One takeaway from all this is that in the real world you can’t avoid islands of isolation: all of us have to use multiple services and technologies to interact with colleagues, and these islands will probably remain with us for the indefinite future in the competitive world we live in.

Your friends, family and work colleagues, by their own choice, geography and lifestyle, probably use a completely different set of services to yourself. You may use MSN, while colleagues use AOL or Yahoo Messenger. You may choose Skype but another colleague may use BT Softphone.

There are partial attempts at solving these issues within subsets of islands, but overall this remains a major conundrum that limits our ability to communicate at any time, any place and anywhere. The cynic in me says that if you hear about any product or initiative that relies on these islands of isolation disappearing in order to succeed, run a mile – no, ten miles! On the other hand, it could also be seen as a land of opportunity.


webex + Cisco thoughts

March 19, 2007

I first read about the Cisco acquisition of WebEx on Friday when a colleague sent me a post from SiliconValley.com – It’s more than we wanted to spend, but look how well it fits. It’s synchronicity in operation again, of course, because I mentioned webex in a post about a new application-sharing company: Would u like to collaborate with YuuGuu? There are many other postings about this deal with a variety of views – some more relevant than others – TechCrunch, for example: Cisco Buys WebEx for $3.2 Billion.

Although pretty familiar with the acquisition history of Cisco, I must admit that I was surprised at this opening of the chequebook for several reasons.

Reason #1: I used webex quite a lot last year and found it a real challenge to use. My biggest area of concern was usability.

(a) When using webex there are several windows open on your desktop, making its use quite confusing. At least once I closed the wrong window and accidentally ended the conference. As I was just concluding a pitch I was more than unhappy, as it closed both the video and the audio components of the conference! I had broken my golden rule of keeping the audio bridge separate from the application-sharing service.

(b) When using webex’s conventional audio bridge, you have to open the conference from a webex web page beforehand. If you fail to do so, the bridge cannot be opened and everyone receives an error message when they dial in. Correcting this takes about five minutes. Even worse, you cannot use the audio bridge on a standalone basis without having access to a PC – not good when travelling.

(c) The UI is over-complicated and challenging for users under the pressure of giving a presentation. Even the invite email that webex sends out is confusing – the one below is typical. Although this example is the one sent to the organiser, the ones sent to participants are little better.

Hello Chris Gare,
You have successfully scheduled the following meeting:
TOPIC: zzzz call
DATE: Wednesday, May 17, 2006
TIME: 10:15 am, Greenwich Standard Time (GMT -00:00, Casablanca ) .
MEETING NUMBER: 705 xxx xxx
PASSWORD: xxxx
HOST KEY: yyyy
TELECONFERENCE: Call-in toll-free number (US/Canada): 866-xxx-xxxx
Call-in number (US/Canada): 650-429-3300
Global call-in numbers: https://webex.com/xxx/globalcallin.php?serviceType=MC&ED=xxxx
1. Please click the following link to view, edit, or start your meeting.

https://xxx.webex.com/xxx/j.php?ED=87894897

Here’s what to do:
1. At the meeting’s starting time, either click the following link or copy and paste it into your Web browser:

https://xxx.webex.com/xxx/j.php?ED=xxxxx

2. Enter your name, your email address, and the meeting password (if required), and then click Join.
3. If the meeting includes a teleconference, follow the instructions that automatically appear on your screen.
That’s it! You’re in the web meeting!
WebEx will automatically setup Meeting Manager for Windows the first time you join a meeting. To save time, you can setup prior to the meeting by clicking this link:

https://xxx.webex.com/xxx/meetingcenter/mcsetup.php

For Help or Support:
Go to https://xxx.webex.com/xxx/mc, click Assistance, then Click Help or click Support.
………………..end copy here………………..
For Help or Support:
Go to https://xxx.webex.com/xxx/mc, click Assistance, then Click Help or click Support.
To add this meeting to your calendar program (for example Microsoft Outlook), click this link:

https://xxx.webex.com/xxx/j.php?ED=87894897&UID=480831657&ICS=MS

To check for compatibility of rich media players for Universal Communications Format (UCF), click the following link:

https://xxx.webex.com/xxx/systemdiagnosis.php

http://www.webex.com

We’ve got to start meeting like this(TM)

Giving presentations on-line is a stressful process at the best of times, and an application-sharing tool needs to be so simple that you can concentrate on the presentation rather than the medium. webex, in my opinion, fails on this criterion. There are so many newer and easier-to-use conferencing services around that I was surprised webex provided such a poor usability experience.

Reason #2: In another post – Why in the world would Cisco buy WebEx? – Steve Borsch talks about the inherent value of webex’s proprietary MediaTone network. This could be called a Content Distribution Network (CDN), such as those operated by Akamai, Mirror Image or Digital Island (bought by Cable & Wireless a few years ago). You can see a Flash overview of MediaTone on their web site.

The Flash presentation talks about this as an “Internet overlay network” that provides better performance than the unpredictable Internet, but as an individual user of webex I was still forced to access webex services via the Internet. I assume that MediaTone is a backbone network interconnecting webex’s data centres. It seems strange to me that an applications company like webex felt the need to spend several $bn on building its own network when perfectly adequate networks could be bought in from the likes of Level 3 quite easily and at low cost. In the presentation, webex says that it started to build the network a decade ago, when it could have been seen as a value-added differentiator. More likely, it was actually needed for the company’s applications to work adequately, as the Internet was so poor from a performance perspective in those days.

I have no profound insights into Cisco’s M&A strategy, but this particular acquisition brings Cisco into potential competition with two of its customer sectors at a stroke – on-line application vendors and the carrier community. This does strike me as a little perverse.

