The magic of ‘presence’

March 20, 2007

Presence has been one of the in-words of the telecoms and web industry for the last few years. It sits alongside location-based services as a capability that is still to “realise its full potential”. A Wikipedia overview of presence can be found here.

I have used Google Alerts for the last couple of years to track market activity of technologies and companies that are of interest to me and presence has been one of the key words that I have looked at. These are typical of the results I see pouring into my in-tray each week:

As you can see, I haven’t seen too many announcements about presence from a telecommunications perspective!

Presence is a very broad church and, like the word platform, everyone uses it with their own interpretation. Wikipedia defines presence as follows: “A user client may publish a presence state to indicate its current communication status.” Let’s go through some examples of presence as it is used today.

One of the simplest examples of presence has been around for many years and can be seen in email clients like Outlook in the form of the Out of Office auto-reply message. I say simple, but in typical Microsoft fashion it can be quite complicated to set up if you are not using an Exchange server. That aside, this facility lets you indicate to anyone who sends you an email that you are away from the office for a time. You can use the message just to say that you are not around, or make it more helpful by providing an alternative contact.

The Out of Office feature brings us straight away to the fact that using it can create problems! For example, I am a member of many newsgroups, and Out of Office auto-responses cause trouble there because everyone in the newsgroup receives them following each and every post. If the group is very active, they can really build up and rapidly become an irritant.

Another simple use of presence information can be found on web sites and in email signatures. The one shown on the left is from an email from VerticalResponse, where they show whether their support organisation is open. This is an application of presence showing the availability of the support team.

This shows an interesting aspect of presence. Someone may be present but do they wish to be available? These are different concepts and need to be considered separately. Rolling them together can create all sorts of problems as we will see later.

The VerticalResponse signature shown above contains a live presence element. When Live Chat is available it is shown in green, and I assume that when it is not available it is shown as Not Available in red (I haven’t seen this). One company that helps provide this type of capability is Contact at Once. According to their web site, they use their presence engine to “continually monitor the availability of advertiser sales representatives across multiple devices and aggregates the availability status of each representative into an advertiser-level availability or ‘presence’.”

Another company that provides a presence engine is Jabber: “By integrating presence—i.e., information about the availability of entities, end-points, and content for communication over a network—into applications, devices, and systems you can streamline processes and increase the velocity of information within your organization. Discover the latest best practices organizations are implementing to take advantage of the benefits of adding presence to business processes.”

Although they offer the Instant Messaging services described below, they focus on integrating presence information into enterprise process flows to increase the efficiency of business processes. The available flag is an example of this, as is the automatic routing of internal calls to an available expert or an available person in a company’s call centre.

One of the most common applications of presence is in Instant Messaging (IM) services, where you are able to set your status to On-line or Away. The picture on the right shows the status options in Microsoft Messenger.

On the left below is Yahoo Messenger. It is interesting to note the addition of the Invisible to Everyone option. Why is this supplied I wonder? Most instant messaging services and some PC-to-PC VoIP services provide an option to set your availability status. Many users have found that this capability can create real problems.

On-line status. When you boot your PC, your status is set to On-line automatically. Annoyingly, this often leads to several of your buddies saying “Hi” or work colleagues asking you questions immediately!

Away status. You can choose to have an Away status set automatically after, say, 5 minutes. The IM or VoIP service decides that you are away because you have not touched your keyboard or mouse. As soon as you return to your PC and touch the keyboard, you are placed On-line again and are open to immediate interruption just as you start working. This is exactly the opposite of what you want!

Multiple services. When you use several services such as IM, VoIP or a calendar, it is highly unlikely that you will set your status on every application before you leave your desk, so they will not reliably reflect your real status. (A sketch of the auto-away logic behind the second of these problems follows.)
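The auto-away mechanism is simple enough to sketch. Below is a minimal, hypothetical illustration in Python (not any vendor’s actual code) of the idle-timeout logic these clients appear to use; the behaviour complained about above falls straight out of the input handler.

```python
import time

AWAY_AFTER_SECS = 5 * 60  # mark the user Away after 5 idle minutes

class PresenceTracker:
    """Hypothetical sketch of an IM client's idle-timeout logic."""

    def __init__(self):
        self.status = "On-line"
        self.last_input = time.time()

    def on_input(self):
        """Called on any keyboard or mouse activity."""
        self.last_input = time.time()
        # The problem described above: the first keypress on your return
        # flips you straight back to On-line, inviting interruption just
        # as you start working. A short grace period here would fix it.
        self.status = "On-line"

    def tick(self):
        """Called periodically to age the status."""
        if time.time() - self.last_input > AWAY_AFTER_SECS:
            self.status = "Away"
```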

These problems can become so severe that the only solution for many is to opt to appear Off-line permanently, destroying any benefit of sharing status information.

One of the problems with presence being built into many on-line applications is that you need to set your status on every single one of them every time it changes, or you will not see any benefit. There are quite a few companies who aggregate presence information. One such company is PRESENCEWORKS, which enables you to integrate presence information from instant messaging into your pre-existing business software as shown below.
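PRESENCEWORKS does not publish how its aggregation works, but the core of any presence aggregator can be pictured as a precedence-based merge of the statuses reported by each service. A minimal sketch, with an invented precedence order and made-up service names:

```python
# Rank statuses so the merge can pick the most meaningful one.
# This ordering is an assumption; a real aggregator would make it configurable.
PRECEDENCE = {"Busy": 3, "On-line": 2, "Away": 1, "Off-line": 0}

def aggregate_presence(statuses):
    """Merge per-service statuses (IM, VoIP, calendar...) into one.

    statuses: dict such as {"messenger": "Away", "voip": "On-line"}
    """
    if not statuses:
        return "Off-line"
    return max(statuses.values(), key=lambda s: PRECEDENCE.get(s, 0))

print(aggregate_presence({"messenger": "Away", "voip": "On-line", "calendar": "Busy"}))
# -> "Busy": one up-to-date source outweighs the stale defaults elsewhere
```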

In my post Would u like to collaborate with YuuGuu? I showed their initial presence option. In its current simple form, this is of limited use because of the difference between presence and availability. For example, when I contacted YuuGuu, Philip was shown as being Available, but it was half an hour later when he came back to me saying he had been on another call, thus demonstrating that he was not available.

Many new social network services claim to use presence as a component of their service. This is so common it could be said to be ubiquitous. A good example of this is NeuStar, who provide “next generation messaging” technology to service providers. To quote their web site: “Presence services enable people within a community to keep connected anytime, any place. When they indicate their availability or see that their contacts are on-line, presence is a catalyst for interactive services for those users who demand an enriched communications environment.”

This capability is similar to that seen in IM services. If your mobile phone is on, you are deemed to be available unless you manually set your status to be unavailable.

Another provider of presence technology is iotum who produce a Relevance Engine™ that can be used as the basis of a number of services.

One of the core applications is call handling: When someone calls you, iotum’s Relevance Engine instantly identifies the caller and cross-references their identity with your address book. Within a fraction of a second, iotum understands the relationship you have with this person.

Just as quickly, iotum accesses your IM presence status and/or online calendar program to determine what you are doing at that moment. It determines which of your communications devices should receive the call and helps to ensure your phone will only ring if you want it to, based on your schedule, your defined preferences and your past behaviour.
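iotum’s Relevance Engine is proprietary, so the following is purely illustrative: a toy rule cascade in the shape of the decision just described (identify the caller, weigh the relationship against presence and calendar, pick a destination). All the rules and names are invented; the real engine also learns from past behaviour.

```python
def route_call(caller_id, address_book, presence, calendar_busy, vip_list):
    """Invented relevance rules: decide where (or whether) a call rings."""
    relationship = address_book.get(caller_id, "unknown")
    if relationship == "unknown":
        return "voicemail"         # strangers never interrupt you
    if caller_id in vip_list:
        return "mobile"            # VIP callers always get through
    if calendar_busy or presence == "Busy":
        return "voicemail"         # protect meetings and Busy status
    if presence == "On-line":
        return "desk_phone"        # at your desk, so ring it
    return "mobile"                # otherwise try the handset

# A known colleague calls while your calendar shows a meeting:
print(route_call("+441234567890",
                 {"+441234567890": "colleague"},
                 presence="On-line", calendar_busy=True, vip_list=set()))
# -> "voicemail"
```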

iotum has also launched a consumer presence service aimed at Blackberry users – Talk Now.

A recent mobile social networking service that uses presence is Jaiku from Finland. Jaiku uses what it terms Rich Presence. This is about texting presence updates to your community from any phone.

Note: The above is an old picture and shows how information on your mobile phone can be used to interpret your presence or availability.

These free-form messages can show availability and they can show location but will often just show an irrelevant message commenting on something that is currently happening to the text sender – just like SMS messages really!

Which brings us to Twitter. Twitter is similar in many ways to Jaiku, in that it is a “global community of friends and strangers answering one simple question: What are you doing? Answer on your phone, IM, or right here on the web!”

Sam Sethi at Vecosys recently posted a good update on Twitter and its ecosystem – The Twitterfication of the blogosphere. To me, the best description of Twitter is a microblog – that says it all.

One of the big challenges of today’s on-line world is information overload. There can be useful presence and availability information in Twitter updates if users control what they post, but I’m sure that this is often not the case. Twitter is useful and fun in a social context but, because of the baggage that goes along with it, I suspect that it will be of limited use in focused business applications today.

I do not use Twitter at the moment so I guess my last post shown above will remain as my current status forever?

This is but a brief overview of the world of presence and I have missed out many areas of interest. I will try to talk about these in future posts. Like location-based services, presence is full of intrigue and promise. We shall see what happens in coming years.

Addendum: I was planning to mention another presence aggregator that was started up by Jeff Pulver in 2006 – Tello. But, according to Goodbye, Tello, they are no more as of a few days ago. Tello took a technology-platform-integration approach to providing presence information; an approach which I believe to be flawed for any start-up – even one with deep pockets.

Addendum: A good post about new presence


webex + Cisco thoughts

March 19, 2007

I first read about the Cisco acquisition of WebEx on Friday when a colleague sent me a post from SiliconValley.com – It’s more than we wanted to spend, but look how well it fits. It’s synchronicity in operation again, of course, because I mentioned webex in my posting about a new application sharing company: Would u like to collaborate with YuuGuu? There are many other postings about this deal with a variety of views – some more relevant than others – TechCrunch for example: Cisco Buys WebEx for $3.2 Billion.

Although I am pretty familiar with Cisco’s acquisition history, I must admit that I was surprised at this opening of the chequebook, for several reasons.

Reason #1: usability. I used webex quite a lot last year and really found it quite a challenge to use.

(a) When using webex there are several windows open on your desktop, making its use quite confusing. At least once I closed the wrong window, thus accidentally closing the conference. As I was just concluding a pitch, I was more than unhappy as it closed both the video and the audio components of the conference! I had broken my golden rule of keeping audio bridging and application sharing as separate services.

(b) When using webex’s conventional audio bridge, you have to open the conference using a webex web site page beforehand. If you fail to do so, the bridge cannot be opened and everyone receives an error message when they dial in. Correcting this takes about 5 minutes. Even worse, you cannot use the audio bridge on a standalone basis without having access to a PC – not good when travelling.

(c) The UI is over-complicated and challenging for users under the pressure of giving a presentation. Even the invite email that webex sends out is confusing – the one below is typical. Although this example is the one sent to the organiser, the ones sent to participants are little better.

Hello Chris Gare,
You have successfully scheduled the following meeting:
TOPIC: zzzz call
DATE: Wednesday, May 17, 2006
TIME: 10:15 am, Greenwich Standard Time (GMT -00:00, Casablanca ) .
MEETING NUMBER: 705 xxx xxx
PASSWORD: xxxx
HOST KEY: yyyy
TELECONFERENCE: Call-in toll-free number (US/Canada): 866-xxx-xxxx
Call-in number (US/Canada): 650-429-3300
Global call-in numbers: https://webex.com/xxx/globalcallin.php?serviceType=MC&ED=xxxx
1. Please click the following link to view, edit, or start your meeting.
https://xxx.webex.com/xxx/j.php?ED=87894897
Here’s what to do:
1. At the meeting’s starting time, either click the following link or copy and paste it into your Web browser:
https://xxx.webex.com/xxx/j.php?ED=xxxxx
2. Enter your name, your email address, and the meeting password (if required), and then click Join.
3. If the meeting includes a teleconference, follow the instructions that automatically appear on your screen.
That’s it! You’re in the web meeting!
WebEx will automatically setup Meeting Manager for Windows the first time you join a meeting. To save time, you can setup prior to the meeting by clicking this link:
https://xxx.webex.com/xxx/meetingcenter/mcsetup.php
For Help or Support:
Go to https://xxx.webex.com/xxx/mc, click Assistance, then Click Help or click Support.
………………..end copy here………………..
For Help or Support:
Go to https://xxx.webex.com/xxx/mc, click Assistance, then Click Help or click Support.
To add this meeting to your calendar program (for example Microsoft Outlook), click this link:
https://xxx.webex.com/xxx/j.php?ED=87894897&UID=480831657&ICS=MS
To check for compatibility of rich media players for Universal Communications Format (UCF), click the following link:
https://xxx.webex.com/xxx/systemdiagnosis.php
http://www.webex.com
We’ve got to start meeting like this(TM)

Giving presentations on-line is a stressful process at the best of times, and the application sharing service needs to be so simple to use that you can concentrate on the presentation, not the medium. webex, in my opinion, fails on this criterion. There are so many newer and easier-to-use conferencing services around that I was surprised that webex provided such a poor usability experience.

Reason #2: In another posting – Why in the world would Cisco buy WebEx? – Steve Borsch talks about the inherent value of webex’s proprietary MediaTone network. This could be called a Content Distribution Network (CDN), such as those operated by Akamai, Mirror Image or Digital Island (bought by Cable and Wireless a few years ago). You can see a flash overview of MediaTone on their web site.

The flash presentation talks about this as an “Internet overlay network” that provides better performance than the unpredictable Internet, but as an individual user of webex I was still forced to access webex services via the Internet as this was unavoidable. I assume that MediaTone is a backbone network interconnecting webex’s data centres. It seems strange to me that an applications company like webex felt the need to spend several $bn building its own network when perfectly adequate networks could be bought in from the likes of Level3 quite easily and at low cost. In the flash presentation, webex says that it started to build the network a decade ago, when it could have been seen as a value-added differentiator. More likely, it was actually needed for the company’s applications to work adequately, as the Internet was so poor from a performance perspective in those days.

I have no profound insights into Cisco’s M&A strategy, but this particular acquisition brings Cisco into potential competition with two of its customer sectors at a stroke – on-line application vendors and the carrier community. This does strike me as a little perverse.


The insistent beat of Netronome!

March 15, 2007

Last week I popped in to visit Netronome at their Cambridge office and was hosted by David Wells, their VP Technology and GM Europe, who was one of the founders of the company. The other two founders were Niel Viljoen and Johann Tönsing, who previously worked for companies such as FORE Systems (bought by Marconi), Nemesys, Tellabs and Marconi. Netronome is HQed in Pittsburgh but has offices in Cambridge UK and South Africa.

I mentioned Netronome in a previous post about network processors – The intrigue of network / packet processors – so I wanted to bring myself up to date with what they have been up to since closing a $20M ‘C’ funding round in November 2006, led by 3i.

What do Netronome do?

Netronome manufacture network processor based hardware and software that enables the development of applications that need to undertake real-time network content flow analysis. Or to be more accurate, enable significant acceleration and throughput for applications that need to undertake packet inspection or maybe deep packet inspection.

I say “to be more accurate” because it is possible to monitor packets in a network without the use of network processors, using a low-cost Windows or Linux based computer; but if the data is flowing through a port at gigabit rates – which is most likely these days – then there is little capability to react to a detected traffic type other than switching the flow to another port or simply blocking it. If you really want to detect particular traffic types in a gigabit packet flow, make an action decision, and change some of the data bits in the header or body, all transparently and at full line speed, then you will undoubtedly need a network processor based card from a company like Netronome. The 16 micro-engine Intel network processor used in Netronome’s products enables the inspection of upwards of 1 million simultaneous bidirectional flows.
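To make “flow analysis” concrete, the sketch below keys traffic on the classic 5-tuple and caches a per-flow action – essentially the lookup an NP performs in hardware, at full line rate, across those million flows. The classifier and actions are invented for illustration:

```python
from collections import namedtuple

# The classic 5-tuple that identifies a bidirectional flow
FlowKey = namedtuple("FlowKey", "src_ip dst_ip src_port dst_port protocol")

flow_table = {}  # FlowKey -> action; an NP holds millions of these entries

def classify(packet):
    """Invented classifier: a real DPI engine matches payload signatures."""
    if packet["dst_port"] == 5060:
        return "inspect_voip"      # e.g. flag possible grey VoIP traffic
    return "forward"

def process(packet):
    key = FlowKey(packet["src_ip"], packet["dst_ip"],
                  packet["src_port"], packet["dst_port"], packet["protocol"])
    # Classify once per flow, then apply the cached action to every
    # subsequent packet: the per-packet lookup is what must run at line rate.
    if key not in flow_table:
        flow_table[key] = classify(packet)
    return flow_table[key]
```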

Netronome’s product is termed an Open Appliance Platform. Equipment vendors have used network processors (NPs) for many years. For example, Cisco, Juniper and the like would use them to process packets on an interface card or blade. This would more than likely be an in-house developed NP architecture used in combination with hard-wired logic and Field Programmable Gate Arrays (FPGAs). This combination gives complete flexibility: run what is best run in software on the NP, and use the FPGAs to accommodate architecture elements that may change – because standards are incomplete, for example.

Netronome’s Network Acceleration Card

Other companies that have used NPs for a long time make what are known as network appliances. A network appliance is a standalone hardware / software bundle, often based on Linux, that provides a plug-and-play application that can be connected to a live network with a minimum of work. Many network appliances simply use a server motherboard with two standard gigabit network cards installed, Linux as the OS, and the application on top. These appliance vendors know that they need the acceleration they can get from an NP, but they often don’t want to deal with the complexity of hardware design and NP programming.

Either way, they have written their application-specific software to run on top of their own hardware design. Every appliance manufacturer has taken a proprietary approach, which creates a significant support challenge as each new generation of NP architecture improves throughput. Being software vendors in reality, all they really want to do is write software and applications, not have the bother of supporting expensive hardware.

This is where Netronome’s Open Appliance Platform comes in. Netronome has developed a generic hardware platform and the appropriate virtual run-time software that enable appliance vendors to dump their own challenging-to-support hardware and use Netronome’s NP-based platform instead. The important aspect is that this can be achieved with minimal change to their application code.

What are the possible applications (or use cases) of Netronome’s Network Acceleration card?

The use of Netronome’s product is particularly beneficial as the core of network appliances in the following application areas.

Security: All types of enterprise network security application that depend on the inspection and modification of live network traffic.

SSL Inspector: The Netronome SSL Inspector is a transparent proxy for Secure Sockets Layer (SSL) network communications. It enables applications to access the clear text in SSL-encrypted connections and has been designed for security and network appliance manufacturers, enterprise IT organizations and system integrators. The SSL Inspector allows network appliances to be deployed with the highest levels of flow analysis while still maintaining multi-gigabit line-rate network performance.

Compliance and audit: To ensure that all company employees comply with new regulatory regimes, companies must voluntarily discover, disclose, expeditiously correct, and prevent recurrence of violations.

Network access and identity: To check the behaviour and personal characteristics by which an individual is defined as a valid user of an application or network.

Intrusion detection and prevention: This has always been a heartland application for network processors.

Intelligent billing: By detecting a network event or a particular traffic flow, a billing event could be initiated.

Innovative applications: To me this is one of the most interesting areas, as it depends on having a good idea. Applications could include modifying QoS parameters on the fly in an MPLS network, or detecting particular application flows on the fly – grey VoIP traffic for example. If you want to know about other application ideas – give me a call!

Netronome’s Architecture components

Netronome Flow Drivers: The Netronome Flow Drivers (NFD) provide high speed connectivity between the hardware components of the flow engine (NPU and cryptography hardware) and one or more Intel IA / x86 processors running on the motherboard. The NFD allows developers to write their own code for the IXP NPU and the IA / x86 processor.

Netronome Flow Manager: The Netronome Flow Manager (NFM) provides an open application programming interface for network and security appliances that require acceleration. The NFM not only abstracts (virtualises) the hardware interface of the Netronome Flow Engine (NFE), but its interfaces also guide the adaptation of applications to high-rate flow processing.

Overview of Netronome’s architecture components

Netronome real-time Flow Kernel: At the heart of the platform’s software subsystem is a real-time microkernel specialized for network infrastructure applications. The kernel coordinates and steers flows, rather than packets, and is thus called the Netronome Flow Kernel (NFK). The NFK does everything the NFM does and also supports virtualisation.

Open Appliance Platform: Netronome have recently announced a chassis system that can be used by ISVs to quickly provide a solution to their customers.

Round-up

If your application or service really needs a network processor, you will realise it quite quickly: the performance of your non-NP based network application will be too slow, it will be unable to undertake the real-time bit manipulation you need or, the real killer, it will be unable to scale to the flow rates your application will see in real-world deployment.

In the old days, programming NPs was a black art not understood by 99.9% of the world’s programmers, but Netronome is now making the technology more accessible by providing appropriate middleware – an abstraction layer – that enables network appliance software to be ported to their open platform without a significant rewrite or a detailed understanding of NP programming. Your application just runs in a virtual run-time environment and uses the flow API, and the Netronome product does the rest.
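I have no visibility of Netronome’s actual API, but the porting story amounts to something like the sketch below: the appliance code stops handling raw packets itself and instead registers a handler against a flow API, while the middleware decides what runs on the NP and what runs on the host x86. Everything here is illustrative:

```python
class FlowAPI:
    """Stand-in for the kind of flow API such middleware might expose."""

    def __init__(self):
        self.handlers = []

    def on_flow(self, handler):
        """The appliance registers its interest in flows here."""
        self.handlers.append(handler)

    def deliver(self, flow):
        # In the real product flows arrive pre-assembled from the NP;
        # here we drive delivery by hand for the demonstration.
        for handler in self.handlers:
            handler(flow)

def my_appliance_logic(flow):
    """The ported application code: pure software, no NP knowledge."""
    if flow["bytes"] > 10_000_000:
        print("heavy flow:", flow["key"])

api = FlowAPI()
api.on_flow(my_appliance_logic)
api.deliver({"key": ("10.0.0.1", "10.0.0.2", 80), "bytes": 12_000_000})
```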

Good on ’em I say.


Ethernet goes carrier grade with PBT / PBB-TE (PBBTE)?

March 13, 2007

Alongside IP, Ethernet was always one of the ‘chosen’ protocols as it dominated enterprise Local Area Networks (LANs). I didn’t actually write about Ethernet back in the early 1990s because, at the time, it did not have a role in the Wide Area Network (WAN) space as a public data service protocol. As I mentioned in my post – The demise of ATM – it was assumed by many in the mid-1990s that Ethernet had reached the end of its life (NOT by users of Ethernet, I might add!) and that ATM was the strategic direction for LAN protocols. This was so much taken for granted by the equipment vendor industry that they spent vast amounts of money building ATM divisions to supply this perceived burgeoning market. But this was just not to be. (Picture credit: Nortel)

The principal reason for this was that the ATM activists failed to understand that attempting to displace a well-understood and trusted technology with a new and unknown variety was a challenge too far. Moreover, ATM was so different that it would have required a complete replacement of not only 100% of LAN-related network equipment but much of the personal computer hardware and software as well.

What was ATM supposed to bring to the LAN community? A promise of complete compatibility between LANs and WANs, and an increase in the speed of IEEE 802.3 10Mbit/s LAN backbones that was so desperately needed at the time.

ATM and Ethernet battled it out in the public LAN arena for only a short time as it was such a one-sided battle. With the arrival of the 100Mbit/s 100BASE-T Ethernet standard, the Keep It Simple Stupid (KISS) approach gained dominance in the minds of users yet again. Why would you throw out years of investment for a promise of ATM jam tomorrow? Upgrading Ethernet LAN backbones to 100Mbit/s was simple, as it only necessitated swapping out LAN interface cards and upgrading to better cabling. Most importantly, there was no major requirement to update network software or applications. It was so much a no-brainer that it is now hard to see why ATM was ever thought to be the path forward.

This enigma lives on today, and it is caused by the difference in views between the private and the public network industries. ATM came out of the public WAN world, whereas IP and Ethernet came out of the private LAN world. Chalk and cheese in reality.

Once the dust had settled on the battles between ATM and Ethernet in the LAN market, the battle moved to the telecommunications WAN space and into the telco’s home territory.

Ethernet moves into the WAN space

I first heard about Ethernet being proposed as a wide area protocol in the mid-1990s when I visited Nortel’s R&D labs in Canada. It was one of those moments I still remember quite well. It was in a small back room. I don’t remember any equipment being on display; if I remember correctly, all that was being shown were some blown-up diagrams of a proposed public Ethernet architecture. Now, I would not claim that Nortel invented the idea of Ethernet’s use in the public sphere, as I’m sure that if I had visited 3Com’s R&D centre (the home of Ethernet) or other vendors’ labs, I would have seen similar ideas being articulated.

The thought of Ethernet being used in WANs had not occurred to me before that moment and it really made me sit back and think (maybe I should have acted!). However, if Ethernet is ever going to be a credible wide area layer-2 transport protocol, it needs to be able to transparently transport any of the major protocols used in a converged network. This capability is provided in IP through the use of pseudowires and MPLS. This is where PBB-TE comes to the fore.

Over the next few years a whole new sector of the telecommunications industry started; this was termed Metropolitan Area Networks (MANs) or, more simply, metro networks. In the latter half of the 1990s – the heyday of carrier start-ups – many new telcos were set up using a new architecture paradigm based on Ethernet over optical fibre. The idea was that metro networks would sit between enterprise networks and traditional wide-area telco networks. Metro networks would inter-connect enterprise offices at the city level, aggregate data traffic that needed to be transported long distances and deliver it to telcos, who could ship that traffic as required to other cities or countries using frame relay services.

Many of these metro players ceased trading during the recent challenging years, but many were refinanced and resurrected to live again.

(Picture credit: exponential-e) It is very interesting to note that throughout that phase of the industry, and even up to today, Ethernet has not gained full acceptance as a viable public service protocol from the wider telecommunications industry. Until a few years ago, outside of the specialist metro players, very few traditional carriers offered wide-area Ethernet services at all. This is quite amazing when you consider that it flew in the face of strong requests from enterprises, who all wanted them. What turned this round, to a degree, was the deployment of MPLS backbones by carriers, who could then offer Ethernet as an access protocol to enterprises but handle Ethernet services on their network through an MPLS tunnel.

Bringing ourselves up to the present time, traditional carriers are starting to offer layer-2 Ethernet-based Virtual Private Networks (VPNs) in parallel with their offerings of layer-3 MPLS-based IP-VPNs. One of the interesting companies that has focused on Ethernet in the UK is exponential-e, who I will be writing about soon.

The wide area protocol battle restarts

What is of interest today is the still-open issue of Ethernet over fibre being a real alternative to a core architecture based on MPLS. It could be said that this battle is really starting today, but it has been very slow in coming. Ethernet LAN <-> Ethernet WAN <-> Ethernet LAN would seem to be a natural and simple option.

Ethernet is dominant in carriers’ customers’ LANs, Ethernet is dominant in the metro networks, but Ethernet has had little impact in WAN networks. Partially, this is to do with technology politics and a distinct reluctance of traditional carriers to offer Ethernet services (in spite of insistent siren calls from their customers). It could be said that the IP and MPLS bandwagon has taken all the money, time and strategy efforts of the telcos, leaving Ethernet in the wings.

It should be said that this also has to do with the technical challenges associated with delivering Ethernet services over longer distances than those seen in metro networks. This is where a new initiative called Provider Backbone Transport (PBT) comes into play, which could provide a real alternative to the almost-by-default use-of-MPLS core strategy of most traditional carriers. Interestingly, reminding me of my visit to Canada, Nortel, along with Siemens, is one of the key players in this market. Here is an interesting presentation – Highly Scalable Ethernets by Paul Bottorff, Chief Architect, Carrier Ethernet, Nortel.

PBT is a group of enhancements to Ethernet that are being defined in the IEEE’s Provider Backbone Bridge Traffic Engineering (PBB-TE) work – phew, that’s a mouthful.

PBB-TE is all about separating the Ethernet service layer from the network layer, thus enabling the development of carrier-grade public Ethernet services such as those outlined by Nortel:

  • Traffic engineering and hard QoS: Provider Backbone Transport enables service providers to traffic engineer their Ethernet networks. PBT tunnels reserve appropriate bandwidth and support the provisioned QoS metrics that guarantee SLAs will be met without having to overprovision network capacity.
  • Flexible range of service options: PBT supports multiplexing of any service inside PBT tunnels – including both Ethernet and MPLS services. This flexibility allows service providers to deliver native Ethernet initially and MPLS-based services (VPWS, VPLS) if and when they require.
  • Protection: PBT not only allows the service provider to provision a point-to-point Ethernet tunnel, but to provision an additional backup tunnel to provide resiliency. In combination with IEEE 802.1ag these working and protection paths enable PBT to provide sub 50 ms recovery.
  • Scalability: By turning off MAC learning features, the undesirable broadcast behaviour that creates MAC flooding and limits the size of the network is removed. Additionally, PBT offers a full 60-bit addressing scheme that enables a virtually limitless number of tunnels to be set up in the service provider network (see the sketch after this list).
  • Service management: Because each packet is self-identifying, the network knows both the source and destination address in addition to the route – enabling more effective alarm correlation, service-fault correlation and service-performance correlation.
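On the scalability point above: my understanding is that the 60 bits are the 48-bit destination backbone MAC address plus a 12-bit backbone VLAN ID, so a tunnel is identified simply by concatenating the two. A sketch of the arithmetic (the field layout is my assumption, not taken from the standard):

```python
def pbt_tunnel_id(dest_mac, vlan_id):
    """Combine a 48-bit backbone MAC and a 12-bit VLAN ID into 60 bits."""
    mac = int(dest_mac.replace(":", ""), 16)
    assert mac < 2**48 and 0 <= vlan_id < 2**12
    return (mac << 12) | vlan_id

tid = pbt_tunnel_id("00:1b:25:aa:bb:cc", 101)
print(hex(tid))   # one of 2**60 possible tunnel identifiers
print(2**60)      # ~1.15e18, which is virtually limitless indeed
```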

Ethernet has a long 30-year history since it was invented by Robert Metcalfe in the early 1970s. It went on to dominate enterprise networking and intra-city local networking; only wide area networks are holding out. MPLS has taken centre stage as the ATM-replacement technology providing the level of Quality of Service (QoS) needed by today’s multimedia services. But MPLS is expensive and challenging to operate on a large scale, and maybe Ethernet can catch up with the long-overdue enhancements brought about by PBB-TE.

In Where now frame relay? I talked about how frame relay took off once X.25 was cleared of the cumbersome overheads that made it expensive to use as a wide area protocol. PBB-TE / PBT could achieve a similar result for Ethernet. Maybe this initiative will clear the current log jam and enable Ethernet to take up its rightful position in the public service space alongside MPLS. The battle could be far from over!

The big benefit at the end of the day is that PBB-TE is a layer-2 technology, so a carrier deploying it would not need to buy additional layer-3 infrastructure and OSS in the form of routers and MPLS, thus, hopefully, representing a significant cost saving for roll-out.

Addendum: BT have recently committed to deploy Nortel’s PBT technology (presentation). As can be read in the Light Reading analysis:

[BT] is keen to play down any PBT versus MPLS positioning. He says the deployment at BT shows how PBT and MPLS can co-exist. “It’s not a case of PBT versus MPLS. PBT will be deployed in the metro core and interface back into BT’s MPLS backbone network.”

It’s quite easy to understand why BT would want to say this!

Addendum #1: A competitor to PBT / PBB-TE is TMPLS – see my post
Addendum #2: Marketing Presentations of the Metro Ethernet Forum

Addendum: One of the principal industry groups promoting and supporting carrier-grade Ethernet is the Metro Ethernet Forum (MEF), and in 2006 they introduced their official certification programme. Certification is currently only available to MEF members – both equipment manufacturers and carriers – to certify that their products comply with the MEF’s Carrier Ethernet technical specifications. There are two levels of certification:

MEF 9 is a service-oriented test specification that tests conformance of Ethernet Services at the UNI inter-connect where the Subscriber and Service Provider networks meet. This represents a good safeguard for customers that the Ethernet service they are going to buy will work! Presentation or High bandwidth stream overview

MEF 14: A new level of certification that covers hard QoS, a very important aspect of service delivery not covered in MEF 9. MEF 14 certifies that Carrier Ethernet services meet the Service Level Specifications that back hard QoS guarantees, both for business services sold to corporate customers and for carriers’ triple-play data/voice/video services. Presentation.

Addendum: Enabling PBB-TE – MPLS seamless services


Nexagent, an enigmatic company?

March 12, 2007

In a recent post, MPLS and the limitations of the Internet, I wrote about the challenge of expecting predictable performance for real-time services such as client / server applications, Voice over IP (VoIP) or video services over the public Internet. I also wrote about the challenges of obtaining predictable performance for these same applications on a Wide Area Network (WAN) using IP-VPNs when the WAN straddles multiple carriers – as they almost always do. This is brought about by the fact that the majority of carriers do not currently inter-connect their MPLS networks to enable seamless end-to-end multi-carrier Class-of-Service based performance.

As mentioned in the above post, there are several companies that focus on providing this capability through a mixture of technologies, monitoring and a willingness to act as a prime contractor if they are a service provider. Today, however, the majority of carriers are only able to provide Service Level Agreements (SLAs) for IP traffic and customer sites that are on their own network. This forces enterprises of all sizes either to manage their own multi-carrier WANs or to outsource the task to a carrier or systems integrator that is willing to offer a single umbrella SLA and back this off with separate SLAs to each component provider carrier.

Operations Support System (OSS) vendor challenges

An Operations Support System is the software that handles workflows, management, inventory details, capacity planning and repair functions for service providers. Typically, an OSS uses an underlying Network Management System to actually communicate with network devices. There are literally hundreds of OSS vendors providing software to carriers today, but it is interesting to note that the vast majority of these only provide software to help carriers manage their own network inside the cloud. In practice, each carrier uses a mixture of in-house and bought-in OSS, so each carrier has, in effect, a proprietary network and service management regime that makes it virtually impossible to inter-connect their own IP data services with those of other carriers.

As you would expect in the carrier world, there are a number of industry standards organisations working on this issue, but it is such a major challenge that I doubt OSS environments could be standardised sufficiently to enable simple inter-connect of IP OSSs anywhere in the near future – if ever. Some of these bodies are:

  • The IETF, who work at the network level such as MPLS, IP-VPNs etc;
  • The Telecom Management Forum, who have been working in the OSS space for many years;
  • The MPLS Frame Relay Forum, who “focus on advancing the deployment of multi-vendor, multi-service packet-based networks, associated applications, and interworking solutions”;
  • And one of the newest, IPSphere whose mission “is to deliver an enhanced commercial framework – or business layer – for IP services that preserves the fundamental ubiquity of the Internet’s technical framework and is also capable of supporting a full range of business relationships so that participants have true flexibility in how they add value to upstream service outcomes.”
  • The IT Infrastructure Library (ITIL), which is a “framework of best practice approaches intended to facilitate the delivery of high quality information technology (IT) services. ITIL outlines an extensive set of management procedures that are intended to support businesses in achieving both quality and value, in a financial sense, in IT operations. These procedures are supplier independent and have been developed to provide guidance across the breadth of IT infrastructure, development, and operations.”

As can be imagined, working in this challenging inter-carrier service and OSS space presents both a major opportunity and a major challenge. One company that has chosen to do just this is Nexagent. Nexagent was formed in 2000 by Charlie Muirhead – also founder of Orchestream, Chris Gare – ex Cable and Wireless, and Dave Page – ex Cisco.

In my travels, I often get asked “what is it that Nexagent actually does?”, so I would like to have a go at answering this question, having set the scene in a previous post – MPLS and the limitations of the Internet.

The traditional way of delivering WAN-based enterprise services or solutions based on managing multiple service providers (carriers) has a considerable number of challenges associated with it. This could be a company WAN formed by integrating IP-VPNs or simple E1 / T1 TDM bandwidth services bought in from multiple service providers around the world.

The most common approach is a proprietary solution, usually of an ad hoc nature, built up piecemeal over a number of years from earlier company acquisitions. The strategic idea would have been to integrate and harmonise these disparate networks, but there was usually never enough money to start, let alone complete, the project.

Many of these challenges can be seen listed in the panel on the right taken from Nexagent’s brochure. Anyone that has been involved in managing an enterprise’s Information and Communications Technology (ICT) infrastructure will be very well aware of these issues!

Overview of Nexagent’s software

Deploying an end-to-end service or application running on a WAN – or solution, as it is often termed – requires a combination of:

  1. workflow management: a workflow describes the order of a set of tasks performed by various individuals or systems to complete a given procedure within an organization; and
  2. supply chain management: A supply chain represents the flow of materials, information, and finances as they move in a process – or workflow – from one organisation or activity to the next.

In the situation where every carrier or service provider has adopted entirely different OSS combinations, workflow practices and supply chain processes, it is no wonder that every multi-carrier WAN represents a complex, proprietary and bespoke solution!

Nexagent has developed a unique abstraction or Meta Layer technology and methodology that manages and monitors the performance of an end-to-end WAN IP-VPN service or solution without the need for a carrier to swap-out their existing OSS infrastructure. Nexagent’s system runs in parallel with, and integrates with, multiple carriers’ existing OSS infrastructure and enables a prime-contractor to manage multi-carrier solutions in a coherent rather than an ad hoc manner.

Let’s go through what Nexagent offers following a standard process flow for deploying multi-supplier services or solutions using Nexagent’s adopted ICT Infrastructure Management (ICTIM) model.

Service or solution modelling: In the ICTIM reference model, this is the Design stage. This is a crucial step, focused on capturing all the enterprise service requirements and design as early in the process as possible, and maintaining that knowledge for the complete service lifecycle. Nexagent has developed a CAD-like modelling capability with front-end capture based on a simple-to-use Excel spreadsheet, as every carrier uses a different method of capturing their WAN designs. The model is created up front and acts as a reference benchmark if the design is changed at any future time. The tool provides price query capabilities for use with bought-in carrier services, together with automated design rule verification.

Implementation engine: In the ICTIM reference model, this is the Deploy stage. The software automatically populates the design created at the design stage with the required service provider interconnect configurations, generates service work orders for each service provider involved with the design and provisions the network interconnect to create a physical working network. The software is based on a unified information flow to network service providers. Importantly, it schedules the individual components of the end-to-end solution to meet the enterprise roll-out and change management needs.

Experience manager: In the ICTIM reference model, this is the Operate stage. The Nexagent software compares real-life end-to-end solution performance to the expected performance level as specified in the service or solution design. Any deviation from agreed component supplier SLAs will generate alerts into existing OSS environments.

The monitoring is characterised by active in-band measurement for each site and each CoS link by application group. It is closed-loop in nature, comparing actual performance to the expected performance stored in the service reference model. It can detect and isolate problems and includes optimisation and conformance procedures.
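In outline the closed loop is easy to picture: for each site and CoS, compare the measured figures against the targets held in the service reference model and raise an alert on any breach. A much-simplified sketch, with invented metric names and thresholds – not Nexagent’s actual logic:

```python
# Expected performance from the service reference model (values invented)
reference_model = {
    ("london", "voice"): {"latency_ms": 40, "loss_pct": 0.5},
    ("new_york", "data"): {"latency_ms": 90, "loss_pct": 1.0},
}

def check_sla(site, cos, measured):
    """Return an alert for each metric that exceeds its modelled target."""
    targets = reference_model[(site, cos)]
    return [f"{site}/{cos}: {metric} {value} exceeds target {targets[metric]}"
            for metric, value in measured.items()
            if value > targets[metric]]

# An in-band probe reports voice traffic from London running slow:
for alert in check_sla("london", "voice", {"latency_ms": 55, "loss_pct": 0.2}):
    print(alert)   # would be fed into the carriers' existing OSS alarms
```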

Physical network interconnect: Lastly, Nexagent developed its service interconnect template with network equipment vendors such as Cisco Systems; this physically enables the interconnection of IP-VPNs whose providers have chosen different, incompatible ways of defining their CoS-based services.

Who could use Nexagent’s technology?

Nexagent provides three examples of the application of their software – ‘use cases’ – on their web site:

Hybrid Virtual Network Operator: A service provider which has some existing network assets in some geographies but lacks the network reach and operational efficiency to win business from enterprises requiring out-of-territory service and customised solutions. Such a carrier could use Nexagent to extend its reach, standardise its off-net interface to save costs and ensure that services work as designed.

Data Centre Virtualisation: Data centre services are critical components of effective enterprise application solutions. A recent trend in data centre services is to share computing resources across multiple customers. Similarly, using multiple physical data centres enables more resilient and better performing services with improved load balancing. Using Nexagent simplifies the task of swapping the carriers that deliver services to customers and makes it easier to monitor overall service performance.

Third Party Service Delivery: One of the main obstacles for carriers to growing market share and expanding into adjacent markets is the time and money to develop and implement new services. While many service providers want a broader portfolio of services, there is growing evidence that enterprises want to use multiple companies for services as a way of maintaining supply chain negotiation leverage – what Gartner calls enterprise multi-sourcing.

Round up

This all may sound rather complicated, but the industry pain that Nexagent helps solve is quite straightforward to appreciate when you consider the complexity of multi-carrier solutions, and Nexagent has taken a pretty unique approach to solving that pain.

Although there is not too much information in the public domain about Nexagent’s commercial activities, there is a most informative presentation – MPLS Interconnection and Multi-Sourcing for the Secure Enterprise by Leo McCloskey, who was Senior Director, Network and Partner Strategy at EDS when he presented it (Leo is now Nexagent’s VP of Marketing). An element of one of the slides is shown below. You can also see a presentation by Charlie Muirhead from Nexagent – Case Study: Solving the Interconnect Challenge.

These, and other presentations from the conference can be found from the 2006 MPLScon conference proceedings at Webtorials archive site. You will need to register with Webtorials before you can access these papers.

If you are still unsure about what Nexagent actually does or how they could help your business – go visit them in Reading!

Note: I should declare a personal interest in Nexagent as a co-founder, though I am no longer involved with day to day activities.

Addendum:  March 2008, EDS hoovers up Reading networking firm


SONET – SDH, the great survivors

March 8, 2007

When I first wrote about Synchronous Digital Hierarchy (SDH) and SONET (SDH is the European version of SONET) back in 1992, it was seen as truly transformational for the network service provider industry. It marked a clear boundary from just continually enhancing an old asynchronous technology, belatedly called Plesiochronous Digital Hierarchy (PDH), to a new approach that could better utilise and manage the ever-increasing bandwidths then becoming available through the use of optical fibre. An up-to-date overview of SDH / SONET technology can be found in Wikipedia.

SONET was initially developed in the USA and adapted a little later for the rest of the world, where it was called SDH. This was needed because the rest of the world used different data rates to those used in the USA – something that later caused interesting inter-connect issues when connecting SONET to SDH networks. For the sake of this post, I will only use the term SDH from now on as, by installation base, SDH far outweighs SONET.

Probably even more amazing is that when it was launched, following many years of standardisation effort, it was widely predicted that, along with ATM, it would become a major transmission technology – and it has achieved just that. Although ATM hit the end stop pretty quickly and the dominance of IP was unforeseen at the time, SDH and SONET went on to be deployed by almost all carriers that offered traditional Public Switched Telephone Network (PSTN) voice services.

The benefits that were used to justify rollout of synchronous networking at the time pretty much panned out in practice.

  • Clock rates tightly synchronised within a network through the use of atomic clocks
  • Synchronisation enabled easier network inter-connect between carriers
  • Considerably simplified and reduced costs of extracting low data rate channels from high-data rate backbone fibre optic cables
  • Considerable reduction in management costs and overheads compared to PDH systems.

In the late 1990s, because SDH came out of the telecommunications world rather than the IT world, it was often considered to be a legacy technology along with ATM. This was driven by the fact that SDH is a Time Division Multiplexed (TDM) based protocol with its roots deeply embedded in the voice world, whereas the new IP-driven data world is packet based.

In reality, carriers had by this time made hefty commitments to SDH and they were not about to throw that money away as they had done with ATM. What carriers wanted was a network infrastructure that could deliver both traditional TDM-based voice and data services and the newer packet-based services, i.e. a true multi-service network. In many ways SDH has been a technology that has not only survived the IP onslaught but will be around for many years to come. It will certainly be very hard to displace.

From a layer perspective, IP packets are now generally delivered using an MPLS infrastructure that was put in place to replace ATM switching. MPLS sits on top of SDH, which in turn sits on top of Dense Wave Division Multiplexing (DWDM) optical fibre. DWDM will be the subject of a future post.

One interesting aspect of all this is that quite a few carriers that started up in the late 1990s (many didn’t survive the telecommunications implosion) looked to a future packet-based world and did not wish to provide traditional TDM-based voice and data services. To this breed of carrier, the deployment of SDH did not seem in any way sensible, and they looked to remove this seemingly redundant layer from their architecture by building a network where MPLS sat straight on top of DWDM. This is a common architecture today for a greenfield network start-up looking to deliver legacy voice and data services purely over an IP network.

A number of ‘improved’ SDH alternatives sprang up in the late 1990s. The most visible one being Cisco’s Dynamic Packet Transport (DPT) / resilient packet ring (RPR) technology. To quote Cisco at the time:

DPT is a Cisco-developed, IP+Optical innovation which combines the intelligence of IP with the bandwidth efficiencies of optical rings. By connecting IP directly to fiber, DPT eliminates unnecessary equipment layers thus enabling service providers to optimize their networks for IP traffic with maximum efficiencies.

DPT never really caught on with carriers for a variety of technical and political reasons.

Another European initiative came from a small start-up in Sweden at the time – net insight. This was called Dynamic Synchronous Transfer Mode (DTM). To quote net insight at the time:

DTM combines the advantages of guaranteed throughput, channel isolation, and inherent QoS found in SDH/SONET with the flexibility found in packet-based networks such as ATM and Gigabit Ethernet. DTM, first conceived in 1985 at Ericsson and developed by a team of network researchers including the three founders of Net Insight, uses innovative yet simple variable bandwidth channels.

Again, DTM failed to gain market traction.

SDH has a massive installed base in 2007 and continues to grow, albeit at a steady pace. For those carriers that have already deployed SDH, it is pretty much a no-brainer to carry on using it, while new carriers who focus on delivering all services over a converged IP network would never deploy SDH.

SDH has always managed to keep up with the exploding data rates available on DWDM fibre systems, so it will maintain its position in carrier networks until incumbent carriers really decide to throw everything away and build fully converged networks based on IP. There are a lot of eyes on BT at present!

SDH extensions

In recent years, there have been a number of extensions to basic SDH to help it migrate to a packet oriented world:

Generic Framing Procedure (GFP): To make SDH more packet-friendly, the ITU, ANSI, and IETF have specified standards for transporting various services such as IP, ATM and Ethernet over SONET/SDH networks. GFP is a protocol for encapsulating packets over SONET/SDH networks.

Virtual Concatenation (VCAT): Packets in data traffic such as Packet over SONET (POS) are concatenated into larger SONET / SDH payloads to transport them more efficiently.

Link Capacity Adjustment Scheme (LCAS): When customers’ needs for capacity change, they want the change to occur without any disruption to the service. LCAS, a VCAT control mechanism, provides this capability.
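To see the arithmetic behind VCAT, consider sizing a virtual concatenation group for Fast Ethernet: pick the smallest number of containers whose combined payload covers the service rate, and LCAS can then add or remove members in service. The payload figures below are approximate and for illustration only:

```python
import math

# Approximate SDH container payload capacities in Mbit/s (illustrative)
CONTAINER_RATE = {"VC-12": 2.176, "VC-3": 48.384, "VC-4": 149.76}

def vcat_group(service_mbps, container):
    """Members needed so the group's payload covers the service rate."""
    n = math.ceil(service_mbps / CONTAINER_RATE[container])
    return n, round(n * CONTAINER_RATE[container], 3)

# 100 Mbit/s Ethernet onto VC-12s: 46 members, ~100.1 Mbit/s of payload,
# far better than burning a whole 149.76 Mbit/s VC-4 on it
print(vcat_group(100, "VC-12"))   # -> (46, 100.096)
```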

These standards have helped SDH / SONET to adapt to a packet based world which was missing in the original protocol standards of the early 1990s.

A more detailed overview of these SDH extensions is provided by Cisco.

At the end of the day, there seem to be four transmission technologies that lie at the core of networks: IP, MPLS, the optical transport hierarchy (OTH) and, if the carrier is a traditional telco, SDH / SONET. It will be interesting to see how this pans out in the next decade. Have we reached the end game now? Are there other approaches that will start to come to the fore? What is the role of Ethernet? These are some interesting questions I will attempt to tackle in future posts.

The follow-on to this post is: Making SDH, DWDM and packet friendly


Panel Defies VC Wisdom

March 7, 2007

This video came across my desk recently from AlwaysOn and it is certainly entertaining and informative!

According to Guy Kawasaki, the panel’s moderator:

At the CommunityNext conference I moderated this panel with the founders of six very successful web properties: Akash Garg of hi5, Sean Suhl of Suicide Girls, Max Levchin of Slide, James Hong of HotorNot, Markus Frind of PlentyofFish, and Drew Curtis of Fark.

This is the most amusing panel that I’ve ever moderated, and the speakers defied many conventions of tech entrepreneurship—in particular the ones that venture capitalists believe are “proven.” If you’d like to learn how these companies became successful without “proven teams, proven technology, and proven business models,” you’ll love this video. Here’s a little factoid that blew my mind: Both Fark and PlentyofFish have only one employee!