Making SDH and DWDM packet friendly

March 30, 2007

Back in 1993, I wrote about the advances taking place in fibre optic technologies and optical amplifiers. At that time, technology development was principally concerned with improving transmission distances using optical amplifier technology and increasing data rates. These optical fibres carried a single wavelength and hence provided a single data channel. Wide area traffic in the early 1990s was principally dominated by Public Switched Telephone Network (PSTN) telephony traffic, as this was well before the explosion in data traffic caused by the Internet. When additional throughput was required, it was relatively simple to lay down additional fibres in a terrestrial environment. Indeed, this became standard procedure to the extent that many fibres were laid in a single pipe with only a few being used, or lit as it was known. Unlit fibre strands were called dark fibre. For terrestrial networks, when increasing traffic demanded additional bandwidth on a link, it was a simple job to add additional ports to the appropriate SDH equipment and light up an additional dark fibre.

Wave Division Multiplexing (Picture credit: photeon)

In undersea cables, adding additional fibres to support traffic growth was not so easy, so the concept of Wave Division Multiplexing (WDM) came into common usage for point-to-point links (the laboratory development of WDM actually went back to the 1970s). The use of WDM enabled transoceanic carriers to upgrade the bandwidth of their undersea cables without the need to lay additional cables, which would cost multiple billions of dollars.

As shown in the picture, a WDM-based system uses multiple wavelengths, thus multiplying the available bandwidth by the number of wavelengths that can be supported. The number of wavelengths that could be used and the data rate on each wavelength were limited by the quality of the optical fibre that was being upgraded and the current state-of-the-art of the optical termination electronics. Multiplexers and de-multiplexers at either end of the cable aggregated and split the combined data into separate channels by converting to and from electrical signals.

A number of WDM technologies or architectures were standardised over time. In the early days, Coarse Wavelength Division Multiplexing (CWDM) was relatively proprietary in nature and meant different things to different companies. CWDM combines up to 16 wavelengths onto a single fibre and uses an ITU standard 20nm spacing between wavelengths from 1310nm to 1610nm. With CWDM technology, since the wavelengths are relatively far apart compared to DWDM, the optical components are generally relatively cheap.
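The CWDM channel plan described above is simple enough to generate directly. A minimal sketch (note: the 1310-1610nm range with 16 channels follows the text; the full ITU grid actually extends down to 1270nm):

```python
# Generate the 16-channel CWDM grid: 20 nm spacing, 1310 nm to 1610 nm.
def cwdm_grid(start_nm=1310, end_nm=1610, spacing_nm=20):
    return list(range(start_nm, end_nm + 1, spacing_nm))

channels = cwdm_grid()
print(len(channels))              # 16 channels
print(channels[0], channels[-1])  # 1310 1610
```

The wide 20nm spacing is what allows CWDM to use cheap uncooled lasers, since the wavelength can drift with temperature without straying into a neighbouring channel.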

One of the major issues at the time was that Erbium Doped Fibre Amplifiers (EDFAs), as described in optical amplifiers, could not be utilised due to the wavelengths selected or the frequency stability required to be able to de-multiplex the multiplexed signals.

In the late 1990s there was an explosion of development activity aimed at deriving benefit from the concept of Dense Wavelength Division Multiplexing (DWDM) to be able to utilise EDFA amplifiers that operated in the 1550nm window. EDFAs will amplify any number of wavelengths modulated at any data rate as long as they are within the amplifier's amplification bandwidth.

DWDM combines up to 64 wavelengths onto a single fibre and uses an ITU standard that specifies 100GHz or 200GHz spacing between the wavelengths, arranged in several bands around 1500-1600nm. With DWDM technology, the wavelengths are closer together than those used in CWDM, resulting in the multiplexing equipment being more complex and expensive than CWDM equipment. However, DWDM allowed a much higher density of wavelengths and enabled longer distances to be covered through the use of EDFAs. DWDM systems were developed that could deliver tens of Terabits of data over a single fibre using up to 40 or 80 simultaneous wavelengths e.g. Lucent 1998.
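Unlike CWDM, the DWDM grid is defined in frequency rather than wavelength: channels sit at fixed offsets from a 193.1 THz anchor. A small sketch converting grid frequencies to wavelengths via lambda = c / f:

```python
# ITU DWDM grid: channels at 193.1 THz + n * 100 GHz (or 200 GHz).
C = 299_792_458.0  # speed of light, m/s

def dwdm_channel_nm(n, spacing_ghz=100):
    """Wavelength in nm of channel n relative to the 193.1 THz anchor."""
    f_hz = 193.1e12 + n * spacing_ghz * 1e9
    return C / f_hz * 1e9

print(round(dwdm_channel_nm(0), 2))  # 1552.52 -- the anchor channel, in the EDFA window
```

The 100GHz spacing works out to only about 0.8nm between adjacent wavelengths at 1550nm, which is why DWDM lasers need the temperature stabilisation and tight frequency control that make the equipment expensive.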

I wouldn’t claim to be an expert in the subject, but I would expect that in dense urban environments or over longer runs where access is available to the fibre runs, it is considerably cheaper to install additional runs of fibre than to install expensive DWDM systems. An exception to this would be a carrier installing cables across a continent. If dark fibre is available then it’s an even simpler decision.

Although considerable advances were taking place in optical transport with the advent of DWDM systems, existing SONET and SDH standards of the time were limited to working with a single wavelength per fibre and were also limited to working with single optical links in the physical layer. SDH could cope with astounding data rates on a single wavelength, but could not be used with emerging DWDM optical equipment.

Optical Transport Hierarchy

This major deficiency in SDH / SONET led to further standards development initiatives to bring it “up to date”. These are known as the Optical Transport Network (OTN) working in an Optical Transport Hierarchy (OTH) world. OTH follows the same nomenclature as used for PDH and SDH networks.

The ITU-T G.709 standard, Interfaces for the OTN (released between 1999 and 2003), is a standardised set of methods for transporting wavelengths in a DWDM optical network that allows the use of completely optical switches, known as Optical Cross Connects, that do not require expensive optical-electrical-optical conversions. In effect, G.709 provides a service abstraction layer between services such as standard SDH, IP, MPLS or Ethernet and the physical DWDM optical transport layer. This capability is also known as OTN/WDM, in a similar way that the term IP/MPLS is used. Optical signals with bit rates of 2.5, 10, and 40 Gbit/s were standardised in G.709 (G.709 overview presentation) (G.709 tutorial).

The functionality added to SDH in G.709 is:

  • Management of optical channels in the optical domain
  • Forward error correction (FEC) to improve error performance and enable longer optical spans
  • Standard methods for managing end-to-end optical wavelengths
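The FEC added by G.709 is not free: wrapping a client signal in OTN framing plus Reed-Solomon FEC inflates the line rate by a fixed multiplier. As a sketch, the standard multipliers (255/238, 255/237 and 255/236, from my reading of G.709) turn the familiar SDH rates into the OTUk line rates:

```python
# G.709 wraps each SDH client rate in OTN framing plus RS(255,239) FEC,
# so the OTUk line rate is the client rate times a fixed multiplier.
SDH_CLIENT_GBPS = {"OTU1": 2.488320, "OTU2": 9.953280, "OTU3": 39.813120}
MULTIPLIER = {"OTU1": 255 / 238, "OTU2": 255 / 237, "OTU3": 255 / 236}

for otu, client in SDH_CLIENT_GBPS.items():
    line = client * MULTIPLIER[otu]
    print(f"{otu}: {client:.3f} Gbit/s client -> {line:.6f} Gbit/s on the line")
# OTU1 comes out at roughly 2.666 Gbit/s and OTU2 at roughly 10.709 Gbit/s,
# i.e. around 7% of overhead buys the FEC coding gain and longer spans.
```

That ~7% rate penalty is the trade the standard makes for several dB of coding gain, which is what enables the longer optical spans mentioned above.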

Other extensions to bring SDH up to date and make it ‘packet friendly’

Almost in parallel with the development of the G.709 standards, a number of other extensions were made to SDH to make it more packet friendly.

Generic Framing Procedure (GFP): The ITU, ANSI, and IETF have specified standards for transporting various services such as IP, ATM and Ethernet over SONET/SDH networks. GFP is a protocol for encapsulating packets over SONET/SDH networks.
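To give a flavour of GFP, each frame begins with a two-byte payload length indicator (PLI) protected by a two-byte CRC-16 "core HEC". The following is a simplified sketch only, assuming a plain CRC-16 with the generator polynomial x^16 + x^12 + x^5 + 1 and a zero initial value; real GFP additionally scrambles the header and payload:

```python
# Simplified GFP core header: 2-byte payload length + 2-byte cHEC (CRC-16).
def crc16(data: bytes, poly=0x1021, init=0x0000) -> int:
    """Bitwise CRC-16, generator x^16 + x^12 + x^5 + 1 (0x1021)."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc

def gfp_core_header(payload_len: int) -> bytes:
    pli = payload_len.to_bytes(2, "big")
    return pli + crc16(pli).to_bytes(2, "big")

header = gfp_core_header(1500)
print(len(header))  # 4 bytes: PLI followed by its cHEC
```

The CRC over the length field is what lets a receiver delineate frames directly in the byte stream, which is the key trick that makes GFP workable over SDH without an external framing protocol.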

Virtual Concatenation (VCAT): VCAT allows a number of smaller SONET / SDH containers to be combined into a single virtual payload sized to match a data service, so that packet traffic such as Packet over SONET (POS) can be transported more efficiently.
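A worked example shows why VCAT matters. Mapping Gigabit Ethernet into the smallest contiguous container that fits it (a VC-4-16c, roughly 2.4 Gbit/s of payload) wastes more than half the bandwidth, whereas a virtually concatenated group of seven VC-4s is a much closer fit (a sketch using the standard VC-4 payload rate):

```python
# Efficiency of carrying Gigabit Ethernet over SDH with and without VCAT.
VC4_PAYLOAD_MBPS = 149.76  # payload capacity of one VC-4
GBE_MBPS = 1000.0

contiguous = 16 * VC4_PAYLOAD_MBPS  # VC-4-16c, next contiguous size up
vcat = 7 * VC4_PAYLOAD_MBPS        # VC-4-7v virtually concatenated group

print(f"VC-4-16c efficiency: {GBE_MBPS / contiguous:.0%}")  # ~42%
print(f"VC-4-7v  efficiency: {GBE_MBPS / vcat:.0%}")        # ~95%
```

The jump from roughly 42% to roughly 95% utilisation on the same fibre is the whole commercial argument for VCAT.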

Link Capacity Adjustment Scheme (LCAS): When customers’ needs for capacity change, they want the change to occur without any disruption in the service. LCAS, a VCAT control mechanism, provides this capability.

These standards have helped SDH / SONET adapt to an IP or Ethernet packet-based world, a capability that was missing in the original protocol standards of the early 1990s.

Next Generation SDH (NG-SDH)

If a SONET or SDH network is deployed with all the extensions that make it packet friendly, it is commonly called Next Generation SDH (NG-SDH). The diagram below shows the different ages of SDH, concluding in the latest ITU standards work called T-MPLS (I cover T-MPLS in: PBT – PBB-TE or will it be T-MPLS?).

Transport Ages (Picture credit: TPACK)

Multiservice provisioning platform (MSPP)

Another term in widespread use with advanced optical networks is MSPP.

SONET / SDH equipment uses what are known as add / drop multiplexers (ADMs) to insert or extract data from an optical link. Technology improvements enabled ADMs to include cross-connect functionality to manage multiple fibre rings and DWDM in a single chassis. These new devices replaced multiple legacy ADMs and also allowed connections directly from Ethernet LANs to a service provider’s optical backbone. This capability was a real benefit to Metro networks sitting between enterprise LANs and long distance carriers.

There are almost as many variant acronyms in use as there are equipment vendors:

  • Multiservice provisioning platform (MSPP): includes SDH multiplexing, sometimes with add-drop, plus Ethernet ports, sometimes packet multiplexing and switching, sometimes WDM.
  • Multiservice switching platform (MSSP): an MSPP with a large capacity for TDM switching.
  • Multiservice transport node (MSTN): an MSPP with feature-rich packet switching.
  • Multiservice access node (MSAN): an MSPP designed for customer access, largely via copper pairs carrying Digital-Subscriber Line (DSL) services.
  • Optical edge device (OED): an MSSP with no WDM functions.

This has been an interesting post in that it has brought together many of the technologies and protocols discussed in the previous posts, in particular SDH, Ethernet and MPLS, and joined them to optical networks. It seems strange to say on one hand that the main justification for deploying converged Next Generation Networks (NGNs) based on IP is to simplify existing networks and hence reduce costs, but then to consider the complexity and plethora of acronyms and standards associated with doing that!

I think there is only one area that I have not touched upon and that is the IETF’s initiative – Generalised MPLS (GMPLS) or ASON / ASTN, but that is for another day!

Addendum: Azea Networks, upgrading submarine cables

Colo crisis for the UK Internet and IT industry?

March 29, 2007

The UK Internet and IT industries are facing a real crisis that is creeping up on them at a rate of knots (Source: theColocationexchange). In the UK we often believe that we are at the forefront of innovation and the delivery of creative content services such as Web 2.0 based services and IPTV, but this crisis could force many of these services to be delivered from non-UK infrastructure over the next few years.

So, what are we talking about here? It’s the availability of colocation (colo) services and what you need to pay to use them. Colocation is an area where the UK has excelled and has led Europe for a decade, but this could be set to change over the next twelve months.

It’s no secret to anyone that hosts an Internet service that prices have gone through the roof for small companies in the last twelve months, forcing many of the smaller hosters to just shut up shop. The knock-on effects of this will have a tremendous impact on the UK Internet and IT industries as it also impacts large content providers, content distribution companies such as Akamai, telecom companies and core Internet Exchange facilities such as LINX. In other words, pretty much every company involved in delivering Internet services and applications.

We should be worried.

Estimated London rack pricing to 2007 (Source: theColocationexchange)

The core problem is that available co-location space is not just in short supply in London, it is simply disappearing at an alarming and accelerating rate, as shown in the chart below (it is even worse in Dublin). It could easily run out for anyone who does not have deep pockets.

Estimated space availability in London area (Source: theColocationexchange)

What is causing this crisis?

Here are some of the reasons.

London’s ever increasing power as a world financial hub: According to the London web site: “London is the banking centre of the world and Europe’s main business centre. More than 100 of Europe’s 500 largest companies have their headquarters in London, and a quarter of the world’s largest financial companies have their European headquarters in London too. The London foreign exchange market is the largest in the world, with an average daily turnover of $504 billion, more than New York and Tokyo combined.”

This has been a tremendous success for the UK and has driven a phenomenal expansion of financial companies’ needs for data centre hosting, and they have turned to 3rd party colo providers to meet these needs. In particular, the need for disaster recovery has driven them to not only expand their own in-house capabilities but to also place infrastructure in 3rd party facilities. Colo companies have welcomed these prestigious companies with open arms in the face of the telecomms industry meltdown post 2001.

Sarbanes-Oxley compliance: The necessity for any company that operates in the USA to comply with the onerous Sarbanes-Oxley regulations has had a tremendous impact on the need to manage and audit the capture, storage, access, and sharing of company data. In practice, more analysis and more disk storage are needed leading to more colo space requirements.

No new colo build for the last five years: As in the telecommunications world, life was very difficult for colo operators in the five years following the new millennium. Massive investments in the latter half of the 1990s were followed by pretty much zero expansion of the industry, which remained effectively in stasis. One exception to this is IX Europe, who are expanding their facilities around Heathrow. However, builds such as this will not have any great impact on the market overall, even though they will be highly profitable for the companies expanding.

However, in the last 24 months both the telecomms and colo industries have seen a boom in demand and a return to the buoyant market last seen in the late 1990s (Picture credit: theColocationexchange).

Consolidation: In London particularly, there has been a strong trend towards consolidation and roll-up of colo facilities. A prime example of this would be Telecity (backed by private equity finance from 3i Group, Schroders and Prudential), who have bought Redbus and Globix in the last twenty-four months. This consolidation swallowed a number of smaller colo operators that focused on supplying smaller customers. The now larger operators have concentrated on winning the lucrative business of corporate data centre outsourcing, which is seen to be highly profitable with long contract periods.

Facility absorption: In a similar way that many telecommunications companies were sold at very low prices post 2000, the same trend happened in the colo industry. In particular, many of the smaller colos west of London were bought at knock-down valuations by carriers, large third party systems integrators and financial houses. This effectively took that colo space permanently off the market.

Content services: There has been tremendous infrastructure growth in the last eighteen months by the well known media and content organisations. This includes all the well known names such as Google, Yahoo and Microsoft. It also includes companies delivering newer services such as IPTV and content distribution companies such as Akamai. It could be said, with justification, that this growth is only just beginning and that these companies are competing directly with the financial community, enterprises and carriers for what little colo space is left.

Carrier equipment rooms: Most carriers have their own in-house colo facilities to house their own equipment or offer colo services to their customers. Few carriers have invested in expanding these facilities in the last few years, so most are now 100% full, forcing carriers to go to 3rd party colos for expansion.

Instant use: When enterprises buy space today they immediately use it rather than letting it lie fallow.

How has the Colo industry reacted to this ‘Colo 2.0’ spurt of growth?

With demand going through the roof and with a limited amount of space available in London, it is clearly a seller’s market. Users of colo facilities have seen rack prices increase at an alarming rate. For example: Colocation Price Hikes at Redbus.

However, the rack price graph above does not tell the whole story, as the price of power, which used to be a small additional charge or even thrown in for free, has risen by a factor of three or even four in the last twelve months.

Colos used to sell colo space solely on the basis of rack footprints. However, the new currency they use is Amperes, not square feet measured in rack footprints. This is an interesting aspect that is not commonly understood by individuals who have not had to buy space in the last twelve months.

This arises because colo facilities are not only capped in the amount of space they have to place racks, they also have caps on the amount of electricity that a site can take from the local power companies. Also, as a significant percentage of this input power is turned into heat by the hosted equipment, colo facilities have needed to make significant investments in cooling to keep the equipment operating within its temperature specifications. They also need to invest in appropriate back-up generators and batteries to power the site in case of an external power failure.

Colo contracts are now principally based on the amount of current the equipment consumes, not its footprint. If the equipment in a rack only takes up a small space but consumes, say, 8 to 10 Amps, then the rest of the rack has to remain empty unless you are willing to pay for an additional full rack’s worth of power.

If a rack owner sub-lets shelf space in a rack to a number of their customers, each one has to be monitored with individual Ammeters placed on each shelf.

One colo explains this to their customers in a rather clumsy way:

“Price Rises Explained: Why has there been another price change?

By providing additional power to a specific rack/area of the data floor, we are effectively diverting power away from other areas of the facility, thus making that area unusable. In effect, every 8amps of additional power is an additional rack footprint.

The price increase reflects the real cost to the business of the additional power. Even with the increase, the cost per additional 8amps of power is still substantially less, almost half the cost of the standard price for a rack foot print including up to 8amp consumption.”

Another point to bear in mind here is the current that individual servers consume. With the sophisticated power control that is embedded into today’s servers – just like your home PC – there is a tremendous difference in the amount of current a server consumes in its idle state compared to full load. The amount of equipment placed in a rack is limited by the possible full load current consumption even if average consumption is less. In the case of an 8 Amp rack limit, there would also be a hard trip installed by the colo facility that turns the rack off if current reaches say 10 Amps.

If the equipment consists of standard servers or telecom switches this can be accommodated relatively easily, but if a company offers services such as interactive games or an IPTV service and fills a rack with blade (card) servers, this can quite easily consume 15 to 20kW of power or 60 Amps! I’ll leave it to you to undertake the commercial negotiations with your colo but take a big chequebook!
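The arithmetic behind these figures is simply power divided by supply voltage. A quick sketch, assuming a UK 230V single-phase feed (a three-phase or higher-voltage feed would give lower per-phase currents, which is presumably where the 60 Amp figure comes from):

```python
# Colo maths: current drawn by a rack = power / supply voltage.
def rack_current_amps(power_watts, volts=230.0):
    return power_watts / volts

for kw in (2, 15, 20):
    print(f"{kw} kW -> {rack_current_amps(kw * 1000):.0f} A")
# A 2 kW rack draws under 9 A and fits a standard 8-10 A footprint;
# a 15-20 kW blade-server rack needs 65-87 A at 230 V single phase.
```

Seen this way, a fully loaded blade rack is the power-billing equivalent of seven or eight "standard" 8 Amp rack footprints, which is exactly how the colo will charge for it.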

What could be the consequences of the up and coming crisis?

The empty floors seen in the picture are long gone in colo facilities in London.

Virtual hosters caught in a trap: Clearly, if a company does not own its own colo facilities but offers colo based services to its customers, it could prove very difficult and expensive to guarantee access to sufficient space, sorry, Amperage, to meet its customers’ growth needs in an exploding market. As in the semiconductor market, where fabless companies are always the first hit in boom times, those companies that rely on 3rd party colos could have significant challenges facing them in coming months.

No low cost facilities will hit small hosting companies: The issues raised in this post are significant ones even for multinational companies, but for small hosting companies they are killers. Many small hosting companies who supply SME customers have already put up the shutters on their business, as it has proved not to be cost effective to pass these additional costs on to their customers.

Small Web 2.0 service companies and start-ups: The reduction in availability of low-cost colo hosting could have a tremendous impact on small Web 2.0 service development companies, where cash flow is always a problem. Many of these companies used to go to a “sell it cheap” colo, but there are fewer and fewer of them to resort to. Small companies that do go to these lower cost colos can be placing their services in potential jeopardy, as the colo might have only one carrier fibre connection to the facility that could go down, or no power back-up capabilities…

It’s not so easy to access lower cost European facilities: There is space available in some mainland European cities, and at rates considerably lower than those seen in London. However, their use does raise some significant issues:

  • A connection needs to be paid for between the colo centres. If we are talking about multi-Gbit/s bandwidths, this does not come cheap. It also needs to be backed up by at least a second link for resilience.
  • For real-time applications such as games or synchronous backup, the additional transit delay can prove to be a significant issue.
  • Companies will need local personnel to support the facility, and in many European countries this represents an expensive and long-term commitment.

I called this post a Crisis for the UK Internet and IT industry? I hope that the issues outlined do not have a measurable negative impact in the UK, but I’m really not sure that will be the case. Even if there is a rush to build new facilities in 2007, it will take 18 to 24 months for them to come on line. If this trend continues, a lot of application and content service providers will be forced to provide their services from Europe or the USA, with a consequent set of knock-on effects for the UK.

I hope I’m wrong.

Note: I would like to acknowledge a presentation given by Tim Anker of theColocationexchange at the Data Centres Europe conference in March 2007 which provided the inspiration for this post. Tim has been concerned about these issues and has been bringing them to the attention of companies for the last two years.


March 26, 2007

And you thought Ethernet was simple! It seems I am following a little bit of an Ethernet theme at the moment, so I thought that I would have a go at listing all (many?) of the ways Ethernet packets can be moved from one location to another. Personally, I’ve always found this confusing, as there seems to be a plethora of acronyms and standards. I will not cover wireless standards in this post.

Like IP (Internet Protocol, not Intellectual Property!), the characteristics of an Ethernet connection are only as good as the bearer service it is carried over, and thus most of the standards are concerned with that aspect. Of course, IP is most often carried over Ethernet, so the performance characteristics of the Ethernet data path bleed through to IP as well. Aspects such as service resilience and Quality of Service (QoS) are particularly important.

Here are the ways that I have come across to transport Ethernet.

Native Ethernet

Native Ethernet in its original definition runs over twisted-pair, coaxial cable or fibre (even though Metcalfe called their cables The Ether). A core feature called carrier sense multiple access with collision detection (CSMA/CD) enabled multiple computers to share the same transmission medium. Essentially this works by a node resending a packet when it did not arrive at its destination because it was lost in a collision with a packet sent from another node at the same time. This is one of the principal aspects of native Ethernet that is dropped when it is used on a wide area basis, as it is not needed there.
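The retransmission scheme CSMA/CD uses is truncated binary exponential backoff: after the n-th collision a node waits a random number of slot times drawn from 0 to 2^min(n, 10) - 1, and gives up after 16 attempts. A minimal sketch:

```python
# Truncated binary exponential backoff as used by classic CSMA/CD.
import random

def backoff_slots(collision_count):
    """Slot times to wait after the given number of collisions on one frame."""
    if collision_count > 16:
        raise RuntimeError("excessive collisions, frame dropped")
    exponent = min(collision_count, 10)  # window growth is capped at 2^10
    return random.randrange(0, 2 ** exponent)

# After the 3rd collision the wait is somewhere in 0..7 slot times.
print(backoff_slots(3))
```

The doubling window is what lets a busy segment spread retransmissions out over time, and it is exactly this machinery that becomes dead weight on a full-duplex point-to-point wide area link.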

Virtual LANs (VLANs): An additional capability for Ethernet was defined by the IEEE 802.1Q standard to enable multiple Ethernet segments in an enterprise to be bridged or interconnected, sharing the same physical coaxial cable or fibre while keeping each bridged LAN private. VLANs are focused on a single administrative domain where all equipment configurations are planned and managed by a single entity. What is known as Q-in-Q (VLAN stacking) emerged as the de facto technique for preserving customer VLAN settings and providing transparency across a provider network.

IEEE 802.1ad (Provider Bridges) is an amendment to the IEEE 802.1Q-1998 standard that adds the definition of Ethernet frames with multiple VLAN tags.
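On the wire, Q-in-Q stacking is just two four-byte tags in a row: each is a 2-byte TPID plus a 2-byte TCI carrying the priority bits and a 12-bit VLAN ID. A sketch of how the tag pair is laid out, assuming the 802.1ad TPID of 0x88A8 for the outer service tag (pre-standard implementations often reused 0x8100 or 0x9100):

```python
# Build a stacked (Q-in-Q) VLAN tag pair at the byte level.
import struct

def vlan_tag(tpid: int, vid: int, pcp: int = 0) -> bytes:
    """One 802.1Q tag: 2-byte TPID + 2-byte TCI (3-bit PCP, DEI, 12-bit VID)."""
    tci = (pcp << 13) | (vid & 0x0FFF)
    return struct.pack("!HH", tpid, tci)

outer = vlan_tag(0x88A8, vid=500)  # service provider's tag
inner = vlan_tag(0x8100, vid=42)   # customer's original tag, preserved
stacked = outer + inner
print(stacked.hex())  # 88a801f48100002a
```

The provider pushes and pops only the outer tag at its network edge, so the customer's own VLAN numbering (the inner tag) passes through the provider network untouched.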

Ethernet in the First Mile (EFM): In June 2004, the IEEE approved a formal specification developed by its IEEE 802.3ah task force. EFM focuses on standardising a number of aspects that help Ethernet from a network access perspective. In particular, it aims to provide a single global standard enabling complete interoperability of services. The standards activity encompasses: EFM over fibre, EFM over copper, EFM over passive optical network, and Ethernet First Mile Operation, Administration, and Maintenance. Combined with whatever technology a carrier deploys to carry Ethernet over its core network, EFM enables full end-to-end Ethernet wide area services to be offered.

Over Dense Wave Division Multiplex (DWDM) optical networks

10GbE: The 10Gbit/s Ethernet standard was published in 2006 and offers full duplex capability by dropping CSMA/CD. 10GbE can be delivered over a carrier’s DWDM optical network.


Ethernet over Sonet / SDH (EoS): For those carriers that have deployed SONET / SDH networks to support their traditional voice and TDM data services, EoS is a natural service to offer following a keep-it-simple approach, as it does not involve the tunnelling that would be needed using IP/MPLS as the transmission medium. Ethernet frames are encapsulated into SDH Virtual Containers. This technology is often preferred by customers as it does not involve the transmission of Ethernet via encapsulation over a shared IP or MPLS network, which is often perceived as a performance or security risk by enterprises (I always see this as an illogical concern, as ALL public networks are shared at some level).

Link Access Procedure – SDH (LAPS): LAPS, a variant of the original LAP protocol, is an encapsulation scheme for Ethernet over SONET/SDH. LAPS provides a point-to-point connectionless service over SONET/SDH and enables the encapsulation of IP and Ethernet data.

Over IP and MPLS:

Layer 2 Tunnelling Protocol (L2TP). L2TP was originally standardised in 1999, but an updated version, L2TPv3, was published in 2005. L2TP is a Layer 2 data-link protocol that enables data link protocols such as Ethernet, frame relay and ATM to be carried over IP networks alongside PPP. L2TPv3 is essentially a point-to-point tunnelling protocol that is used to interconnect single-domain enterprise sites.

L2TPv3 is also known as a Virtual Private Wire service (VPWS) and is aimed at native IP networks. As it is a pseudowire technology it is grouped with Any Transport over MPLS (AToM).

Layer 2 MPLS VPN (L2VPN): Customers’ networks are separated from each other on a shared MPLS network using the MPLS Label Distribution Protocol (LDP) to set up point-to-point Pseudo Wire Ethernet links. The picture below shows individual customer sites that are relatively near to each other connected by L2TPv3 or L2VPN tunnelling technology based on MPLS Label Switched Paths.

Virtual Private LAN Service (VPLS): A VPLS is a method of providing a fully meshed multipoint wide area Ethernet service using Pseudo Wire tunnelling technology. A VPLS is a Virtual Private Network (VPN) that enables all the LANs on a customer’s premises that are connected to it to communicate with each other. A new carrier that has invested in an MPLS network rather than an SDH / SONET core network would use VPLS to offer Ethernet VPNs to its customers. The picture below shows a VPLS with an LSP link containing multiple MPLS Pseudo Wire tunnels.

MEF: The MEF defines several types of Virtual Private Wire Service (VPWS):

Ethernet Private Line (EPL). An EPL service supports a single Ethernet Virtual Connection (EVC) between two customer sites.

Ethernet Virtual Private Line (EVPL). An EVPL service supports multiple EVCs between two customer sites.

Virtual Private LAN Service (VPLS) or Ethernet LAN (E-LAN) service supports multiple EVCs between multiple customer sites.

These MEF-created service definitions, which are not standards as such (indeed they are independent of standards), enable equipment vendors and service providers to achieve third-party certification for their products.

Looking forward:

100GbE: In 2006, the IEEE’s Higher Speed Study Group (HSSG), tasked with exploring what Ethernet’s next speed might be, voted to pursue 100G Ethernet over other offerings such as 40Gbit/s Ethernet, to be delivered in the 2009/10 time frame. The IEEE will work to standardise 100G Ethernet over distances as far as 6 miles (about 10km) over single-mode fibre and 328 feet (100m) over multimode fibre.

PBT or PBB-TE: PBT is a group of enhancements to Ethernet that are defined in the IEEE’s Provider Backbone Bridging Traffic Engineering (PBBTE) group. I’ve covered this in Ethernet goes carrier grade with PBT / PBB-TE?

T-MPLS: T-MPLS is a recent derivative of MPLS – I have covered this in PBB-TE / PBT or will it be T-MPLS?

Well, I hope I’ve covered most of the Ethernet wide area transmission standards activities here. If I haven’t, I’ll add others as addendums. At least they are all on one page!

Islands of communication or isolation?

March 23, 2007

One of the fundamental tenets of the communication industry is that you need 100% compatibility between devices and services if you want to communicate. This was clearly understood when the Public Switched Telephone Network (PSTN) was dominated by local monopolies in the form of incumbent telcos. Together with the ITU, they put considerable effort into standardising all the commercial and technical aspects of running a national voice telco.

For example, the commercial settlement standards enabled telcos to share the revenue from each and every call that made use of their fixed or wireless infrastructure no matter whether the call originated, terminated or transited their geography. Technical standards included everything from compression through to transmission standards such as Synchronous Digital Hierarchy (SDH) and the basis of European mobile telephony, GSM. The IETF’s standardisation of the Internet has brought a vast portion of the world’s population on line and transformed our personal and business lives.

However, standardisation in this new century is now often driven as much by commercial businesses and business consortiums which often leads to competing solutions and standards slugging it out in the market place (e.g. PBB-TE and T-MPLS). I guess this is as it should be if you believe in free trade and enterprise. But, as mere individuals in this world of giants, these issues can cause us users real pain.

In particular, the current plethora of what I term islands of isolation means that we are often unable to communicate in the ways that we wish to. In the ideal world, as exemplified by the PSTN, you are able to talk to every person in the world who owns a phone as long as you know their number. Whereas many, if not most, new media communications services we choose to use to interact with friends and colleagues are in effect closed communities that are unable to interconnect.

What are the causes of these so-called islands of isolation? Here are a few examples.

Communities: There are many Internet communities including free PC-to-PC VoIP services, instant messaging services, social or business networking services or even virtual worlds. Most of these focus on building up their own 100% isolated communities. Of course, if one achieves global domination, then that becomes the de facto standard by default. But, of course, that is the objective of every Internet social network start-up!

Enterprise software: Most purveyors of proprietary enterprise software thrive on developing products that are incompatible. The Lotus Notes and Outlook email systems were but one example. This is often still the case today when vendors bolt advanced features onto the basic product that are not available to anyone not using that software – presence springs to mind. This creates vendor communities of users.

Private networks: Most enterprises are rightly concerned about security and build strong protective firewalls around their employees to protect themselves from malicious activities. This means that employees of that company have full access to their own services, but these are not available to anyone outside the firewall for use on an inter-company basis. Combine this with the deployment of the vendor-specific enterprise software described above and you create lots of isolated enterprise communities!

Fixed network operators: It’s a very competitive world out there and telcos just love offering value-added features and services that are only offered to their customer base. Free proprietary PC-PC calls come to mind and more recently, video telephones.

Mobile operators: A classic example with wireless operators was the unwillingness to provide open Internet access and only provide what was euphemistically called ‘walled garden’ services – which are effectively closed communities.

Service incompatibilities: A perfect example of this was MMS, the supposed upgrade to SMS. Although there was a multitude of issues behind the failure of MMS, the inability to send an MMS to a friend who used another mobile network was one of the principal ones. Although this was belatedly corrected, it came too late to help.

Closed garden mentality: This idea is alive and well amongst mobile operators striving to survive. They believe that only offering approved services to their users is in their best interests. Well, no it isn’t!

Equipment vendors: Whenever a standards body defines a basic standard, equipment vendors nearly always enhance the standard feature set with ‘rich’ extensions. Of course, anyone using an extension could not work with someone who was not! The word ‘rich’ covers a multiplicity of sins.

Competitive standards: User groups who adopt different standards become isolated from each other – the consumer and music worlds are riven by such issues.

Privacy: This is seen as such an important issue these days that many companies will not provide phone numbers or even email addresses to a caller. If you don’t know who you want, they won’t tell you! A perfect definition of a closed community!

Proprietary development: In the absence of standards, companies will develop pre-standard technologies and slug it out in the market. Other companies couldn’t care less about standards and follow a proprietary path just because they can and have the monopolistic muscle to do so. I bet you can name one or two of those!

One takeaway from all this is that in the real world you can’t avoid islands of isolation: all of us have to use multiple services and technologies to interact with colleagues, and these will probably remain islands of isolation for the indefinite future in the competitive world we live in.

Your friends, family and work colleagues, by their own choice, geography and lifestyle, probably use a completely different set of services to yourself. You may use MSN, while colleagues use AOL or Yahoo Messenger. You may choose Skype but another colleague may use BT Softphone.

There are partial attempts at solving these issues with a subset of islands, but overall this remains a major conundrum that limits our ability to communicate at any time, any place and anywhere. The cynic in me says that if you hear about any product or initiative that relies on these islands of isolation disappearing to succeed, you should run a mile – no, ten miles! On the other hand, it could be seen as the land of opportunity?

Video history of Ethernet by Bob Metcalfe, Inventor of Ethernet

March 22, 2007

The History of Ethernet

The Evolution of Ethernet to a Carrier Class Technology

The story of Ethernet goes back some 32 years to May 22, 1973. We had early Internet access. We wanted all of our PC’s to be able to access the Internet at high speed. So we came up with the Ethernet…

PBB-TE (PBBTE) / PBT or will it be T-MPLS (MPLS-TP)?

March 22, 2007

Welcome to more acronym hell. In Ethernet goes carrier grade with PBT? I looked at the history of Ethernet and its increasing use in wide area networks. A key aspect of that was for standardisation bodies to provide the additional capabilities to make Ethernet ‘carrier grade’ by creating a connection-oriented Ethernet with improved scalability, reliability and simplified management. (Picture credit: Alcatel) As is the wont of the network industry, not only are commercial network equipment manufacturers extremely competitive (through necessity) but the standards bodies are as well. So we have not only the IEEE’s Provider Backbone Bridging Traffic Engineering (PBB-TE) but also the ITU‘s Transport-MPLS (T-MPLS) activities competing in a similar space. An ITU technical overview presentation can be found here.

T-MPLS is a recent derivative of MPLS and is being defined in cooperation with the IETF. One way of looking at this is through the now confusing term ‘layers’. IP clearly operates at layer-3, while MPLS itself has been said to operate at layer-2.5 as it does not operate at the layer-2 transport level. T-MPLS, in contrast, has been specifically designed to operate at the layer-2 transport level, an area on which the ITU focuses.

I guess the simplest way of looking at this is that T-MPLS is a stripped-down MPLS standard from which the components concerned with MPLS’ support of connectionless IP services have been removed as irrelevant. T-MPLS is also based on the extensive standardisation work already undertaken and implemented in SDH / SONET, thus representing a marriage of MPLS and SDH. The main components of T-MPLS were approved as recently as November 2006.

The motivation behind T-MPLS is to provide a compatible transport network that is able to appropriately support the needs of a fully converged NGN IP network while at the same time supporting the ongoing technology enhancements taking place in the optical layer. Because MPLS has gained such popularity in the last few years, it only seems natural to enhance something that is accepted, understood and, more importantly, now deployed by most carriers.

So what is T-MPLS?

T-MPLS operates at the layer-2 data plane level, i.e. underneath MPLS or IP/MPLS. It borrows many of the characteristics and capabilities of the IETF’s MPLS but focuses on the additional aspects that address the need for any transport layer to provide what is known as high availability, i.e. greater than 99.999%. Some of the additions are in the following areas:

  • Clear management and control of bandwidth allocation using MPLS’s Label Switched Paths (LSPs)
  • Improved control of the transport layer’s operational state through SDH-like OA&M (operations, administration and maintenance)
  • Improved and new network survivability mechanisms, such as the protection and restoration seen in SDH

Another key aspect, as seen in PBB-TE, is the complete separation of the control and data planes, creating full flexibility for the network management and signalling that will take place in the control plane. This signalling is known as generalised multi-protocol label switching (GMPLS) (also known as the automatic switched transport network [ASTN]) and provides the same capabilities as seen in the tools used today to manage optical networks.

T-MPLS has been designed to run over an optical transport hierarchy (OTH) or an SDH / SONET network. I assume the OTH terminology has been adopted retrospectively to make it compatible with SDH, in the same way Plesiochronous Digital Hierarchy (PDH) was coined to cover pre-SDH technology 15 years ago. The reason for SDH / SONET support is clear, as these transport technologies are not about to go away after the significant investments made by carriers over the last 15 years. We should also not forget that the mobile world is still mainly a time division multiplexed (TDM) world, and SDH / SONET provides a bridge for carriers operating in both market areas or interconnecting with mobile operators. (Picture credit: Alcatel)

Although T-MPLS has been defined as a generic transport standard, its early focus is clearly centred on Ethernet bringing it into potential competition with PBT / PBB-TE. It will be most interesting to see how this competition pans out.

What are differences between T-MPLS and MPLS?

Although T-MPLS is a subset of MPLS, there are several enhancements. The principal one of these is T-MPLS’ ability to support bi-directional LSPs. MPLS LSPs are unidirectional, and both the forward and backward paths between nodes need to be explicitly defined. Conventional transport paths are bi-directional.
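To make the distinction concrete, here is a toy sketch – purely illustrative, and not any real control-plane API – of the provisioning difference described above: in MPLS a two-way connection between nodes A and Z requires two independently defined LSPs (whose paths may even diverge), whereas a T-MPLS bidirectional LSP pairs co-routed forward and reverse paths in a single provisioning action.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class UnidirectionalLSP:
    """Classic MPLS: one label switched path, one direction."""
    ingress: str
    egress: str
    hops: List[str]

@dataclass
class BidirectionalLSP:
    """T-MPLS style: forward and reverse paths are co-routed and managed as one."""
    a_end: str
    z_end: str
    hops: List[str]

    @property
    def forward(self) -> UnidirectionalLSP:
        return UnidirectionalLSP(self.a_end, self.z_end, self.hops)

    @property
    def reverse(self) -> UnidirectionalLSP:
        # The reverse direction is derived from the same hop list,
        # so both directions are guaranteed to follow the same route.
        return UnidirectionalLSP(self.z_end, self.a_end, list(reversed(self.hops)))

# Classic MPLS: two separate provisioning actions; the paths may diverge.
fwd = UnidirectionalLSP("A", "Z", ["A", "B", "C", "Z"])
rev = UnidirectionalLSP("Z", "A", ["Z", "D", "A"])

# T-MPLS: one provisioning action yields a co-routed bidirectional path.
bi = BidirectionalLSP("A", "Z", ["A", "B", "C", "Z"])
print(bi.forward.hops)   # ['A', 'B', 'C', 'Z']
print(bi.reverse.hops)   # ['Z', 'C', 'B', 'A']
```

The point of the sketch is simply that the bidirectional object cannot get its two directions out of step, which is exactly the property a transport network wants.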

The following MPLS features have been dropped in T-MPLS:

  • Equal Cost Multiple Path (ECMP): This allows traffic to follow two paths that have the same ‘cost’ and is not needed in a connection-oriented optical world. It is a problem in MPLS as well, as it introduces an element of non-predictability that affects traffic engineering activities.
  • LSP merging option: This is a real problem in MPLS anyway, as traffic can be merged from multiple LSPs into a single one, losing source information. Point-to-multipoint (P2MP) services are particularly badly affected.
  • Penultimate Hop Popping (PHP): Labels are removed one node before the egress node to reduce the processing power required on the egress node. This is a legacy feature born of underpowered routers and is not of concern today.
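The PHP behaviour in the last bullet can be sketched as follows – a toy model only, with an invented function name, not real forwarding-plane code. With PHP enabled, the penultimate node pops the label, so the egress receives an unlabelled packet; without PHP (the T-MPLS choice) the label is carried all the way to the egress, keeping the transport path fully visible end to end.

```python
def labels_on_arrival(path, php=True):
    """For each node, does the packet arrive carrying an MPLS label?
    The ingress pushes the label, so the first node is excluded."""
    result = {}
    for i, node in enumerate(path[1:], start=1):
        # With PHP, the penultimate hop pops the label before forwarding,
        # so the last node (the egress) receives the packet unlabelled.
        result[node] = not (php and i == len(path) - 1)
    return result

hops = ["ingress", "p1", "p2", "egress"]
print(labels_on_arrival(hops, php=True))   # egress arrives unlabelled
print(labels_on_arrival(hops, php=False))  # label carried to the egress
```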

There are many other issues that need to be resolved before T-MPLS can become a robust standard set that is ready for wide scale deployment:

  • Interoperability: Interoperability between the MPLS and T-MPLS control planes. There are lots of issues in this space, and most activities are at an early stage.
  • Application interface: One of the main reasons ATM failed was that the majority of applications needed to be adapted to utilise ATM, and it just did not happen. Although the problem with T-MPLS is limited to management tool APIs and interfaces, a lot of software companies will need to undertake a lot of work to support T-MPLS. This will be quite a challenge!

T-MPLS’ vision is similar to that of PBB-TE and encompasses high scalability, reduced OPEX, support for any packet service, strong security, high availability, high QoS, simple management and high resiliency.

The drive to T-MPLS has been driven not only by the need to upgrade optical network management but also by the realisation that traditional MPLS-based networks have inherited IP’s characteristics of being expensive to manage from an OPEX perspective and very difficult to manage at large scale.

This has made most carriers rather jittery. On one hand they need to follow the industry gestalt of everything-over-IP, based on the assumption that it will all be cheaper one day as well as enabling them to provide the multiplicity of services wanted by their customers. On the other hand, the principal technology being used to deliver this vision, MPLS, is turning out to be more expensive to manage than the legacy networks it is replacing. Quite a conundrum, I think, and one of the principal factors driving the interest in KISS Ethernet services.

The diagram below shows the ages of transport ‘culminating’ in T-MPLS!

Transport Ages (Picture credit: TPACK)

A good overview of T-MPLS can be read courtesy of TPACK.
A side by side comparison of PBB-TE and T-MPLS from Meriton
Addendum: One of the principal industry groups promoting and supporting carrier grade Ethernet is the Metro Ethernet Forum (MEF), and in 2006 they introduced their official certification programme. The certification is currently only available to MEF members – both equipment manufacturers and carriers – to certify that their products comply with the MEF’s carrier Ethernet technical specifications. There are two levels of certification:

MEF 9 is a service-oriented test specification that tests conformance of Ethernet services at the UNI interconnect where the Subscriber and Service Provider networks meet. This represents a good safeguard for customers that the Ethernet service they are going to buy will work! Presentation or high bandwidth stream overview

MEF 14 is a new level of certification that looks at hard QoS, a very important aspect of service delivery not covered in MEF 9. MEF 14 certifies hard QoS backed by Service Level Specifications, enabling carriers to offer hard QoS guarantees on Carrier Ethernet business services to their corporate customers and on triple-play data/voice/video services. Presentation.

Addendum #1: Ethernet-over-everything – what’s everything?

Addendum #2: Enabling PBB-TE – MPLS seamless services

GSM pico-cell’s moment of fame

March 21, 2007

Back in May 2006, the DECT – GSM guard bands, 1781.7-1785 MHz and 1876.7-1880 MHz, originally set up to protect cordless phones from interference by GSM mobiles, were made available to a number of licensees. As is the fashion, these allocations were offered to industry by holding an auction with a reserve price of £50,000 per license. In fact, it was Ofcom’s first auction, and I guess they were happy with the results, although I would not have liked to have been in charge of Colt’s bidding team when it came to reporting to their Board after the auction!

For those of a technical bent, you can see Ofcom’s technical study here in a document entitled Interference scenarios, coordination between licensees and power limits, and there is a good overview of cellular networks on Wikipedia too.

The most important restriction on the use of this spectrum was outlined by the study:

This analysis confirms that a low power system based on GSM pico cells operating at the 23dBm power (200mW) level can provide coverage in an example multi-storey office scenario. Two pico cells per floor would meet the coverage requirements in the example 50m × 120m office building. For a population of 300 people per floor, the two pico cells would also meet the traffic demand.

It was all done using sealed bids so there was inevitably a wide spectrum (sorry for the pun!) of responses ranging from just over £50,000 to the highest, Colt, who bid £1,513,218. The 12 companies winning licenses were:

British Telecommunications £275,112
Cable & Wireless £51,002
COLT Mobile Telecommunications £1,513,218
Cyberpress Ltd £151,999
FMS Solutions Ltd £113,000
Mapesbury Communications £76,660
O2 £209,888
Opal Telecom £155,555
PLDT £88,889
Shyam Telecom UK £101,011
Spring Mobil £50,110
Teleware £1,001,880

One company that focuses on the supply of GSM and 3G picocells is IP Access based in Cambridge. Their nanoBTS base station can be deployed in buildings, shopping centres, transport terminals, at home, underground stations, rural and remote deployments; in fact, almost anywhere – according to their web site.

Many of the bigger suppliers of GSM and 3G equipment manufacture pico cell platforms as well, Nortel for example.

Of course, even though a pico cell base station (BTS) is lower cost than a standard base station, that is not the end of the costs. A pico cell operator still needs to install a base station controller (BSC), which can control a number of base stations, plus a home location register (HLR), which stores the current state of each mobile phone in a database. If the network needs to support roaming customers, a visitor location register (VLR) is also required. On top of this, interconnect equipment to other mobile operators is required. All this does not come cheap!
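The relationship between these elements can be sketched with a toy model – class names and methods are mine for illustration, not any vendor’s API: pico BTSs hang off a BSC, the HLR stores each home subscriber’s record, and the VLR tracks visiting (roaming) subscribers.

```python
class HLR:
    """Home location register: the authoritative record for home subscribers."""
    def __init__(self):
        self.subscribers = {}          # IMSI -> current serving cell
    def update_location(self, imsi, cell):
        self.subscribers[imsi] = cell

class VLR:
    """Visitor location register: roaming subscribers currently in our network."""
    def __init__(self):
        self.visitors = set()
    def register_roamer(self, imsi):
        self.visitors.add(imsi)

class BSC:
    """Base station controller: manages a number of pico BTSs and routes
    registrations to the HLR (home users) or VLR (roamers)."""
    def __init__(self, hlr, vlr):
        self.hlr, self.vlr = hlr, vlr
        self.cells = []                # attached pico BTS identifiers
    def attach(self, imsi, cell, home=True):
        if home:
            self.hlr.update_location(imsi, cell)
        else:
            self.vlr.register_roamer(imsi)

hlr, vlr = HLR(), VLR()
bsc = BSC(hlr, vlr)
bsc.attach("imsi-home-1", "pico-cell-1", home=True)
bsc.attach("imsi-roamer-1", "pico-cell-1", home=False)
print(hlr.subscribers)   # home subscriber tracked with its serving cell
print(vlr.visitors)      # roamer tracked separately
```

Even in this toy form it is clear why the BTS is only part of the bill: every one of these boxes (plus the interconnect) has to be bought and operated.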

Pico GSM cells are low power versions of their big brothers and are usually associated with a lower-cost backhaul technology based on IP in place of traditional point to point microwave links. GSM Pico cell technology can be used in a number of application scenarios.

In-building use as the basis of ‘seamless’ fixed-mobile voice services. The use of mobile phones as a replacement to fixed telephones has always been a key ambition for mobile operators. But, as we all know, in-building coverage by cellular operators is often not too good leading to the necessity of taking calls near windows or on balconies. The installation of an in-building pico-cell is one way of providing this coverage and comes under the heading of fixed mobile integration. One challenge in this scenario is the possible need to manually swap SIM cards when entering or exiting the building if a different operator is used inside the building to that outside. Of course, nobody would be willing to do this physically so a whole industry has been born to support dual SIM cards which can be selected from a menu option.

From a usability and interoperability perspective, fixed-mobile integration still represents a major industry challenge. Not the least of the problems is that a swap from one operator to another could trigger roaming charges. This is probably an application area in which only the bigger license winners will participate.

On ships using satellite backhaul: This has always been an obvious application for pico cells, especially for cruise ships.

On aeroplanes: In-cabin use of mobile phones is much more contentious than use on ships and I could write a complete post on this particular subject! But, I guess this is inevitable no matter how irritating it would be to fellow passengers – no bias here! e.g. OnAir with their agreement with Airbus.

Overseas network extensions: I was interested in finding out how some of the winners of the OFCOM auction were getting on now that they held some prime spectrum in the UK so I talked with Magnus Kelly, MD at Mapesbury (MCom). I’m sure they were happy with what they paid as they were at the ‘right end’ of the price spectrum.

Mapesbury are a relatively small service provider set up in 2002 to offer data, voice and wireless connectivity services to local communities. In 2003, MCom acquired the assets of Forecourt Television Limited, also known as FTV. FTV had a network of advertising screens on a selection of petrol station forecourts, among them Texaco. This was when I first met Magnus. Using this infrastructure, they later signed a contract with T-Mobile UK to offer a Wi-Fi service in selected Texaco service stations across the UK.

More recently, they opened their first IP.District, providing complete Wi-Fi coverage over Watford, Herts using 5.8GHz spectrum. MCom has had pilot users testing the service for the last 12 months.

Magnus was quite ebullient about how things were going on the pico-cell front although there were a few sighs when talking about organising and negotiating the allocation of the required number blocks and point codes necessitated by the trials that they have been running.

He emphasised that the technology seemed to work well and that the issues they now had were the same as for any company in the circumstances: creating a business that makes money. They have looked at a number of applications and decided that fixed-mobile integration is probably best left to the major mobile operators.

They are enamoured of the opportunities presented by what are called overseas network extensions. In essence, this means creating what can be envisioned as ‘bubble’ extensions to non-UK mobile networks in the UK. The traffic generated in these extensions can then be backhauled using low cost IP pipes. The core value proposition is low-cost mobile telephone calls aimed at dense local clusters of people using mobile phones. For example, these could be clusters of UK immigrants who would like the ability to make low-cost calls from their mobile phones back to their home countries. Clearly in these circumstances, these pico-cell GSM bubbles would be focused on selected city suburbs, as they follow the same subscriber density logic that drives Wi-Fi cell deployment.

In the large mobile operator camp, O2 announced in November 2006 that they will offer indoor, low-power GSM base stations connected via broadband as part of its fixed-mobile convergence (FMC) strategy, an approach that will let customers use standard mobile phones in the office. I’ll be writing a post about FMC in the future.

Although it is early days for GSM pico cell deployment in the UK, it looks like it could have a healthy future, although this should not be taken for granted. There are a host of technical, commercial, regulatory and political challenges to seamless use of phones inside and outside of buildings. There are also other technology solutions – IT based rather than wireless – for reducing mobile phone bills. An example of such an innovative approach is supplied by OnRelay.

The magic of ‘presence’

March 20, 2007

Presence has been one of the in-words of the telecoms and web industry for the last few years. It sits alongside location-based services as a capability that is still to “realise its full potential”. A Wikipedia overview of presence can be found here.

I have used Google Alerts for the last couple of years to track market activity of technologies and companies that are of interest to me and presence has been one of the key words that I have looked at. These are typical of the results I see pouring into my in-tray each week:

As you can see, I haven’t seen too many announcements about presence from a telecommunications perspective!

Presence is a very broad church and, like the word ‘platform’, everyone uses it with their own interpretation. Wikipedia defines presence thus: “A user client may publish a presence state to indicate its current communication status.” Let’s go through some examples of presence as it is used today.

One of the simplest examples of presence has been around for many years and can be seen in email clients like Outlook in the Out of Office auto-reply message. I say simple, but in typical Microsoft fashion it can be quite complicated to set up if you are not using an Exchange server. That aside, by using this facility you can indicate to anyone who sends you an email that you are away from the office for a time. You can use this message just to say that you are not around, or it could be helpful by providing an alternative contact.

The Out of Office feature brings us straight away to the fact that using it can create problems! For example, I am a member of many newsgroups, and Out of Office auto-responses do create problems, as everyone in the newsgroup receives them following each and every post. If the group is very active, they can really build up and rapidly become an irritant.

Another simple use of presence information can be found on web sites and in email signatures. The one shown on the left is in an email from VerticalResponse, where they show information about whether their support organisation is open. This is an application of presence showing the availability of the support team.

This shows an interesting aspect of presence. Someone may be present, but do they wish to be available? These are different concepts and need to be considered separately. Rolling them together can create all sorts of problems, as we will see later.

The VerticalResponse signature shown above includes a live presence element. When Live Chat is available it is shown in green, and I assume when it is not available it is shown as Not Available in red (I haven’t seen this). One company that helps provide this type of capability is Contact at Once. According to their web site, they use their presence engine to “continually monitor the availability of advertiser sales representatives across multiple devices and aggregates the availability status of each representative into an advertiser-level availability or “presence.”

Another company that provides a presence engine is Jabber: “By integrating presence—i.e., information about the availability of entities, end-points, and content for communication over a network—into applications, devices, and systems you can streamline processes and increase the velocity of information within your organization. Discover the latest best practices organizations are implementing to take advantage of the benefits of adding presence to business processes.”

Although they offer the Instant Messaging services described below, they focus on integrating presence information into enterprise process flows to increase the efficiency of business processes. The availability flag is an example of this, as is the automatic routing of internal calls to an available expert or an available person in a company’s call centre.

One of the most common applications of presence is in Instant Messaging (IM) services, where you are able to set your status to On-line or Away. The picture on the right shows the status options in Microsoft Messenger.

On the left below is Yahoo Messenger. It is interesting to note the addition of the Invisible to Everyone option. Why is this supplied I wonder? Most instant messaging services and some PC-to-PC VoIP services provide an option to set your availability status. Many users have found that this capability can create real problems.

When you boot your PC, your status is set to On-line automatically. Annoyingly, this often leads to several of your buddies saying “Hi” or work colleagues asking you questions immediately!

Away status. You can choose to have an Away status set automatically after, say, 5 minutes. The IM or VoIP service detects that you are away because you have not touched your keyboard or mouse. As soon as you return to your PC and touch the keyboard, you are placed On-line again and open to immediate interruption just as you start working. This is the exact opposite of what you want!

Multiple services. When you use several services such as IM, VoIP or a calendar, it is highly unlikely that you will set your status on every application before you leave your desk, so they do not reliably reflect your real status.

These problems can become so severe that the only solution for many is to opt to appear Off-line permanently, destroying any benefit of sharing status information.
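The auto-away behaviour described above can be captured in a few lines – a minimal sketch with invented names, not any real IM client’s code. Note how the very first keystroke after an idle period flips you back to Online, which is precisely the moment you are least ready to be interrupted.

```python
AWAY_AFTER = 5 * 60  # seconds of inactivity before auto-Away (assumed default)

class PresenceClient:
    """Toy model of an IM client's idle-timeout presence logic."""
    def __init__(self):
        self.status = "Online"
        self.last_input = 0.0

    def on_input(self, now):
        # Any keyboard or mouse activity makes you interruptible again.
        self.last_input = now
        self.status = "Online"

    def tick(self, now):
        # Periodic check: no input for AWAY_AFTER seconds means Away.
        if now - self.last_input >= AWAY_AFTER:
            self.status = "Away"

c = PresenceClient()
c.on_input(0)
c.tick(301)          # 5+ minutes with no input
print(c.status)      # Away
c.on_input(302)      # you sit back down and touch the keyboard...
print(c.status)      # Online - and immediately open to interruption
```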

One of the problems with presence being built into many online applications is that you need to set your status on every single one of them every time it changes, or you will not see any benefit. There are quite a few companies who aggregate presence information. One such company is PRESENCEWORKS, which enables you to integrate presence information from instant messaging into your pre-existing business software, as shown below.
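What a presence aggregator does can be sketched in miniature – the service names and the most-available-wins precedence rule here are my assumptions for illustration, not how any particular vendor actually combines statuses.

```python
# Rank statuses so that the most available one wins the aggregation.
PRECEDENCE = {"Online": 2, "Away": 1, "Offline": 0}

def aggregate(statuses):
    """Combine per-service statuses into one answer: if any service
    reports Online, report Online; otherwise the best of the rest."""
    if not statuses:
        return "Offline"
    return max(statuses.values(), key=lambda s: PRECEDENCE[s])

print(aggregate({"MSN": "Away", "Skype": "Online", "Yahoo": "Offline"}))  # Online
```

A real aggregator would also have to decide what to do with conflicting or stale statuses, which is exactly the hard part the previous paragraph hints at.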

In my post Would u like to collaborate with YuuGuu? I showed their initial presence option. In its current simple form, this is of limited use because of the difference between presence and availability. For example, when I contacted YuuGuu, Philip was shown as being available, but it was half an hour later when he came back to me saying he had been on another call – thus demonstrating that he was not available.

Many new social network services claim to use presence as a component of their service. This is so common it could be said to be ubiquitous. A good example of this is NeuStar, who provide “next generation messaging” technology to service providers. To quote their web site: “Presence services enable people within a community to keep connected anytime, any place. When they indicate their availability or see that their contacts are on-line, presence is a catalyst for interactive services for those users who demand an enriched communications environment.”

This capability is similar to that seen in IM services. If your mobile phone is on, you are deemed to be available unless you manually set your status to be unavailable.

Another provider of presence technology is iotum who produce a Relevance Engine™ that can be used as the basis of a number of services.

One of the core applications is call handling: When someone calls you, iotum’s Relevance Engine instantly identifies the caller and cross-references their identity with your address book. Within a fraction of a second, iotum understands the relationship you have with this person.

Just as quickly, iotum accesses your IM presence status and/or online calendar program to determine what you are doing at that moment. It determines which of your communications devices should receive the call and helps to ensure your phone will only ring if you want it to, based on your schedule, your defined preferences and your past behaviour.
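The call-handling decision described in the two paragraphs above can be sketched as a simple rule – an entirely hypothetical rendering of the idea, with invented names and rules, not iotum’s actual Relevance Engine: cross-reference the caller against your address book, then combine that relationship with your current presence and calendar state to decide whether the phone should ring.

```python
def should_ring(caller, address_book, status, in_meeting):
    """Decide whether to ring based on relationship and current state.
    address_book maps caller -> relationship ('vip', 'colleague', ...)."""
    relationship = address_book.get(caller, "stranger")
    if relationship == "vip":
        return True                  # VIPs always get through
    if in_meeting or status == "Away":
        return False                 # defer everyone else while busy or away
    return relationship != "stranger"  # when free, known contacts get through

book = {"alice": "vip", "bob": "colleague"}
print(should_ring("alice", book, "Away", True))    # True  - VIP overrides
print(should_ring("bob", book, "Online", False))   # True  - known and free
print(should_ring("eve", book, "Online", False))   # False - unknown caller
```

The real engine layers in past behaviour and device selection on top of this, but the core is the same: relationship plus state in, ring-or-not out.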

iotum has also launched a consumer presence service aimed at Blackberry users – Talk Now.

A recent mobile social networking service that uses presence is jaiku from Finland. Jaiku uses what it terms Rich Presence, which is about texting presence updates to your community from any phone.

Note: The above is an old picture and shows how information on your mobile phone can be used to interpret your presence or availability.

These free-form messages can show availability and they can show location but will often just show an irrelevant message commenting on something that is currently happening to the text sender – just like SMS messages really!

Which brings us to Twitter. Twitter is similar in many ways to Jaiku, in that it is a “global community of friends and strangers answering one simple question: What are you doing? Answer on your phone, IM, or right here on the web!”

Sam Sethi at Vecosys recently posted a good update on Twitter and its ecosystem – The Twitterfication of the blogosphere. To me, the best description of Twitter is a microblog – that says it all.

One of the big challenges of today’s on-line world is information overload, and although there can be useful presence and availability information in Twitter updates if users control what they post, I’m sure that this is often not the case. Twitter is useful and fun in a social context, but because of the baggage that goes along with it, I suspect it will be of limited use in focused business applications today.

I do not use Twitter at the moment so I guess my last post shown above will remain as my current status forever?

This is but a brief overview of the world of presence and I have missed out many areas of interest. I will try and talk about these in future posts. Like Location Based Services, Presence is full of intrigue and promise. We shall see what happens in coming years.

Addendum: I was planning to mention another presence aggregator that was started up by Jeff Pulver in 2006 – Tello. But, according to Goodbye, Tello, they are no more as of a few days ago. Tello took a technology platform integration approach to providing presence information, an approach which I believe to be flawed for any start-up – even one with deep pockets.

Addendum: A good post about new presence

webex + Cisco thoughts

March 19, 2007

I first read about the Cisco acquisition of Webex on Friday when a colleague sent me a post – It’s more than we wanted to spend, but look how well it fits. It’s synchronicity in operation again, of course, because I mentioned webex in a post about a new application sharing company: Would u like to collaborate with YuuGuu? There are many other postings about this deal with a variety of views – some more relevant than others – Techcrunch for example: Cisco Buys WebEx for $3.2 Billion

Although I am pretty familiar with the acquisition history of Cisco, I must admit that I was surprised at this opening of the chequebook, for several reasons.

I used webex quite a lot last year and really found it quite a challenge to use. My biggest area of concern was usability.

(a) When using webex, there are several windows open on your desktop, making its use quite confusing. At least once I closed the wrong window, thus accidentally closing the conference. As I was just concluding a pitch, I was more than unhappy, as it closed both the video and the audio components of the conference! I had broken my golden rule of not using separate audio bridging and application sharing services.

(b) When using webex’s conventional audio bridge, you have to open the conference using a webex web site page beforehand. If you fail to do so, the bridge cannot be opened, and everyone receives an error message when they dial in. Correcting this takes about 5 minutes. Even worse, you cannot use the audio bridge on a standalone basis without having access to a PC! Not good when travelling.

(c) The UI is over-complicated and challenging for users under the pressure of giving a presentation. Even the invite email that webex sends out is confusing – the one below is typical. Although the example is the one sent to the organiser, the ones sent to participants are little better.

Hello Chris Gare,
You have successfully scheduled the following meeting:
TOPIC: zzzz call
DATE: Wednesday, May 17, 2006
TIME: 10:15 am, Greenwich Standard Time (GMT -00:00, Casablanca ) .
MEETING NUMBER: 705 xxx xxx
HOST KEY: yyyy
TELECONFERENCE: Call-in toll-free number (US/Canada): 866-xxx-xxxx
Call-in number (US/Canada): 650-429-3300
Global call-in numbers:
1. Please click the following link to view, edit, or start your meeting.
Here’s what to do:
1. At the meeting’s starting time, either click the following link or copy and paste it into your Web browser:
2. Enter your name, your email address, and the meeting password (if required), and then click Join.
3. If the meeting includes a teleconference, follow the instructions that automatically appear on your screen.
That’s it! You’re in the web meeting!
WebEx will automatically setup Meeting Manager for Windows the first time you join a meeting. To save time, you can setup prior to the meeting by clicking this link:
For Help or Support:
Go to, click Assistance, then Click Help or click Support.
………………..end copy here………………..
For Help or Support:
Go to, click Assistance, then Click Help or click Support.
To add this meeting to your calendar program (for example Microsoft Outlook), click this link:
To check for compatibility of rich media players for Universal Communications Format (UCF), click the following link:
We’ve got to start meeting like this(TM)

Giving presentations on-line is a stressful process at the best of times, and the application sharing application needs to be so simple to use that you can concentrate on the presentation, not the medium. webex, in my opinion, fails on this criterion. There are so many new and easier-to-use conferencing services around that I was surprised that webex provided such a poor usability experience.

Reason #2: In another posting – Why in the world would Cisco buy WebEx?, Steve Borsch talks about the inherent value of webex’s proprietary MediaTone network. This could be called a Content Distribution Network (CDN), such as those operated by Akamai, Mirror Image or Digital Island (bought by Cable and Wireless a few years ago). You can see a flash overview of MediaTone on their web site.

The flash talks about this as an “Internet overlay network” that provides better performance than the unpredictable Internet, but as an individual user of webex I was still forced to access webex services via the Internet. I assume that MediaTone is a backbone network interconnecting webex’s data centres. It seems strange to me that an applications company like webex felt the need to spend several $bn on building its own network when perfectly adequate networks could be bought in from the likes of Level3 quite easily and at low cost. In the flash presentation, webex says that it started to build the network a decade ago, and it could have been seen as a value-added differentiator at that time. More likely, it was actually needed for the company’s applications to work adequately, as the Internet was so poor from a performance perspective in those days.

I have no profound insights into Cisco’s M&A strategy, but this particular acquisition brings Cisco into potential competition with two of its customer sectors at a stroke – on-line application vendors and the carrier community. This does strike me as a little perverse.

The insistent beat of Netronome!

March 15, 2007

Last week I popped in to visit Netronome at their Cambridge office and was hosted by David Wells, their VP Technology and GM Europe, who was one of the founders of the company. The other two founders were Niel Viljoen and Johann Tönsing, who previously worked for companies such as FORE Systems (bought by Marconi), Nemesys, Tellabs and Marconi. Netronome is HQed in Pittsburgh but has offices in Cambridge UK and South Africa.

I mentioned Netronome in a previous post about network processors – The intrigue of network / packet processors so I wanted to bring myself up to date with what they were up to following their closing of a $20M ‘C’ funding round in November 2006 led by 3i.

What do Netronome do?

Netronome manufacture network-processor-based hardware and software that enable the development of applications that need to undertake real-time network content flow analysis. Or, to be more accurate, that enable significant acceleration and throughput for applications that need to undertake packet inspection, or maybe deep packet inspection.

I say “to be more accurate” because it is possible to monitor packets in a network without network processors, using a low-cost Windows or Linux based computer; but if the data is flowing through a port at gigabit rates – which is most likely these days – then there is little capability to react to a detected traffic type other than switching the flow to another port or simply blocking it. If you really want to detect particular traffic types in a gigabit packet flow, make an action decision and change some of the data bits in the header or body – all transparently and at full line speed – then you will undoubtedly need a network processor based card from a company like Netronome. The Intel-powered 16-micro-engine NPU used in Netronome’s products enables the inspection of upwards of 1 million simultaneous bidirectional flows.
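To make the flow-versus-packet distinction concrete, here is a minimal Python sketch of the 5-tuple flow table that any inspection application has to maintain, whether it runs on a plain server or on an NP. The packet fields and function names are purely illustrative – they are not Netronome’s API.

```python
from collections import defaultdict

# A "flow" is conventionally identified by the 5-tuple:
# (source IP, source port, destination IP, destination port, protocol).
def flow_key(pkt):
    # Normalise direction so both halves of a bidirectional
    # conversation map onto the same flow entry.
    a = (pkt["src_ip"], pkt["src_port"])
    b = (pkt["dst_ip"], pkt["dst_port"])
    return (min(a, b), max(a, b), pkt["proto"])

def classify(packets):
    """Group packets into flows and count bytes per flow."""
    flows = defaultdict(int)
    for pkt in packets:
        flows[flow_key(pkt)] += pkt["length"]
    return flows

packets = [
    {"src_ip": "10.0.0.1", "src_port": 5060, "dst_ip": "10.0.0.2",
     "dst_port": 5060, "proto": "udp", "length": 200},
    {"src_ip": "10.0.0.2", "src_port": 5060, "dst_ip": "10.0.0.1",
     "dst_port": 5060, "proto": "udp", "length": 180},
    {"src_ip": "10.0.0.1", "src_port": 40000, "dst_ip": "10.0.0.3",
     "dst_port": 80, "proto": "tcp", "length": 1500},
]
flows = classify(packets)
print(len(flows))  # 2 – the two SIP packets collapse into one bidirectional flow
```

Doing this in Python for a handful of dictionaries is trivial; doing it for a million simultaneous flows at gigabit line rate is exactly what the NP hardware is for.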

Netronome’s product is termed an Open Appliance Platform. Equipment vendors have used network processors (NPs) for many years. For example, Cisco, Juniper and the like would use them to process packets on an interface card or blade. This would more than likely be an in-house developed NP architecture, used in combination with hard-wired logic and Field Programmable Gate Arrays (FPGAs). This combination gives complete flexibility: run in software on the NP what is best run there, and use the FPGAs to accommodate architecture elements that may change – because standards are incomplete, for example.

Netronome’s Network Acceleration Card

Other companies that have used NPs for a long time make what are known as Network Appliances. A network appliance is a standalone hardware / software bundle, often based on Linux, that provides a plug-and-play application that can be connected to a live network with a minimum of work. Many network appliances simply use a server motherboard with two standard gigabit network cards installed, Linux as the OS, and the application on top. These appliance vendors know that they need the acceleration they can get from an NP, but they often don’t want to deal with the complexity of hardware design and NP programming.

Either way, they have written their application-specific software to run on top of their own hardware design. Every appliance manufacturer has taken a proprietary approach, which creates a significant support challenge as each new generation of NP architecture improves throughput. Being software vendors in reality, all they really want to do is write software and applications, not have the bother of supporting expensive hardware.

This is where Netronome’s Open Appliance Platform comes in. Netronome has developed a generic hardware platform and the appropriate virtual run-time software that enable appliance vendors to dump their own challenging-to-support hardware and use Netronome’s NP-based platform instead. The important aspect is that this can be achieved with minimal change to their application code.

What are the possible applications (or use cases) of Netronome’s Network Acceleration card?

The use of Netronome’s product is particularly beneficial as the core of network appliances in the following application areas.

Security: All types of enterprise network security application that depend on the inspection and modification of live network traffic.

SSL Inspector: The Netronome SSL Inspector is a transparent proxy for Secure Sockets Layer (SSL) network communications. It enables applications to access the clear text in SSL-encrypted connections and has been designed for security and network appliance manufacturers, enterprise IT organisations and system integrators. The SSL Inspector allows network appliances to be deployed with the highest levels of flow analysis while still maintaining multi-gigabit line-rate network performance.

Compliance and audit: To ensure that all company employees are in compliance with new regulatory regimes, companies must voluntarily discover, disclose, expeditiously correct, and prevent recurrence of future violations.

Network access and identity: To check the behaviour and personal characteristics by which an individual is identified as a valid user of an application or network.

Intrusion detection and prevention: This has always been a heartland application for network processors.

Intelligent billing: By detecting a network event or a particular traffic flow, a billing event could be initiated.

Innovative applications: To me this is one of the most interesting areas, as it depends on having a good idea. Applications could include modifying QoS parameters on the fly in an MPLS network, or detecting particular application flows on the fly – grey VoIP traffic, for example. If you want to know about other application ideas – give me a call!
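As a sketch of what such detect-and-act logic looks like, here is a toy rule table in Python mapping flows to actions (billing event, QoS re-mark, pass through). The port heuristics and action names are my own illustration of the idea, not anything from Netronome’s products.

```python
# Illustrative only: a toy detect-and-act rule table of the kind an
# NP-accelerated appliance would apply per flow at line rate.
RULES = [
    # (predicate, action) pairs; first match wins.
    (lambda f: f["dst_port"] == 5060, "bill"),  # SIP signalling -> billing event
    (lambda f: f["proto"] == "udp" and f["dst_port"] >= 16384,
     "remark_dscp"),                            # likely RTP media -> change QoS marking
]

def decide(flow):
    """Return the action for a flow; default is to forward untouched."""
    for predicate, action in RULES:
        if predicate(flow):
            return action
    return "forward"

print(decide({"proto": "udp", "dst_port": 5060}))   # bill
print(decide({"proto": "udp", "dst_port": 20000}))  # remark_dscp
print(decide({"proto": "tcp", "dst_port": 80}))     # forward
```

The interesting engineering is not the rule table itself but evaluating it, and rewriting header bits, for every flow without dropping below line rate – which is where the NP earns its keep.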

Netronome’s Architecture components

Netronome Flow Drivers: The Netronome Flow Drivers (NFD) provide high speed connectivity between the hardware components of the flow engine (NPU and cryptography hardware) and one or more Intel IA / x86 processors running on the motherboard. The NFD allows developers to write their own code for the IXP NPU and the IA / x86 processor.

Netronome Flow Manager: The Netronome Flow Manager (NFM) provides an open application programming interface for network and security appliances that require acceleration. The NFM not only abstracts (virtualises) the hardware interface of the Netronome Flow Engine (NFE), but its interfaces also guide the adaptation of applications to high-rate flow processing.

Overview of Netronome’s architecture components

Netronome real-time Flow Kernel: At the heart of the platform’s software subsystem is a real-time microkernel specialised for network infrastructure applications. The kernel coordinates and steers flows, rather than packets, and is thus called the Netronome Flow Kernel (NFK). The NFK does everything the NFM does and also supports virtualisation.

Open Appliance Platform: Netronome have recently announced a chassis system that can be used by ISVs to quickly provide a solution to their customers.


If your application or service really needs a network processor, you will realise this quite quickly: the performance of your non-NP-based network application will be too slow, it will be unable to undertake the real-time bit manipulation you need or, the real killer, it will be unable to scale to the flow rates your application will see in real-world deployment.

In the old days, programming NPs was a black art not understood by 99.9% of the world’s programmers, but Netronome is now making the technology more accessible by providing appropriate middleware – or abstraction layer – that enables network appliance software to be ported to their open platform without a significant rewrite or a detailed understanding of NP programming. Your application just runs in a virtual run-time environment and uses the flow API, and the Netronome product does the rest.
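Here is a hypothetical sketch of what such a port might look like: the application registers a flow handler with the middleware and never touches the NP directly. None of these class or method names are Netronome’s real API; they merely illustrate the shape of the abstraction layer.

```python
# Hypothetical abstraction-layer sketch; not Netronome's actual API.
class FlowEngine:
    """Stand-in for the accelerated flow kernel. In a real deployment
    the NP does the per-packet work and only hands flows to callbacks."""

    def __init__(self):
        self.handlers = []

    def register(self, match, handler):
        # The appliance declares which flows it cares about.
        self.handlers.append((match, handler))

    def inject(self, flow):
        # Emulates the kernel steering a new flow to the first matching app.
        for match, handler in self.handlers:
            if match(flow):
                return handler(flow)
        return "forward"  # unmatched flows pass through untouched

# The appliance vendor's existing application logic, unchanged:
def inspect_http(flow):
    return "inspected:" + flow["host"]

engine = FlowEngine()
engine.register(lambda f: f["dst_port"] == 80, inspect_http)
result = engine.inject({"dst_port": 80, "host": "example.com"})
print(result)  # inspected:example.com
```

The point of the design is that `inspect_http` knows nothing about micro-engines or hardware generations – swap the flow engine underneath and the application code is untouched.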

Good on ’em I say.