Path Computation Element (PCE): IETF’s hidden jewel

April 10, 2007

In a previous post, MPLS-TE and network traffic engineering, I talked about the challenges of communication network traffic engineering and capacity planning and their relation to MPLS-TE. Interestingly, I realised that I did not mention that all of the engineering planning, design and optimisation activities that form the core of network management usually take place off-line. What I mean by this is that a team of engineers sits down, either on an ad hoc basis driven by new network or customer acquisitions or as part of an annual planning cycle, to produce an upgrade or migration plan that can be used to extend their existing network to meet the needs of the additional traffic. This work does not impact live networks until the OPEX and CAPEX plans have been agreed and signed off by management teams and then implemented. A significant proportion of the data that drives this activity is obtained from product marketing and/or sales teams, who are supposed to know how much additional business, and hence additional traffic, will be imposed on the network in the time period covered by the planning activities.

This long-term method of planning network growth has been used since the dawn of time, and the process should put in place the checks and balances (thrown to the wind in the late 1990s) to ensure that neither too much nor too little investment is made in network expansion.

What is a Path Computation Element (PCE)?

What is a path through the network? I’ve covered this extensively in my previous posts about MPLS’s ability to guide traffic through a complex network and force particular packet streams to follow a constraint-based and pre-determined path from network ingress to network egress. This deterministic path or tunnel enables the improved QoS management of real-time services such as Voice over IP or IPTV.

Generally, paths are calculated and managed off-line as part of the overall traffic engineering activity. When a new customer is signed up, their traffic requirements are determined and the most appropriate paths for the traffic are superimposed on the current network topology, so as to best meet the customer’s needs while balancing traffic distribution across the network. If new physical assets are required, then these would be provisioned and deployed as necessary.

Planning cycles are traditionally focused on medium- to long-term needs and cannot really be applied to shorter-term planning. Such short-term needs could derive from a number of requirements, such as:

  • Changing network configurations dependent on the time of day. For example, there is usually a considerable difference in traffic profiles between office hours, evening hours and night time. The possibility of dynamically moving traffic dependent on busy hours (time being the new constraint) could provide significant cost benefits.
  • Dynamic or temporary path creation based on customers’ transitory needs.
  • Improved busy hour management through auto-rerouting of traffic.
  • Dynamic balancing of network load to reduce congestion.
  • Improved restoration when faults occur.

To be able to undertake these tasks a carrier would need to move away from off-line path computation to on-line path computation, and this is where the IETF’s Path Computation Element (PCE) Working Group comes to the rescue.

In essence, on-line PCE software acts very much along the same lines as a graphics chip handling off-loaded calculations for the main CPU in a personal computer. For example, a service requires that a new path be generated through the network, and that request, together with the constraints on the path such as bandwidth, delay etc., is passed to the attached PCE computer. The PCE has a complete picture of the flows and paths in the network at that precise moment, derived from other Operational Support System (OSS) programmes, so it can calculate in real time the optimal path through the network that meets the request. This path is then used to automatically update router configurations and the traffic engineering database.
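To make the offload pattern a little more concrete, here is a minimal sketch in Python of a router handing a constrained path request to a PCE that holds a copy of the traffic engineering database. All the names, structures and figures are my own invention for illustration – this is not PCEP, nor any vendor’s implementation.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class PathRequest:
    """A constrained path request handed to the PCE (hypothetical structure)."""
    ingress: str
    egress: str
    min_bandwidth_mbps: float   # every hop must have at least this much free capacity

class PathComputationElement:
    """Holds a copy of the traffic engineering database (TED) and answers requests."""

    def __init__(self, ted):
        # ted: {(node_a, node_b): free_mbps} -- links are treated as bidirectional here
        self.links = {}
        for (a, b), free in ted.items():
            self.links.setdefault(a, []).append((b, free))
            self.links.setdefault(b, []).append((a, free))

    def compute(self, req: PathRequest):
        """Breadth-first search over links that satisfy the bandwidth constraint."""
        queue = deque([[req.ingress]])
        seen = {req.ingress}
        while queue:
            path = queue.popleft()
            if path[-1] == req.egress:
                return path                       # explicit route, ingress to egress
            for nxt, free in self.links.get(path[-1], []):
                if nxt not in seen and free >= req.min_bandwidth_mbps:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None                               # no feasible path in the TED

# Usage: the router (path computation client) offloads the request and gets a path back.
ted = {("PE1", "P1"): 400, ("P1", "P2"): 250, ("P2", "PE7"): 300, ("PE1", "PE7"): 50}
pce = PathComputationElement(ted)
print(pce.compute(PathRequest("PE1", "PE7", min_bandwidth_mbps=100)))
# -> ['PE1', 'P1', 'P2', 'PE7'] (the direct 50 Mbit/s link is pruned out)
```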

In practice, the PCE architecture calls for each Autonomous System (AS) domain to have its own PCE, and if a multi-domain path is required the affected PCEs will co-operate to calculate the required path, co-ordinated by a ‘master’ PCE. The standard supports any combination, number or location of PCEs.

Why a separate PCE?

There are a number of reasons why a separate PCE is being proposed:

  • Path computation of any form is not an easy and simple task by any means. Even with appropriate software, computing all the primary, back-up and service paths on a complex network will strain computing techniques to the extreme. A number of companies that provide software capable of undertaking this task were listed in the post above.
  • The PCE will need to undertake computationally intensive calculations, so it is unlikely (to me) that a PCE capability would ever be embedded into a router or switch, as they generally do not have the power to undertake path calculations in a complex network.
  • If path calculations are to be undertaken in a real-time environment then, unlike off-line software which can take hours for an answer to pop out, a PCE would need to provide an acceptable solution in just a few minutes or seconds.
  • Most MPLS routers calculate a path on the basis of a single constraint, e.g. the shortest path. Calculating paths based on multiple constraints such as bandwidth, latency, cost or QoS significantly increases the computing power required to reach a solution (see the sketch after this list).
  • Routers route and have limited or partial visibility of the complete network, domain and service mix and thus are not able to undertake the holistic calculations required in a modern converged network.
  • In a large network the Traffic engineering database (TED) can become very large creating a large computational overhead for a core router. Moving TED calculations to a dedicated PCE server could be beneficial in lowering path request response times.
  • In a traditional IP network there may be many legacy devices that do not have an appropriate control plane thus creating visibility ‘holes’.
  • A PCE could be used to provide alternative restorative routing of traffic in an emergency. As a PCE would have a holistic view of the network, restoration using a PCE could reduce potential knock-on effects of a reroute.
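As promised in the list above, here is a minimal sketch of what multi-constraint path computation involves: prune the links that cannot meet the bandwidth floor, run a shortest-delay search over what remains, and reject any answer that breaks the end-to-end delay bound. The graph, link attributes and thresholds are invented for illustration; real CSPF implementations are considerably more sophisticated.

```python
import heapq

def constrained_shortest_path(links, src, dst, min_bw_mbps, max_delay_ms):
    """CSPF-style search: prune links below the bandwidth floor, then run
    Dijkstra on delay and reject any result that breaks the delay bound.
    `links` maps (a, b) -> (free_mbps, delay_ms); links are treated as bidirectional."""
    adj = {}
    for (a, b), (free, delay) in links.items():
        if free >= min_bw_mbps:                  # constraint 1: bandwidth pruning
            adj.setdefault(a, []).append((b, delay))
            adj.setdefault(b, []).append((a, delay))

    best = {src: 0.0}
    heap = [(0.0, src, [src])]
    while heap:
        delay, node, path = heapq.heappop(heap)
        if node == dst:
            # constraint 2: end-to-end delay bound
            return (path, delay) if delay <= max_delay_ms else (None, None)
        for nxt, d in adj.get(node, []):
            if delay + d < best.get(nxt, float("inf")):
                best[nxt] = delay + d
                heapq.heappush(heap, (delay + d, nxt, path + [nxt]))
    return None, None

links = {("PE1", "P1"): (400, 5), ("P1", "P2"): (250, 10),
         ("P2", "PE7"): (300, 8), ("PE1", "PE7"): (600, 40)}
print(constrained_shortest_path(links, "PE1", "PE7", min_bw_mbps=100, max_delay_ms=30))
# -> (['PE1', 'P1', 'P2', 'PE7'], 23.0) -- the direct link is fine on bandwidth,
#    but its 40 ms delay would break the bound, so the three-hop path wins.
```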

The key aspect of multi-layer support

One of the most interesting architectural aspects of the PCE is that it addresses a very significant issue faced by all carriers today – multi-layer support. All carriers utilise multiple layers to transport traffic – these could include IP-VPN, IP, Ethernet, TDM, MPLS, SDH and optical networks in several possible combinations. The issue is that a path computation at the highest level inevitably has a knock-on effect down the hierarchy to the physical optical layer. Today, each of these layers and protocols is generally managed, planned and optimised as a separate entity, so it would make sense that when a new path is calculated, its requirements are passed down the hierarchy so that knock-on effects can be better managed. The addition of a small new IP link could force the need to add an additional fibre.
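A toy illustration of that knock-on effect, with invented layer names and capacity figures: a new demand is checked against each layer in turn, and any layer without spare capacity flags the need for new plant.

```python
# Toy illustration of the multi-layer knock-on effect described above: a new
# IP-layer demand is checked against the capacity of the layers beneath it, and
# any layer that cannot absorb it flags the need for new capacity (e.g. a fibre).
# The layer stack, capacities and figures are invented purely for illustration.

layer_stack = [
    # (layer name, free capacity in Gbit/s on the underlying segment)
    ("IP/MPLS", 2.0),
    ("SDH",     1.5),
    ("Optical", 0.5),
]

def plan_new_link(demand_gbps, layers):
    """Walk down the layer hierarchy and report which layers need upgrading."""
    actions = []
    for name, free in layers:
        if free >= demand_gbps:
            actions.append(f"{name}: carry {demand_gbps} Gbit/s on existing capacity")
        else:
            actions.append(f"{name}: only {free} Gbit/s free -> provision new capacity")
    return actions

for step in plan_new_link(1.0, layer_stack):
    print(step)
# A 'small' 1 Gbit/s IP link fits the IP and SDH layers, but the optical layer
# has only 0.5 Gbit/s spare -- so the request cascades into a new fibre or wavelength.
```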

Clearly, providing flow-through and visibility of new services to all layers and managing path computation on a multi-layer basis would be a real boon for network optimisation and cost reduction. However, let’s bear in mind that this represents a nirvana solution for planning engineers!

A Multi-layer path

The PCE specification is being defined to provide this cross-layer or multi-layer capability. Note that a PCE is not a solution aimed at use on the whole Internet – clearly that would be a step too far, along the lines of the whole Internet upgrading to IPv6!

I will not plunge into the depths of the PCE architecture here, but a complete overview can be found in A Path Computation Element (PCE)-Based Architecture (RFC 4655). At the highest level the PCE talks to a signalling engine that takes in requests for a new path calculation and passes any consequential requests to other PCEs that might be needed for an inter-domain path. The PCE also interacts with the traffic engineering database (TED) to automatically update it as required (picture source: this paper).

Another interesting requirements document is Path Computation Element Communication Protocol (PCECP) Requirements.

Round up

It is very early days for the PCE project, but it would seem to provide one of the key elements required to enable carriers to effectively manage a fully converged Next Generation Network. However, I would imagine that the operational management in many carriers would be aghast at putting even transient path computation under on-line control, considering the risk and the consequences to customer experience if it went wrong.

Clearly the PCE architecture has to be based on powerful computing engines, software that can holistically monitor the network and calculate new paths in seconds and, most importantly, a truly resilient network element. Phew!

Note: One of the few commercial companies working on PCE software is Aria Networks, who are based in the UK and whose CTO, Adrian Farrel, also chairs the PCE Working Group. I do declare an interest as I undertook some work for Aria Networks in 2006.

Addendum #1: GMPLS and common control

Addendum #2: Aria Networks shows the optimal path

Addendum #3: It was interesting to come across a discussion about IMS’s Resource and Admission Control Function (RACF), which seems to define a ‘similar’ function. The RACF includes a Policy Decision capability and a Transport Resource Control capability. A discussion can be found here, starting at slide 10. Does RACF compete with PCE, or could PCE be a part of RACF?

Addendum #4: New web site focusing on PCE: http://pathcomputationelement.com


Islands of communication or isolation?

March 23, 2007

One of the fundamental tenets of the communication industry is that you need 100% compatibility between devices and services if you want to communicate. This was clearly understood when the Public Switched Telephone Network (PSTN) was dominated by local monopolies in the form of incumbent telcos. Together with the ITU, they put considerable effort into standardising all the commercial and technical aspects of running a national voice telco.

For example, the commercial settlement standards enabled telcos to share the revenue from each and every call that made use of their fixed or wireless infrastructure no matter whether the call originated, terminated or transited their geography. Technical standards included everything from compression through to transmission standards such as Synchronous Digital Hierarchy (SDH) and the basis of European mobile telephony, GSM. The IETF’s standardisation of the Internet has brought a vast portion of the world’s population on line and transformed our personal and business lives.

However, standardisation in this new century is now often driven as much by commercial businesses and business consortia as by the traditional standards bodies, which often leads to competing solutions and standards slugging it out in the market place (e.g. PBB-TE and T-MPLS). I guess this is as it should be if you believe in free trade and enterprise. But, as mere individuals in this world of giants, these issues can cause us users real pain.

In particular, the current plethora of what I term islands of isolation means that we are often unable to communicate in the ways that we wish to. In the ideal world, as exemplified by the PSTN, you are able to talk to every person in the world who owns a phone, as long as you know their number. Whereas many, if not most, of the new media communications services we choose to use to interact with friends and colleagues are in effect closed communities that are unable to interconnect.

What are the causes of these so-called islands of isolation? Here are a few examples.

Communities: There are many Internet communities including free PC-to-PC VoIP services, instant messaging services, social or business networking services or even virtual worlds. Most of these focus on building up their own 100% isolated communities. Of course, if one achieves global domination, then that becomes the de facto standard by default. But, of course, that is the objective of every Internet social network start-up!

Enterprise software: Most purveyors of proprietary enterprise software thrive on developing products that are incompatible. The Lotus Notes and Outlook email systems were but one example. This is often still the case today when vendors bolt advanced features onto the basic product that are not available to anyone not using that software – presence springs to mind. This creates vendor communities of users.

Private networks: Most enterprises are rightly concerned about security and build strong protective firewalls around their employees to protect themselves from malicious activities. This means that employees of that company have full access to their own services, but these are not available to anyone outside of the firewall for use on an inter-company basis. Combine this with the deployment of the vendor-specific enterprise software described above and you create lots of isolated enterprise communities!

Fixed network operators: It’s a very competitive world out there and telcos just love offering value-added features and services that are only offered to their customer base. Free proprietary PC-PC calls come to mind and more recently, video telephones.

Mobile operators: A classic example with wireless operators was the unwillingness to provide open Internet access and only provide what was euphemistically called ‘walled garden’ services – which are effectively closed communities.

Service incompatibilities: A perfect example of this was MMS, the supposed upgrade to SMS. Although there was a multitude of issues behind the failure of MMS, the inability to send an MMS to a friend who used another mobile network was one of the principal ones. Although this was belatedly corrected, it came too late to help.

Closed garden mentality: This idea is alive and well amongst mobile operators striving to survive. They believe that only offering approved services to their users is in their best interests. Well, no it isn’t!

Equipment vendors: Whenever a standards body defines a basic standard, equipment vendors nearly always enhance the standard feature set with ‘rich’ extensions. Of course, anyone using an extension could not work with someone who was not! The word ‘rich’ covers a multiplicity of sins.

Competitive standards: User groups who adopt different standards become isolated from each other – the consumer and music worlds are riven by such issues.

Privacy: This is seen as such an important issue these days that many companies will not provide phone numbers or even email addresses to a caller. If you don’t know who you want, they won’t tell you! A perfect definition of a closed community!

Proprietary development: In the absence of standards, companies will develop pre-standard technologies and slug it out in the market. Other companies couldn’t care less about standards and follow a proprietary path just because they can and have the monopolistic muscle to do so. Bet you can name one or two of those!

One takeaway from all this is that, in the real world, you can’t avoid islands of isolation: all of us have to use multiple services and technologies to interact with colleagues, services that are effectively islands of isolation themselves and will probably remain so for the indefinite future in the competitive world we live in.

Your friends, family and work colleagues, by their own choice, geography and lifestyle, probably use a completely different set of services to yourself. You may use MSN, while colleagues use AOL or Yahoo Messenger. You may choose Skype but another colleague may use BT Softphone.

There are partial attempts at solving these issues for subsets of islands, but overall this remains a major conundrum that limits our ability to communicate at any time, any place, anywhere. The cynic in me says that if you hear about any product or initiative that relies on these islands of isolation disappearing in order to succeed, run a mile – no, ten miles! On the other hand, it could be seen as a land of opportunity?


webex + Cisco thoughts

March 19, 2007

I first read about the Cisco acquisition of Webex on Friday when a colleague sent me a post from SiliconValley.com – It’s more than we wanted to spend, but look how well it fits. It’s synchronicity in operation again, of course, because I mentioned webex in a post about a new application sharing company: Would u like to collaborate with YuuGuu? There are many other postings about this deal with a variety of views – some more relevant than others – TechCrunch for example: Cisco Buys WebEx for $3.2 Billion.

Although I am pretty familiar with the acquisition history of Cisco, I must admit that I was surprised at this opening of the chequebook, for several reasons.

Reason #1: I used webex quite a lot last year and really found it quite a challenge to use. My biggest area of concern was usability.

(a) When using webex there are several windows open on your desktop, making its use quite confusing. At least once I closed the wrong window, thus accidentally closing the conference. As I was just concluding a pitch I was more than unhappy, as it closed both the video and the audio components of the conference! I had broken my golden rule of not using separate audio bridging and application sharing services.

(b) When using webex’s conventional audio bridge, you have to open the conference using a webex web site page beforehand. If you fail to do so, the bridge cannot be opened and everyone receives an error message when they dial in. Correcting this takes about 5 minutes. Even worse, you cannot use the audio bridge on a standalone basis without having access to a PC! Not good when travelling.

(c) The UI is over-complicated and challenging for users under the pressure of giving a presentation. Even the invite email that webex sends out is confusing – the one below is typical. Although this example is the one sent to the organiser, the ones sent to participants are little better.

Hello Chris Gare,
You have successfully scheduled the following meeting:
TOPIC: zzzz call
DATE: Wednesday, May 17, 2006
TIME: 10:15 am, Greenwich Standard Time (GMT -00:00, Casablanca ) .
MEETING NUMBER: 705 xxx xxx
PASSWORD: xxxx
HOST KEY: yyyy
TELECONFERENCE: Call-in toll-free number (US/Canada): 866-xxx-xxxx
Call-in number (US/Canada): 650-429-3300
Global call-in numbers: https://webex.com/xxx/globalcallin.php?serviceType=MC&ED=xxxx
1. Please click the following link to view, edit, or start your meeting.
https://xxx.webex.com/xxx/j.php?ED=87894897
Here’s what to do:
1. At the meeting’s starting time, either click the following link or copy and paste it into your Web browser:
https://xxx.webex.com/xxx/j.php?ED=xxxxx
2. Enter your name, your email address, and the meeting password (if required), and then click Join.
3. If the meeting includes a teleconference, follow the instructions that automatically appear on your screen.
That’s it! You’re in the web meeting!
WebEx will automatically setup Meeting Manager for Windows the first time you join a meeting. To save time, you can setup prior to the meeting by clicking this link:
https://xxx.webex.com/xxx/meetingcenter/mcsetup.php
For Help or Support:
Go to https://xxx.webex.com/xxx/mc, click Assistance, then Click Help or click Support.
………………..end copy here………………..
For Help or Support:
Go to https://xxx.webex.com/xxx/mc, click Assistance, then Click Help or click Support.
To add this meeting to your calendar program (for example Microsoft Outlook), click this link:
https://xxx.webex.com/xxx/j.php?ED=87894897&UID=480831657&ICS=MS
To check for compatibility of rich media players for Universal Communications Format (UCF), click the following link:
https://xxx.webex.com/xxx/systemdiagnosis.php
http://www.webex.com
We’ve got to start meeting like this(TM)

Giving presentations on-line is a stressful process at the best of times, and the application sharing tool needs to be so simple to use that you can concentrate on the presentation rather than the medium. webex, in my opinion, fails on this criterion. There are so many new and easier-to-use conferencing services around that I was surprised that webex provided such a poor usability experience.

Reason #2: In another posting – Why in the world would Cisco buy WebEx? – Steve Borsch talks about the inherent value of webex’s proprietary MediaTone network. This could be called a Content Distribution Network (CDN), such as those operated by Akamai, Mirror Image or Digital Island, which was bought by Cable and Wireless a few years ago. You can see a Flash overview of MediaTone on their web site.

The Flash presentation talks about this as an “Internet overlay network” that provides better performance than the unpredictable Internet, but as an individual user of webex I was still forced to access webex services via the Internet. I assume that MediaTone is a backbone network interconnecting webex’s data centres. It seems strange to me that an applications company like webex felt the need to spend several $bn on building their own network when perfectly adequate networks could be bought in from the likes of Level 3 quite easily and at low cost. In the Flash presentation, webex says that it started to build the network a decade ago, when it could have been seen as a value-added differentiator. More likely, it was actually needed for the company’s applications to work adequately, as the Internet was so poor from a performance perspective in those days.

I have no profound insights into Cisco’s M&A strategy, but this particular acquisition brings Cisco into potential competition with two of its customer sectors at a stroke – on-line application vendors and the carrier community. This does strike me as a little perverse.


The insistent beat of Netronome!

March 15, 2007

Last week I popped in to visit Netronome at their Cambridge office and was hosted by David Wells, their VP Technology and GM Europe, who was one of the founders of the company. The other two founders were Niel Viljoen and Johann Tönsing, who previously worked for companies such as FORE Systems (bought by Marconi), Nemesys, Tellabs and Marconi. Netronome is headquartered in Pittsburgh but has offices in Cambridge, UK and in South Africa.

I mentioned Netronome in a previous post about network processors – The intrigue of network / packet processors so I wanted to bring myself up to date with what they were up to following their closing of a $20M ‘C’ funding round in November 2006 led by 3i.

What do Netronome do?

Netronome manufacture network-processor-based hardware and software that enable the development of applications that need to undertake real-time analysis of network content flows. Or, to be more accurate, that enable significant acceleration and throughput for applications that need to undertake packet inspection or perhaps deep packet inspection.

I say “to be more accurate” because it is possible to monitor packets in a network without network processors, using a low-cost Windows or Linux based computer, but if the data is flowing through a port at gigabit rates – which is most likely these days – then there is little capability to react to a detected traffic type other than switching the flow to another port or simply blocking it. If you really want to detect particular traffic types in a gigabit packet flow, make an action decision, and change some of the data bits in the header or body, all transparently and at full line speed, then you will undoubtedly need a network-processor-based card from a company like Netronome. The 16 micro-engine Intel network processor used in Netronome’s products enables the inspection of upwards of 1 million simultaneous bidirectional flows.
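For a feel of the detect–decide–act pattern (and nothing more – this is plain Python on a host CPU, nowhere near line rate, and none of it is Netronome’s API), here is a toy flow classifier keyed on the usual 5-tuple:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowKey:
    """The usual 5-tuple that identifies a bidirectional flow."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str

def classify(flow: FlowKey) -> str:
    """Toy detect/decide step: map a flow to an action.
    Real NP-based inspection would look into payloads at line rate;
    the rules here are invented purely for illustration."""
    if flow.protocol == "udp" and flow.dst_port == 5060:
        return "mark-ef"       # e.g. give VoIP signalling a better QoS marking
    if flow.dst_port == 443:
        return "forward"       # leave encrypted web traffic untouched
    return "mirror"            # send anything else to a monitoring port

flow_table = {}                # per-flow state: action chosen on the first packet seen

def handle_packet(flow: FlowKey):
    action = flow_table.setdefault(flow, classify(flow))
    return action              # the act step: rewrite headers, drop, mirror, ...

print(handle_packet(FlowKey("10.0.0.5", "192.0.2.9", 40000, 5060, "udp")))  # mark-ef
```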

Netronome’s product is termed an Open Appliance Platform. Equipment vendor companies have used network processors (NPs) for many years. For example Cisco, Juniper and the like would use them to process packets on an interface card or blade. This would more than likely be an in-house developed NP architecture used in combination with hard-wired logic and Field Programmable Gate Arrays (FPGAs). This combination enables complete flexibility to run what’s best to run in software on the NP and use the FPGAs to accommodate possible architecture elements that may change – maybe due to standards being incomplete for example.

Netronome’s Network Acceleration Card

Other companies that have used NPs for a long time make what are known as network appliances. A network appliance is a standalone hardware/software bundle, often based on Linux, that provides a plug-and-play application that can be connected to a live network with a minimum of work. Many network appliances are simply a server motherboard with two standard gigabit network cards installed, Linux as the OS and the application on top. These appliance vendors know that they need the acceleration they can get from an NP, but they often don’t want to deal with the complexity of hardware design and NP programming.

Either way, they have written their application-specific software to run on top of their own hardware design. Every appliance manufacturer has taken a proprietary approach, which creates a significant support challenge as each new generation of NP architecture improves throughput. Being software vendors in reality, all they really want to do is write software and applications and not have the bother of supporting expensive hardware.

This is where Netronome’s Open Appliance Platform comes in. Netronome has developed a generic hardware platform and the appropriate virtual run-time software that enable appliance vendors to drop their own challenging-to-support hardware and use Netronome’s NP-based platform instead. The important aspect is that this can be achieved with minimal change to their application code.

What are the possible applications (or use cases) of Netronome’s Network Acceleration card?

The use of Netronome’s product is particularly beneficial as the core of network appliances in the following application areas.

Security: All types of enterprise network security applications that depend on the inspection and modification of live network traffic.

SSL Inspector: The Netronome SSL Inspector is a transparent proxy for Secure Socket Layer (SSL) network communications. It enables applications to access the clear text in SSL-encrypted connections and has been designed for security and network appliance manufacturers, enterprise IT organisations and systems integrators. The SSL Inspector allows network appliances to be deployed with the highest levels of flow analysis while still maintaining multi-gigabit line-rate network performance.

Compliance and audit: To ensure that all company employees are in compliance with new regulatory regimes, companies must voluntarily discover, disclose and expeditiously correct violations, and prevent their recurrence.

Network access and identity: To check the behaviour and personal characteristics by which an individual is defined as a valid user of an application or network.

Intrusion detection and prevention: This has always been a heartland application for network processors.

Intelligent billing: By detecting a network event or a particular traffic flow, a billing event could be initiated.

Innovative applications: To me this is one of the most interesting areas as it depends on having a good idea, but applications could include modifying QoS parameters on the fly in an MPLS network or detecting particular application flows on the fly – grey VoIP traffic, for example. If you want to know about other application ideas – give me a call!

Netronome’s Architecture components

Netronome Flow Drivers: The Netronome Flow Drivers (NFD) provide high speed connectivity between the hardware components of the flow engine (NPU and cryptography hardware) and one or more Intel IA / x86 processors running on the motherboard. The NFD allows developers to write their own code for the IXP NPU and the IA / x86 processor.

Netronome Flow Manager: The Netronome Flow Manager (NFM) provides an open application programming interface for network and security appliances that require acceleration. The NFM not only abstracts (virtualises) the hardware interface of the Netronome Flow Engine (NFE), but its interfaces also guide the adaptation of applications to high-rate flow processing.

Overview of Netronome’s architecture components

Netronome real-time Flow Kernel: At the heart of the platform’s software subsystem is a real-time microkernel specialised for network infrastructure applications. The kernel coordinates and steers flows, rather than packets, and is thus called the Netronome Flow Kernel (NFK). The NFK does everything the NFM does and additionally supports virtualisation.

Open Appliance Platform: Netronome have recently announced a chassis system that can be used by ISVs to quickly provide a solution to their customers.

Round-up

If your application or service really needs a network processor you will realise this quite quickly: the performance of your non-NP-based network application will be too slow, it will be unable to undertake the real-time bit manipulation you need or, the real killer, it will be unable to scale to the flow rates your application will see in real-world deployment.

In the old days, programming NPs was a black art not understood by 99.9% of the world’s programmers, but Netronome is now making the technology more accessible by providing appropriate middleware – or abstraction layer – that enables network appliance software to be ported to their open platform without a significant rewrite or a detailed understanding of NP programming. Your application just runs in a virtual run-time environment and uses the flow API, and the Netronome product does the rest.

Good on ’em I say.


Nexagent, an enigmatic company?

March 12, 2007

In a recent post, MPLS and the limitations of the Internet, I wrote about the challenge of expecting predictable performance for real-time services such as client/server applications, Voice over IP (VoIP) or video services over the public Internet. I also wrote about the challenges of obtaining predictable performance for these same applications on a Wide Area Network (WAN) using IP-VPNs when the WAN straddles multiple carriers – as they almost always do. This is brought about by the fact that the majority of carriers do not currently interconnect their MPLS networks to enable seamless end-to-end, multi-carrier, Class-of-Service-based performance.

As mentioned in the post above, there are several companies that focus on providing this capability through a mixture of technologies, monitoring and a willingness to act as a prime contractor if they are a service provider. Today, however, the majority of carriers are only able to provide Service Level Agreements (SLAs) for IP traffic and customer sites that are on their own network. This forces enterprises of all sizes either to manage their own multi-carrier WANs or to outsource the task to a carrier or systems integrator that is willing to offer a single umbrella SLA and back this off with separate SLAs to each component provider carrier.

Operations Support System (OSS) vendor challenges

An Operations Support System is the software that handles workflows, management, inventory details, capacity planning and repair functions for service providers. Typically, an OSS uses an underlying Network Management System to actually communicate with network devices. There are literally hundreds of OSS vendors providing software to carriers today, but it is interesting to note that the vast majority of these only provide software to help carriers manage their network inside the cloud, i.e. to help them manage their own network. In practice, each carrier uses a mixture of in-house and bought-in third-party OSS to manage its network, so each carrier has, in effect, a proprietary network and service management regime that makes it virtually impossible to inter-connect its own IP data services with those of other carriers.

As you would expect in the carrier world, there are a number of industry standards organisations working on this issue, but it is such a major challenge that I doubt OSS environments could be standardised sufficiently to enable simple interconnection of IP OSSs anywhere in the near future – if ever. Some of these bodies are:

  • The IETF, who work at the network level such as MPLS, IP-VPNs etc;
  • The Telecom Management Forum, who have been working in the OSS space for many years;
  • The MPLS Frame Relay Forum, who “focus on advancing the deployment of multi-vendor, multi-service packet-based networks, associated applications, and interworking solutions”;
  • And one of the newest, IPSphere whose mission “is to deliver an enhanced commercial framework – or business layer – for IP services that preserves the fundamental ubiquity of the Internet’s technical framework and is also capable of supporting a full range of business relationships so that participants have true flexibility in how they add value to upstream service outcomes.”
  • The IT Infrastructure Library (ITIL), which is a “framework of best practice approaches intended to facilitate the delivery of high quality information technology (IT) services. ITIL outlines an extensive set of management procedures that are intended to support businesses in achieving both quality and value, in a financial sense, in IT operations. These procedures are supplier independent and have been developed to provide guidance across the breadth of IT infrastructure, development, and operations.”

As can be imagined, working in this challenging inter-carrier, service or OSS space presents both a major opportunity and a major challenge. One company that has chosen to do just this is Nexagent. Nexagent was formed in 2000 by Charlie Muirhead – also founder of Orchestream, Chris Gare – ex Cable and Wireless, and Dave Page – ex Cisco.

In my travels, I often get asked “what is it that Nexagent actually does?”, so I would like to have a go at answering this question, having set the scene in a previous post – MPLS and the limitations of the Internet.

The traditional way of delivering WAN-based enterprise services or solutions based on managing multiple service providers (carriers) has a considerable number of challenges associated with it. This could be a company WAN formed by integrating IP-VPNs or simple E1 / T1 TDM bandwidth services bought in from multiple service providers around the world.

The most common approach is a proprietary solution, which is usually of an ad hoc nature built up piecemeal over a number of years from earlier company acquisitions. The strategic idea would have been to integrate and harmonise these disparate networks but there was usually never enough money to start, let alone complete, the project.

Many of these challenges can be seen listed in the panel on the right taken from Nexagent’s brochure. Anyone that has been involved in managing an enterprise’s Information and Communications Technology (ICT) infrastructure will be very well aware of these issues!

Overview of Nexagent’s software

Deploying an end to end service or application running on a WAN, or solution as it is often termed, requires a combination of:

  1. workflow management: a workflow describes the order of a set of tasks performed by various individuals or systems to complete a given procedure within an organisation; and
  2. supply chain management: a supply chain represents the flow of materials, information and finances as they move in a process – or workflow – from one organisation or activity to the next.

In the situation where every carrier or service provider has adopted entirely different OSS combinations, workflow practices and supply chain processes, it is no wonder that every multi-carrier WAN represents a complex, proprietary and bespoke solution!

Nexagent has developed a unique abstraction or Meta Layer technology and methodology that manages and monitors the performance of an end-to-end WAN IP-VPN service or solution without the need for a carrier to swap-out their existing OSS infrastructure. Nexagent’s system runs in parallel with, and integrates with, multiple carriers’ existing OSS infrastructure and enables a prime-contractor to manage multi-carrier solutions in a coherent rather than an ad hoc manner.

Let’s go through what Nexagent offers following a standard process flow for deploying multi-supplier services or solutions using Nexagent’s adopted ICT Infrastructure Management (ICTIM) model.

Service or solution modelling: In the ICTIM reference model, this is the Design stage. This is a crucial step, focused on capturing all of the enterprise service requirements and the design as early in the process as possible, and maintaining that knowledge for the complete service lifecycle. Nexagent has developed a CAD-like modelling capability with front-end capture based on a simple-to-use Excel spreadsheet, since every carrier uses a different method of capturing their WAN designs. The model is created up front and acts as a reference benchmark if the design is changed at any future time. The tool provides price query capabilities for use with bought-in carrier services together with automated design rule verification.
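Purely to illustrate what “automated design rule verification” over a captured design might look like – the site records, fields and rules below are invented and are not a description of Nexagent’s tool – here is a toy check in Python:

```python
# Toy illustration of automated design rule verification over a captured WAN
# design. The site records, fields and rules are invented for illustration and
# are in no way a description of Nexagent's actual tool.

design = [
    {"site": "London HQ",   "access_mbps": 100, "cos": "gold",   "carrier": "CarrierA"},
    {"site": "Lyon branch", "access_mbps": 2,   "cos": "bronze", "carrier": "CarrierB"},
    {"site": "Pune DC",     "access_mbps": 45,  "cos": None,     "carrier": "CarrierC"},
]

def verify(design):
    """Return a list of rule violations found in the captured design."""
    issues = []
    for site in design:
        if site["cos"] is None:
            issues.append(f'{site["site"]}: no Class of Service assigned')
        if site["access_mbps"] < 2:
            issues.append(f'{site["site"]}: access circuit below 2 Mbit/s minimum')
    return issues

for issue in verify(design):
    print(issue)   # -> "Pune DC: no Class of Service assigned"
```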

Implementation engine: In the ICTIM reference model, this is the Deploy stage. The software automatically populates the design created at the design stage with the required service provider interconnect configurations, generates service work orders for each service provider involved with the design and provisions the network interconnect to create a physical working network. The software is based on a unified information flow to network service providers. Importantly, it schedules the individual components of the end-to-end solution to meet the enterprise roll-out and change management needs.
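In the same invented spirit (again, not Nexagent’s software), the deploy step boils down to turning a verified design into one work order per participating service provider, plus a note of the interconnects that stitch their networks together:

```python
# Toy illustration of the deploy step described above: group the sites in the
# verified design by carrier and emit one work order per service provider,
# plus the interconnect needed between each pair of carriers. The design
# records and fields are invented purely for illustration.

design = [
    {"site": "London HQ",   "carrier": "CarrierA", "access_mbps": 100, "cos": "gold"},
    {"site": "Lyon branch", "carrier": "CarrierB", "access_mbps": 10,  "cos": "silver"},
    {"site": "Pune DC",     "carrier": "CarrierB", "access_mbps": 45,  "cos": "gold"},
]

def generate_work_orders(design):
    """Return per-carrier work orders and the carrier-to-carrier interconnects."""
    orders = {}
    for site in design:
        orders.setdefault(site["carrier"], []).append(
            f'{site["site"]}: provision {site["access_mbps"]} Mbit/s, CoS {site["cos"]}'
        )
    carriers = sorted(orders)
    interconnects = [(a, b) for i, a in enumerate(carriers) for b in carriers[i + 1:]]
    return orders, interconnects

orders, interconnects = generate_work_orders(design)
for carrier, lines in orders.items():
    print(carrier, "->", lines)
print("Interconnects to configure:", interconnects)
# -> CarrierA gets one order, CarrierB gets two, and one
#    CarrierA/CarrierB interconnect needs to be configured.
```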

Experience manager: In the ICTIM reference model, this is the Operate stage. The Nexagent software compares real-life end-to-end solution performance to the expected performance level as specified in the service or solution design. Any deviation from agreed component supplier SLAs will generate alerts into existing OSS environments.

The monitoring is characterised by active in-band measurement for each site and each CoS link by application group, and is closed-loop in nature, comparing actual performance to the expected performance stored in the service reference model. It can detect and isolate problems and includes optimisation and conformance procedures.
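Again purely as an illustration of that closed loop (invented names, metrics and thresholds – not Nexagent’s software), the comparison boils down to something like this:

```python
# A minimal sketch of the closed-loop idea described above: measured per-site,
# per-CoS performance is compared against the expected values held in the
# service reference model, and any breach raises an alert. All names, metrics
# and thresholds are invented for illustration.

expected = {  # from the service reference model: (max latency ms, max loss %)
    ("London HQ", "gold"):   (40.0, 0.1),
    ("Pune DC",   "silver"): (180.0, 0.5),
}

measured = {  # from active in-band probes
    ("London HQ", "gold"):   (62.0, 0.05),
    ("Pune DC",   "silver"): (150.0, 0.2),
}

def check_sla(expected, measured):
    """Compare measurements against the reference model and list any breaches."""
    alerts = []
    for key, (max_latency, max_loss) in expected.items():
        latency, loss = measured.get(key, (None, None))
        if latency is None:
            alerts.append(f"{key}: no measurement available")
        elif latency > max_latency or loss > max_loss:
            alerts.append(f"{key}: measured {latency} ms / {loss}% breaches "
                          f"{max_latency} ms / {max_loss}% target")
    return alerts

for alert in check_sla(expected, measured):
    print(alert)   # flags the London gold link at 62 ms against a 40 ms target
```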

Physical network interconnect: Lastly, Nexagent developed the service interconnect template with network equipment vendors such as Cisco Systems; it physically enables the interconnection of IP-VPNs whose providers have chosen different, incompatible ways of defining their CoS-based services.

Who could use Nexagent’s technology?

Nexagent provides three examples of the application of their software – ‘use cases’ – on their web site:

Hybrid Virtual Network Operator is a service provider which has some existing network assets in some geography but lacks network reach and operational efficiency to win business from enterprises requiring out of territory service and customised solutions. Such a carrier could use Nexagent to extend their reach, standardise their off-net interface to save costs and ensure that services work as designed.

Data Centre Virtualisation: Data centre services are critical components of effective enterprise application solutions. A recent trend in data centre services is to share computing resources across multiple customers. Similarly, using multiple physical data centres enables more resilient and better performing services with improved load balancing. Using Nexagent simplifies the task of swapping the carriers delivering services to customers and makes it easier to monitor overall service performance.

Third Party Service Delivery: One of the main obstacles preventing carriers from growing market share and expanding into adjacent markets is the time and money needed to develop and implement new services. While many service providers want a broader portfolio of services, there is growing evidence that enterprises want to use multiple companies for services as a way of maintaining supply chain negotiation leverage – what Gartner calls enterprise multi-sourcing.

Round up

This all may sound rather complicated, but the industry pain that Nexagent helps solve is quite straightforward to appreciate when you consider the complexity of multi-carrier solutions, and Nexagent has taken a pretty unique approach to solving that pain.

Although there is not too much information in the public domain about Nexagent’s commercial activities, there is a most informative presentation – MPLS Interconnection and Multi-Sourcing for the Secure Enterprise by Leo McCloskey, who was Senior Director, Network and Partner Strategy at EDS when he presented it (Leo is now Nexagent’s VP of Marketing). An element of one of the slides is shown below. You can also see a presentation by Charlie Muirhead from Nexagent – Case Study: Solving the Interconnect Challenge.

These, and other presentations from the conference can be found from the 2006 MPLScon conference proceedings at Webtorials archive site. You will need to register with Webtorials before you can access these papers.

If you are still unsure about what Nexagent actually does or how they could help your business – go visit them in Reading!

Note: I should declare a personal interest in Nexagent as a co-founder, though I am no longer involved with day to day activities.

Addendum:  March 2008, EDS hoovers up Reading networking firm


3D heaven – Google’s SketchUp

February 23, 2007

After going on about the pernicious nature of free voice services – Are voice (profits) history? – I’m about to sing the praises of some free software, and as this free software comes from the Google stable, I guess this is acceptable! Of course, you can upgrade to the professional version…

SketchUp is an excellent 3D modelling tool that allows you to create 3D images of any complexity with relative simplicity and ease. I say “relative simplicity” because it takes some time to learn how to use it, as it is SO different from any other program. On the one hand, it is very intuitive to use; on the other, it can be very frustrating while you are learning a completely new way to use your mouse and its associated key clicks.

I’m modelling a house extension at the moment and the drawing on my left took me just a few hours to complete.

SketchUp was developed by the start-up company @Last Software, which was formed in 1999 and acquired by Google in March 2006. Looking on the Internet, it seems the feature that attracted the purchase was a plug-in that allowed a user to place their 3D architectural creations on Google Earth maps – ideal for town planners!

Watch this video for a quick overview of SketchUp.

SketchUp is stuffed full of features, there is lots of help and tutorial material to get you going, and it really is quite fun to use. There are literally hundreds of different materials to choose from when you fill a surface, and you can download lots of pre-crafted objects to help you along the way.

It fully understands perspective, and when you move objects they get bigger as they get nearer to you – try doing that in PowerPoint! You can even use PhotoMatch to import a photo and match the perspective of your drawing to that of the photo, so that you can superimpose your drawing on the photo in a realistic way.

I could go on for ages, but you should download it and try it for yourself – you will have hours of fun and gain another useful tool to support your creative zeal.


Will I live through browser incompatibilities?

February 20, 2007

Living in the 21st century can be very trying sometimes – or is this just me? After posting about my PC problems last week, I’ve found that I’ve been plagued in numerous other aspects of 21st-century living as well. No, I’m not talking about those people who use mobiles in quiet railway carriages, but the age-old problem of browser incompatibilities.

And I’m not talking about the bad rendering of HTML code on a particular browser, but the much more pernicious practice of writing services that will only work on one particular browser.

Nobody would particularly mind a new service that has problems; you would just swear and move on! But honestly, to come across a service that has had millions of pounds of public money spent on it is another matter.

I’m talking here about the UK’s National Health Service (NHS). They have introduced a service that allows a patient to book their own appointment. On the surface this sounds great, but honestly, the fact that it only works with Internet Explorer is not brilliant by any means!

 

All I wanted to do was to make a hospital appointment using Firefox…

Another example was when I thought about early retirement a couple of years ago – if only! Great, I thought, you can do it on-line, but Microsoft Visual Basic inevitably decided otherwise!

 

To round things off, remember the glasses I broke last week in my ‘PC problem’ week? I went for an eye test at Boots this morning to get them replaced. All was going swimmingly until I discovered that they had just installed a new IBM system based on – guess what? – Windows. I thought I might crack a joke to the receptionist but held back. Then, guess what, right at the end of the consultation, up popped the usual little Windows-style error message containing the word Abort! Well, the system did just that and dropped out, losing all the information about my prescription that had just been entered.

I promise to return to normal service soon, get back to talking about the positive aspects of technology and stop being a grumpy old man. Maybe I should swap to using an Apple (everyone else in my house uses one), but what about the rest of the world?