Where now frame relay?

February 28, 2007

The invite to the 2007 MPLS Roadmap conference provides some interesting and timely ‘facts’ about frame relay and its current use by service providers.

Fact #1: Carriers are rushing to move from legacy voice/data networks to an MPLS-based network for their enterprise services.

Fact #2: Frame relay, the dominant enterprise network service of the past 10 years, runs on the carriers’ legacy networks.

Fact #3: Some carriers will no longer respond to a frame relay RFP…and even those that do are including contract language that offers little – sometimes no – assurance that frame relay will be supported for the term of the agreement.

Reading this, I thought it about time I brought the story of frame relay up to date. I first wrote about frame relay (FR) services an absolute age ago in 1992, when they were the very latest innovation alongside SDH and ATM. The core innovation in the frame relay protocol was its simplicity and data efficiency compared to its complex predecessor, X.25: it filled the gap for an efficient wide area network service protocol. Interestingly, this is the same KISS motivation that lies behind Ethernet.

Throughout the 90s, frame relay services went from strength to strength and were the principal data service that carriers pushed enterprises to use as the basis of their wide area networks (WANs). Mind you, there were mixed views from enterprises about whether they actually trusted frame relay services, and many have the same concerns about IP-VPNs today.

As FR is a packet-based protocol running on a shared carrier infrastructure (usually over ATM), many enterprises wouldn't touch it with a bargepole and stuck to buying basic TDM E1 / T1 bandwidth services, managing the WAN themselves. It was the financial community that principally took this view, and they have probably not changed their minds even now, with the advent of MPLS IP-VPNs. Remember, the principal traffic protocol being transferred over WANs is IP!

Often things seem quite funny when looked at in hindsight. In The rise and maturity of MPLS, I talked about the Achilles heel of connectionless IP data services as exemplified by the Internet – the lack of Class of Service capability. This can have a tremendous impact when the network is carrying real-time services such as voice or video traffic.

FR services, because they ran over ATM networks, actually had this capability, and carriers offered several levels of service derived from the ATM bearer, such as:

  • best effort (CIR=0, EIR=0)
  • guaranteed (CIR>0, EIR=0)
  • bandwidth on demand (CIR>0, EIR>0)

Here CIR is the Committed Information Rate and EIR the Excess Information Rate, and both were applied to what were known as Permanent Virtual Circuits (PVCs), although towards the end of the 90s temporary Switched Virtual Circuits (SVCs) became available. Voice over frame relay services were launched around the same time.
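As an aside for the technically minded, here is a minimal sketch (in Python, not any vendor's actual implementation) of how a frame relay switch could police a PVC against these two rates: traffic within CIR is forwarded as guaranteed, traffic between CIR and CIR+EIR is forwarded but marked Discard Eligible (DE), and anything beyond that is dropped. The class and parameter names are invented for illustration.

```python
# Minimal sketch of dual-rate policing on a frame relay PVC.
# Frames within the committed rate pass unmarked; frames within the
# excess rate are marked Discard Eligible (DE); the rest are dropped.

class PvcPolicer:
    def __init__(self, cir_bps, eir_bps, tc=1.0):
        self.bc = cir_bps * tc / 8.0   # committed burst (bytes) per interval Tc
        self.be = eir_bps * tc / 8.0   # excess burst (bytes) per interval Tc
        self.committed = self.bc       # tokens left in the current interval
        self.excess = self.be

    def new_interval(self):
        """Refill both buckets at the start of each measurement interval Tc."""
        self.committed, self.excess = self.bc, self.be

    def classify(self, frame_bytes):
        if frame_bytes <= self.committed:
            self.committed -= frame_bytes
            return "forward"            # within CIR: guaranteed
        if frame_bytes <= self.excess:
            self.excess -= frame_bytes
            return "forward, DE=1"      # within EIR: forwarded, discard eligible
        return "drop"                   # beyond CIR + EIR

# 'guaranteed' (CIR>0, EIR=0) and 'bandwidth on demand' (CIR>0, EIR>0)
# fall out of the same logic just by choosing the two rates.
policer = PvcPolicer(cir_bps=64_000, eir_bps=64_000)
for size in (4000, 4000, 4000):
    print(policer.classify(size))
```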

So frame relay did not suffer from the same lack of QoS as IP, but it still got steamrollered by the IP and MPLS bandwagon. Also, unlike IP, because frame relay was based on telecommunications standards defined by the ITU, inter-connect standards existed. Because of this, carriers did inter-connect frame relay services, enabling the deployment of multi-carrier WANs. Again, this was, and is, a weakness in MPLS, as described in MPLS and the limitations of the Internet. It's a strange world sometimes!

One of the principal reasons for frame relay's downfall lay with its associated bearer service, ATM. In The demise of ATM I talked about how ATM was extinguished by IP, and the direct consequence of this was that the management boards of carriers were unwilling to invest any more in ATM-based infrastructure. Thus FR became tarred with the same brush and bracketed as a legacy service as well.

The second reason was the rise of MPLS and its associated wide area service, MPLS based IP-VPNs. Carriers wanted to reduce infrastructure CAPEX and OPEX by focusing on IP and MPLS based infrastructures only.

[Figure: Traffic growth by protocol (Source: Infonetics, Capacity Magazine, February 2007)]

Where has it all ended up? Although frame relay did not have as much religion associated with it as ATM and IP (as exemplified by the 'bellheads' and 'netheads'), it was the main data service for WANs for pretty much a decade. When MPLS IP-VPNs came along in the early years of this century, there seemed to be tremendous resistance from the carrier frame relay data network folks, who never believed for one moment that frame relay services were doomed.


We are now pretty much up to date. Carriers still have much legacy frame relay equipment installed, and frame relay services are still being supported for their customers, but frame relay can undoubtedly be classed as a legacy service.

As with ATM and Ethernet, many carriers are migrating their legacy data services to being carried over their core MPLS networks, encapsulated in 'tunnels'. Thus frame relay could be considered just an access protocol in this new IP/MPLS world, supported while customers are still using it.

Any Transport over MPLS (AToM) is what it is all about these days. Ethernet, frame relay, IP and even ATM services are all being carried over MPLS-based core networks.
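For the curious, here is a conceptual sketch (Python, purely illustrative, not how any router implements it) of the AToM idea: the frame relay frame becomes an opaque payload, an inner pseudowire label identifies the emulated circuit at the far edge, and an outer tunnel label carries the packet across the MPLS core. The label values and sample frame bytes are invented; real deployments also add a control word and signalling.

```python
import struct

def mpls_label(label, exp=0, bottom=False, ttl=64):
    """Pack one 32-bit MPLS label stack entry (label/EXP/S/TTL layout)."""
    value = (label << 12) | (exp << 9) | (int(bottom) << 8) | ttl
    return struct.pack("!I", value)

def encapsulate(fr_frame: bytes, tunnel_label: int, pw_label: int) -> bytes:
    stack = mpls_label(tunnel_label)            # outer: path across the core
    stack += mpls_label(pw_label, bottom=True)  # inner: identifies the PVC
    return stack + fr_frame

# An invented frame relay frame (header + payload) riding the pseudowire.
packet = encapsulate(b"\x18\x41hello", tunnel_label=1001, pw_label=2002)
print(packet.hex())
```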

As an access service, and because of its previous popularity with enterprises, frame relay will live on, but it has been several years since there was any significant investment in frame relay service infrastructure at the majority of carriers around the world.

Other services that did not survive the IP and MPLS onslaught in the same time-frame were Switched Multi-megabit Data Service (SMDS) and the ill-fated Novell Network Connect Services (NCS) – more on this in a future post.

One protocol that did not suffer, but actually flourished, was Ethernet, whose genesis, like IP's, lay in local area networks. Indeed, it could be said that LAN protocols have won the battles and wars with telecommunications-industry-defined standards.


#1 My 1993 predictions for 2003 – hah!

February 27, 2007

#1 Traditional Telephony: Advanced Services

Way back in 1993 I wrote a paper entitled Vision 2003 that dared to try to predict what the telecommunications industry would look like ten years into the future. I looked at ten core issues; telephony services were the first. I thought it might be fun to take a look at how much I got right and how much I got wrong! This is a cut-down of the original version, and I'll mark the things I got right in green and the things I got wrong in red.

Caveats: Although it is stating the obvious, it should be remembered that nobody knows the future and, even though we have used as many attributable external market forecasts from reputable market research companies as possible to size the opportunities, they, in effect, know no more than we do. These forecasts should not be considered quantitative in the strictest sense of the word, but rather qualitative in nature. Also, there is very little by way of available forecasts out to the year 2003, and certainly even fewer that talk about network revenues. You only need look back to 1983 and ask whether the phenomenal changes that happened in the computer industry were forecast, to see that forecasting is a dangerous game.

Well I had to protect myself didn’t I?

As far as users are concerned, the differentiation between fixed and cellular networks will have completely disappeared and both will appear as a seamless whole. Although there will still be a small percentage of the market using the classical two-part telephone, most customers will be using a portable phone for most of their calls. Data and video services, as well as POTS, will be key business and residential services. Voice and computer capability will be integrated together and delivered in a variety of forms such as interactive TVs, smart phones, PCs, and PDAs. The use of fixed networks is cheap, so portable phones will automatically switch across to cordless terminals in the home or PABXs in the office to access the broadband services that cannot be delivered by wireless.

A good call on the dominance of mobile phones. (It's quaint that I called them "portable phones"; I guess I was thinking of the house-brick-sized phones of that era.) The convergence of mobile and fixed phones still eludes us even in 2007 – now that really is amazing!

Network operators have deployed intelligent networks on a network-wide basis and utilise SDH together with coherent network-wide software management to maximise quality of service and minimise cost. As all operators have employed this technology, prices are still dropping and price is still the principal differentiator on core telephony services. Most individuals have access to advanced services such as CENTREX and network based electronic secretaries that were only previously available to large organisations in the early 1990s. Because of severe competition, most services are now designed for, and delivered to, the individual rather than being targeted at the large company. All operators are in charge of their own destiny and develop service application software in-house, rather than buying it from 3rd party switch vendors.

A real mixed bag here, I think. I was certainly right about the dominance of the mobile phone but way out about operators all developing their own service application software. I rabbited on for several pages about Intelligent Networks (IN) bringing the power of software to the network. This certainly happened, but it didn't lead to the plethora of new services that were all the rage at the time – electronic secretary services and the like. What we really saw was a phenomenal rise in annoying services based on Automatic Call Distribution (ACD) – "Press 1 for…" then "Press n for…" – so loved by CFOs.

Customers, whether in the office or at home, will be sending significant amounts of data around the public networks. Simple voice data streams will have disappeared to be replaced with integrated data, signalling, and voice. Video telephony is taken for granted and a significant number of individuals use this by choice. There is no cost differential between voice and video.

All public network operators are involved in many joint ventures delivering services, products, software, information and entertainment services that were unimagined in 1993.

Tongue in cheek, I'm going to claim that I got a good hit predicting Next Generation Networks that integrate services on a single network. Wouldn't it have been great if I had predicted that it would all be based on IP? It was a bit too early for that at the time. Wow, did I get video telephony wrong! This product sector has not even started, let alone taken off.

What I really did not see at all, because it was way too far into the future, was free VoIP telephony services, as discussed in Are voice (profits) history?

Next: #2 Integration of Information and the Importance of Data and Multimedia


Joost’s beta – first impressions

February 26, 2007

I have received a beta account for Joost at long last. Joost is the new initiative from Niklas Zennstrom and Janus Friis of Skype fame.

I can’t describe the service any better than the way they do, so here it is:

“Joost™ is a new way of watching TV on the internet, which uses new and established technologies to provide the best of both the internet and TV worlds. We’re in the process of making it as TV-like as we can, with programmes, channels and adverts. You can also see some things that we think will enhance the TV experience: searching for programmes and channels, for example, as well as social features like chat. There are many more new features to come!”

Here are a few screens from the service itself as running on my XP machine.


This is the Preferences menu, which seems to be mainly concerned with time-related aspects of the user interface. What is of most interest to me – the connection options – I could not find at all! It may have been just me!

Clicking on the My Channels icon on the left-hand side of the screen brings up my favourite channels menu, which you can scroll up and down with your mouse. I noticed some stuttering of the sound when I moved my mouse around. This is a new machine, so I'm not sure what causes it. It feels like a typical Windows problem with streaming services.

I think this could be caused by the focus being on another application when you are running Joost as a background window.

When you want to add a channel, you just click the option and up pops the selection menu, which overlays the video. As this is created by the downloaded Joost client, all these option selections present themselves promptly, with zero hesitation. Not being network-based is a surprise, but it is the right way to go, I think.

It’s possible to bring up a featured channel catalog to see what’s new. Interestingly, much of the content I’ve brought up so far is from the USA, and certainly the spelling is US – as can be seen in the spelling of the word ‘catalog’. It’s still a beta, I guess, but I would need more relevant content to attract me to use the service extensively.

This is the Indy channel, shown in windowed mode. A click on a button takes it to full screen. Again, I haven’t examined all the content, but the quality of the video on a full-size screen is inadequate – it certainly seems to be less than 2mbit/s and is not particularly good to look at close up, as you can see in the picture.

In fact, it seems fairly typical of most streamed video that you come across on the net.

My first impressions so far are:

  • the look and feel of the channel selection menus is really great and a pleasure to use.
  • I was disappointed, as I expected to be able to get a better-quality video stream from Joost than from other streaming services, but this doesn’t seem to be the case. I don’t know whether all the channels are encoded at the same data rate and use the same compression algorithm, but if so, it will limit the time I spend with the service.
  • Joost did lose its connection with the server several times while I was fiddling around – I don’t know where the beta server is located. As I mentioned in my recent post MPLS and the limitations of the Internet, if I sat down to watch a film and this happened, I would not find it acceptable.
  • In the preferences menu, I could not find anything to do with setting up the Internet connection at all. As a consumer service this may be OK, but I find it disconcerting not to be able to manage the basics of the connection myself. Maybe I would want to restrict the bandwidth Joost uses if I had a limited-bandwidth connection (see the sketch after this list). The corollary is that Joost always takes as much bandwidth from your connection as it needs. If so, that’s a worry, and a worry for businesses as well.
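As promised above, here is a minimal sketch of the kind of bandwidth cap I was hoping to find in the preferences menu – a simple token bucket that stops an application taking more than a set rate from the connection. This is purely illustrative; I have no idea how Joost manages its bandwidth internally.

```python
import time

# Minimal token-bucket bandwidth cap: an application that calls wait_for()
# before each send can never exceed `rate_bps` averaged over time.

class BandwidthCap:
    def __init__(self, rate_bps):
        self.rate = rate_bps / 8.0      # bytes per second
        self.tokens = 0.0
        self.last = time.monotonic()

    def wait_for(self, nbytes):
        """Block until `nbytes` may be sent without exceeding the cap."""
        while True:
            now = time.monotonic()
            # Accrue tokens since the last call, up to one second's worth.
            self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            time.sleep((nbytes - self.tokens) / self.rate)

cap = BandwidthCap(rate_bps=1_000_000)    # cap a stream at 1 mbit/s
for chunk in range(5):
    cap.wait_for(16_384)                  # each 16 KB chunk waits its turn
    # socket.send(...) would go here
```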

I’ll try to do some more tests over the next few days to get a better feel. By the way, as I type this post up, if I type fast enough, the Joost application stalls until I stop.

Addendum #1:

Using Windows Task Manager, Joost seems to be delivering its service to me at 1mbit/s, which is about what I would have guessed from the quality. As I have an 8mbit/s connection, it would have been nice to take delivery at a higher rate so that it would be possible to watch National Geographic at full screen.
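For anyone who wants to repeat the measurement without staring at Task Manager, here is a rough sketch that samples the interface byte counters in the same way. It assumes the third-party psutil Python library and that the stream is the dominant traffic on the machine.

```python
import time
import psutil  # third-party library; exposes the same counters Task Manager shows

def downlink_mbit_per_s(seconds=5):
    """Estimate inbound throughput by sampling total received bytes."""
    before = psutil.net_io_counters().bytes_recv
    time.sleep(seconds)
    after = psutil.net_io_counters().bytes_recv
    return (after - before) * 8 / seconds / 1_000_000

print(f"~{downlink_mbit_per_s():.1f} mbit/s inbound")
```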

Oh well, back to the TV!

Addendum #2: A recent Silicon.com post articulated the challenge that could face the Internet with the advent of IPTV quite well:

Ferguson also suggested new, bandwidth-intensive services such as IPTV have the potential to cause even slower broadband speeds. “A provider only needs around 1.5 per cent of users to make use of, say, the BT Vision product at the same time to use up all the capacity,” he said.

He added: “The UK does not yet have the infrastructure to support millions watching EastEnders via broadband at the same time,” because of the high cost of retaining spare network capacity for times when online traffic is high.

So true…


3D heaven – Google’s SketchUp

February 23, 2007

After going on about the pernicious nature of free voice services in Are voice (profits) history?, I’m about to sing the praises of some free software, and as this free software comes from the Google stable, I guess this is acceptable! Of course, you can upgrade to the professional version…

SketchUp is an excellent 3D modelling tool that allows you to create 3D images of any complexity with relative simplicity and ease. I say “relative simplicity” because it takes some time to learn how to use it, as it is SO different from any other program. On one hand, it is very intuitive to use; on the other, it can be very frustrating while you are learning a completely new way to use your mouse and its associated key clicks.

I’m modelling a house extension at the moment and the drawing on my left took me just a few hours to complete.

SketchUp was developed by the start-up company @Last Software, which was formed in 1999 and acquired by Google in March 2006. Looking on the Internet, it seems the feature that attracted the purchase was a plug-in that allowed users to place their 3D architectural creations on Google Earth maps – ideal for town planners!

Watch this video for a quick overview of SketchUp.

SketchUp is stuffed full of features, there is lots of help and tutorial material to get you going, and it really is quite fun to use. There are literally hundreds of different materials to select from when you fill a surface, and you can download lots of pre-crafted objects to help you along the way.

It fully understands perspective, and when you move objects, they get bigger as they get nearer to you – try doing that in PowerPoint! You can even use PhotoMatch to import a photo and match the perspective of your drawing to that of the photo, so that you can superimpose it on the photo in a realistic way.

I could go on for ages, but you should download it and try it for yourself – you will have hours of fun and gain another useful tool to support your creative zeal.


Are voice (profits) history?

February 22, 2007

‘Free’ can sometimes be a pernicious word and is often the antithesis of real business.

It lies at the heart of the web 2.0 philosophy, where it is often assumed that revenue will not be generated from the core service a company provides, but rather derived from ancillary or attached activities such as advertising. A service needs to attract millions of customers to generate sufficient revenue from low click-through rates, and to achieve these high subscriber numbers the service needs to be free. There is an Alice in Wonderland type of logic in play here sometimes.
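A back-of-envelope calculation shows why. All the numbers below are invented for illustration, but the orders of magnitude are the point: at typical low click-through rates, even millions of users generate surprisingly modest advertising revenue.

```python
# Illustrative (invented) figures for an ad-funded 'free' service:
# tiny per-user revenue only adds up at enormous scale.

users = 5_000_000
pageviews_per_user_month = 40
click_through_rate = 0.002           # 0.2% of ad impressions clicked
revenue_per_click = 0.10             # $ earned per click

monthly_revenue = (users * pageviews_per_user_month
                   * click_through_rate * revenue_per_click)
print(f"${monthly_revenue:,.0f} per month")   # $40,000 - from 5 million users!
```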

It feels very peculiar to see that voice, the kingpin of multimedia (which to me means multiple media: voice, video and information), is currently experiencing a complete destruction of financial value. We all know that IP services are generally not profitable today due to low revenues and high costs, but I do wonder why everyone is jumping on this bandwagon and assuming that zero-revenue voice is what the future is all about. This is not just a fixed wire-line issue; it will roll over into the mobile world as well.

Even though VoIP is just a technology, it has carried the tag of being a particularly disruptive one from the very beginning and was strongly resisted by many within the telecommunications industry for years. We are now seeing some rather radical voice initiatives from the former telecommunications monopolies.

I read with interest the other day about AT&T’s Unity plan and wonder what the future holds. According to their press release:

AT&T Inc. today announced an unprecedented new offer, which gives subscribers the nation’s largest unlimited free calling community, including wireless and wireline phone numbers.

The AT&T UnitySM plan, which is available beginning Sunday, Jan. 21, brings together home, business and wireless calling, creating a calling community of more than 100 million AT&T wireless and wireline phone numbers.

AT&T Unity customers can call or receive calls for free from any AT&T wireless and wireline phone numbers nationwide without incurring additional wireline usage fees or using their wireless Anytime minutes. In addition to free domestic calling to and from AT&T numbers, the AT&T Unity plan includes wireless service with unlimited night and weekend minutes, as well as a package of Anytime Minutes.

Wow! If any carrier had announced such a package a few years ago, they would have been thought rather crazy. In the 90s everyone complained about the destruction of core value by the carriers who undercut wholesale network charges (I guess this has somewhat stabilised in recent years), but this seems much worse to me.

This is a complex issue which has its foundations, I would assume, in AT&T losing significant numbers of customers and revenue to free peer-to-peer voice services such as Skype on one hand, many more to wireless operators, and still more to cable companies on the other. Most industry people believe in the mantra that the future lies in triple and even quadruple plays, as promoted by our own Virgin Media. But if you have no customers to deliver these to, then the future is going to be rather bleak!

Is this the reason why AT&T is discarding its future voice revenue? I find it hard to believe that, on one hand, every carrier is pursuing a converged IP-based Next Generation Network strategy, while the biggest multimedia service of all, voice, will not contribute to revenue flow. Is this a case of 2 + 2 not making 5 but 2?

I spent so much time in the 90s promoting VoIP technology and services, but I never expected it to be this disruptive! Let’s look on the bright side: to me, this narrowly focused strategic thinking creates opportunities for start-ups that are able to go in a different direction to the industry gestalt driving the majority of carriers.

Let’s also hope that the drive towards converged NGN networks based on reduced costs does not shrink revenues even more.


MPLS and the limitations of the Internet

February 21, 2007

In my post The rise and maturity of MPLS, I mentioned a number of challenges that face organisations wishing to deliver or manage global Wide Area Networks (WANs). Whether it be a carrier, a systems integrator, an outsourcer or a global enterprise managing a global WAN, they are all faced with one particular issue that just cannot be ignored and is quite profound in its nature. It is also one of the least understood issues in the telecommunications and Information and Communication Technology (ICT) industries today. This issue is managing end-to-end service performance where the service is being delivered over multiple carrier networks, as exemplified in WANs.

The impact of this is extremely wide, affecting closed VoIP VPNs, Internet VPNs, layer-2 based VPNs and layer-3 IP-VPNs. This post will focus principally on Internet VPNs and MPLS-based layer-3 IP-VPNs, although the concepts discussed are just as applicable to layer-2 services such as the Layer 2 Tunnelling Protocol (L2TP).

The downside of the Internet

It is strange to talk about such issues in this day and age, when the Internet is all-pervasive in homes and businesses around the world. When we think of the Internet, we often think of it as a single ‘cloud’, with a web site on one side of the cloud and users accessing it from the other. That this is even possible is testament to the founders of the Internet and the resilience of the IP routing algorithms. (An excellent book that pragmatically goes through the origins of the Internet is Where Wizards Stay Up Late – well worth a read.)

However, in reality the Internet is unable to deliver many types of services that individuals and enterprises need at an acceptable level of performance. Why should this be so?

At an abstract level, this perception of the Internet as a single cloud is correct, but at the network level the reality is somewhat different.

As the name implies – inter and net – the Internet is built from hundreds of thousands of individual networks known as Autonomous Systems (AS) connected together in a hierarchy. If you go to An Atlas of Cyberspace or The Internet Mapping Project you can see this drawn in quite an artistic way. The hierarchy is made up of major tier-1 carriers such as Level3, who provide the inter-continental backbones, connecting to regional or country carriers such as BT who, in turn, connect to small local ISPs. Consumers and enterprises can connect to the cloud at any level of the hierarchy, depending on their scale and how deep their pockets are.

Each carrier uses a standard IP routing protocol such as Open Shortest Path First (OSPF) internally within their ‘domain’ and the Border Gateway Protocol (BGP) to inter-connect domains, thus creating a highly resilient network. In fact, providers of geographic components of the Internet come and go on a frequent basis with little disruption to the Internet as a whole (though of course this is a pain to us as individuals if we happen to be one of their customers!).
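For readers who like to see the machinery, OSPF’s route computation inside a single domain is Dijkstra’s ‘shortest path first’ algorithm over link costs. Here is a toy sketch with an invented four-router topology; BGP, by contrast, selects paths between domains by policy rather than cost.

```python
import heapq

# Toy 'shortest path first' (Dijkstra) computation of the kind OSPF runs
# inside one carrier's domain. Routers and link costs are invented.
topology = {
    "A": {"B": 10, "C": 5},
    "B": {"A": 10, "D": 1},
    "C": {"A": 5, "D": 20},
    "D": {"B": 1, "C": 20},
}

def spf(source):
    """Return the lowest total cost from `source` to every other router."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry
        for neighbour, cost in topology[node].items():
            if d + cost < dist.get(neighbour, float("inf")):
                dist[neighbour] = d + cost
                heapq.heappush(heap, (d + cost, neighbour))
    return dist

print(spf("A"))   # {'A': 0, 'C': 5, 'B': 10, 'D': 11}
```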

So what could possibly be wrong with this? It comes down to a question of predictable end-to-end performance, or rather the lack of it. Think of one of the red dots in such a map as your local broadband DSL provider (e.g. ZEN), connecting to your local incumbent carrier who provides the copper connection to your house or fibre into your business premises (e.g. BT), connecting in turn via one of the global carriers to the USA (e.g. Level3). In practice, you can end up transiting 60 to 80 separate routers and 40 different networks going to and from the web site you are accessing, as shown in the path from my PC to www.cisco.com on the West Coast of the USA using Ping Plotter.

Getting back to technology, it may be that every one of these carriers has deployed MPLS within their own so-called private network (a bit of a misnomer, as they are carrying public traffic) that carries Internet traffic to and from their customers’ houses or business premises. This enables them to better manage Quality of Service while that traffic is on their network – but once packets leave their network, winding their way to and from the web site server, they have no control over them at all. On the Internet, although there are supervisory bodies looking after such things as standards (IETF) and domain registration (ICANN), there is no one in control of end-to-end performance.

If a particular path becomes congested for any reason – under-powered routers at a cash-strapped carrier, insufficient bandwidth to support the number of customers signed up after a successful advertising campaign, or any number of other causes – then packets get put into queues, and you may experience unpredictable performance, high latency, or even a cessation of the connection when the link times out.
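A small worked example shows why queueing makes performance so unpredictable. In the textbook M/M/1 queueing model, the mean time a packet spends at a router grows as 1/(1 − utilisation), so delay explodes as a link approaches full load. The link speed and packet size below are illustrative.

```python
# Why congestion is so non-linear: in the M/M/1 model the mean per-hop
# delay is service_time / (1 - utilisation), so a link at 95% load is an
# order of magnitude worse than one at 50%, not merely 'a bit slower'.
# Assumes 1500-byte packets on a 10 mbit/s link.

service_time_ms = 1500 * 8 / 10_000_000 * 1000   # time to serialise one packet

for utilisation in (0.5, 0.8, 0.9, 0.95, 0.99):
    delay_ms = service_time_ms / (1 - utilisation)
    print(f"{utilisation:4.0%} load -> {delay_ms:7.1f} ms per hop")
```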

Business use of the Internet

Many companies use IPsec-based Internet VPNs to inter-connect their office sites quite effectively, as they represent a low-cost solution. However, for larger enterprises in most situations they provide too unpredictable and unreliable a service for use as a WAN replacement.

Unpredictable performance may be acceptable for consumer browsing of the Internet and for what are known as ‘store and forward’ services such as email, but it is a real killer for real-time and other services that must have guaranteed and predictable end-to-end performance to function effectively. Here are some examples of real-time services:

Real-time interactive services: Many of us have experienced the break-up of a Voice over IP call when using services such as Skype on the Internet. One minute the conversation is going well, and the next you experience weird distortion and ringing that makes the person you are talking to unintelligible. This is due to packet loss and the VoIP software trying to interpolate across the gaps left by the lost packets. Would you be prepared to accept this if you were having a critical sales discussion with a potential customer or investor? Of course not. Would you ever be prepared to accept this on a fixed landline? No. If you use the Public Switched Telephone Network, there are quality standards in place to prevent this.
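That ‘weird distortion’ is the sound of concealment. Here is a crude sketch of the idea – real codecs use far more sophisticated techniques than the linear interpolation shown, but the principle of fabricating the missing audio is the same.

```python
# Crude packet-loss concealment: a live call cannot wait for a retransmit,
# so the receiver fabricates the lost frame, here by linearly interpolating
# between the samples either side of the gap. With clustered losses even
# clever concealment sounds odd - the distortion described above.

def conceal(frames):
    """frames: list of per-packet sample lists, None where a packet was lost."""
    out = []
    for i, frame in enumerate(frames):
        if frame is not None:
            out.extend(frame)
            continue
        prev = frames[i - 1][-1] if i > 0 and frames[i - 1] else 0
        nxt = frames[i + 1][0] if i + 1 < len(frames) and frames[i + 1] else 0
        n = len(frames[i - 1]) if i > 0 and frames[i - 1] else 8
        out.extend(prev + (nxt - prev) * (j + 1) // (n + 1) for j in range(n))
    return out

print(conceal([[0, 10, 20, 30], None, [70, 80, 90, 100]]))
# -> [0, 10, 20, 30, 38, 46, 54, 62, 70, 80, 90, 100]
```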

Ironically, we DO seem to be prepared to accept this on mobile or cell phones, where quality can sometimes be abysmal, because we can use the phone anywhere. More on this in a future post.

One other aspect that affects intelligibility and the free conversational interplay of voice calls is delay. Once delays get above about 180 ms they become very noticeable and start to make conversations awkward. Combine this with packet loss and voice conversations become very difficult.
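To see how quickly that 180 ms budget gets eaten, here is an illustrative one-way delay budget for a trans-Atlantic VoIP call. Every figure is invented but plausible; only the roughly 5 µs/km propagation speed of light in fibre is a physical fact.

```python
# Illustrative (invented) one-way delay budget for a trans-Atlantic call,
# showing how quickly the ~180 ms comfort threshold is consumed.

budget_ms = {
    "codec frame + packetisation":        30,
    "sender OS / scheduling":             10,
    "propagation (~6000 km fibre)":       30,   # ~5 us per km in glass
    "per-hop queueing (20 hops @ 2 ms)":  40,
    "receiver jitter buffer":             40,
}

total = sum(budget_ms.values())
for item, ms in budget_ms.items():
    print(f"  {item:36s} {ms:3d} ms")
print(f"total one-way delay: {total} ms of a ~180 ms budget")
```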

Another service that faces the same issue, but in a compounded way, is video conferencing, where both voice and video are disrupted. Poor Internet performance killed many video conferencing start-ups in the late 90s.

Streamed but non-interruptible services: The prime example of this is video streaming on the Internet, where loss of packets can be catastrophic – delay is of less importance. This may be acceptable when looking at short snippets, as exemplified by the content of YouTube, but if you want to immerse yourself in a two-hour psychological thriller and ‘suspend disbelief’, then packet loss or drop-outs are a killer.

This has been raised by many people when considering the new global video service Joost recently announced by the founders of Skype. Is the quality of today’s Internet really adequate to support this type of service? I guess we will find out.

Client-server, thin client and web applications: Although client/server terminology seems a little dated and is being replaced by web-based services as exemplified by salesforce.com, they all have the same high performance needs no matter where the application server resides.

If key-press commands get ‘lost’ or edited text ‘disappears’ due to packet loss, the service will soon become unusable and the company providing it buried under a snowstorm of complaints.

The main take-away from the above is that the Internet cannot really be relied upon for real-time or client-server based services if a predictable service is mandatory. This is clearly the case for the majority of business services, where end-to-end predictable and guaranteed performance is an absolute need, covered by tight Service Level Agreements (SLAs) with network or service providers.
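One way to see why multi-carrier SLAs are so hard to write is that per-carrier metrics compose: availabilities multiply, loss compounds and delay adds. A sketch with invented but individually respectable per-carrier figures:

```python
# Three respectable per-carrier SLAs make a much weaker end-to-end promise.
# All figures are illustrative only.

carriers = [
    {"name": "access ISP",    "availability": 0.999,  "loss": 0.002, "delay_ms": 15},
    {"name": "tier-1 core",   "availability": 0.9995, "loss": 0.001, "delay_ms": 45},
    {"name": "far-end telco", "availability": 0.999,  "loss": 0.002, "delay_ms": 20},
]

availability, delivery, delay = 1.0, 1.0, 0
for c in carriers:
    availability *= c["availability"]   # all segments must be up
    delivery *= 1 - c["loss"]           # a packet must survive every segment
    delay += c["delay_ms"]              # latencies simply add

print(f"end-to-end availability: {availability:.4%}")   # ~99.75%
print(f"end-to-end packet loss:  {1 - delivery:.3%}")   # ~0.5%
print(f"end-to-end delay:        {delay} ms")
```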

Carriers’ private IP networks

So how do carriers provide reliable IP data services to their business customers? They use MPLS to segment traffic on their networks into two strands:

(a) Label Switched Paths (LSPs) dedicated to Internet traffic from their own customers or traffic transiting their network. Carriers want to get this traffic off their network and pass the burden to another carrier as quickly as possible to reduce costs and risk. They use a routing policy called hot potato routing to ensure this happens as quickly as possible for traffic that does not originate from their customers.

(b) LSPs dedicated to providing VoIP, video and IP-VPN services with SLAs to their business customers. This traffic is kept on their network for as long as possible – this is called cold potato routing. (Both policies are sketched below.)
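A toy illustration of the difference between the two policies, with invented distances: hot potato picks the exit nearest the packet’s ingress, minimising cost on the carrier’s own network, while cold potato picks the exit nearest the destination, maximising the distance over which the carrier controls quality.

```python
# Toy egress selection for a carrier with two peering points. Distances
# are invented for illustration.

peering_points = {
    "London":   {"from_ingress_km": 10,   "to_destination_km": 8000},
    "New York": {"from_ingress_km": 5600, "to_destination_km": 400},
}

def pick_egress(policy):
    key = "from_ingress_km" if policy == "hot" else "to_destination_km"
    return min(peering_points, key=lambda p: peering_points[p][key])

print("hot potato  ->", pick_egress("hot"))    # London: off my network fast
print("cold potato ->", pick_egress("cold"))   # New York: stay on-net longer
```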

In general, carriers are only able to provide IP performance guarantees for packets on their own network, i.e. only for users and business locations directly connected to it. The reason is that carriers are generally not able, or not willing, to physically interconnect their networks with other carriers to provide seamless, performance-guaranteed services that straddle multiple networks. In general, each carrier stands alone in isolation. Why should this be so in the age of the ubiquitous Internet? Therein lies the big elephant in the room! There are three main reasons:


  1. Each carrier defines Class of Service in a different way, as this is not explicitly covered in IETF standards, so it is not easy to interconnect and maintain the same level of priority without translation (see the sketch after this list). The easy way of doing this is not to translate at all, but simply to place back-to-back two routers that terminate both carriers’ IP-VPNs; several carriers and network integrators use this approach.
  2. Each carrier has adopted an entirely different set of Operations Support Software (OSS) tools, making interconnection with other carriers to exchange performance data exceedingly challenging. OSS systems are usually a mixture of internally developed proprietary tools and bought-in 3rd-party products. (This is a major issue that affects all IP services in a big way and is not seen in the PSTN world, where inter-connect standards defined by the ITU exist.)
  3. Carriers are generally unwilling to provide SLAs on behalf of other carriers.
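As flagged in point 1, here is a sketch of the class-of-service translation an interconnect would need. The carrier class names and mappings are invented; the DSCP code points (EF = 46, AF31 = 26, best effort = 0) are standard IETF values.

```python
# Sketch of CoS translation at an IP-VPN interconnect: each carrier names
# and marks its classes differently, so priorities must be mapped.

CARRIER_A = {"realtime": 46, "business": 26, "standard": 0}   # class -> DSCP
CARRIER_B_BY_DSCP = {46: "voice", 34: "video", 26: "gold", 0: "bronze"}

def translate(carrier_a_class):
    dscp = CARRIER_A[carrier_a_class]
    # Fall back to best effort if the partner has no equivalent class --
    # exactly the kind of silent downgrade that makes multi-carrier SLAs hard.
    return CARRIER_B_BY_DSCP.get(dscp, "bronze")

for cls in CARRIER_A:
    print(f"carrier A '{cls}' -> carrier B '{translate(cls)}'")
```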

Note: I would like to make it clear that this is not always the case; there are several large carriers and integrators who have proactively followed a strategy of IP-VPN interconnect with partners to better support their customers or extend the reach of their network – Cable and Wireless, Global Crossing and Vanco (with their MPLS Matrix), to name but three.

So, if you are a small-to-medium UK enterprise (SME) and you are able to connect all your offices and home workers to your national incumbent carrier, then they would be prepared to provide you with an SLA. If performance dropped below that specified in the SLA, you would be able to claim compensation from the provider.

However, if you are a multinational enterprise with sites located in many countries, you need to work with many carriers. In general, there is no way today that any of those carriers could provide you with end-to-end performance guarantees (there is no such thing as a truly global carrier). So what are the alternatives for managing enterprise multi-carrier VoIP services, MPLS IP-VPNs or layer-2 VPNs in 2007?

  1. As an enterprise, manage the integration of multiple carriers yourself.
  2. Go to a systems integrator, outsourcer or carrier who will act as a prime contractor, holding a single SLA with you. They will back this up with multiple SLAs with each carrier providing a geographic component of the WAN.

Round up

To address IP QoS issues on their own networks, MPLS has been rolled out by the majority of carriers around the world, and most offer SLA-based IP-VPNs to their business customers as one of the ways they can create WANs. It is generally perceived that IP-VPNs represent a lower-cost solution than legacy services such as frame relay. Of course, enterprises can avoid reliance on carriers altogether by just buying bandwidth in the form of TDM E1 circuits and managing the IP WAN themselves, as per (1) above.

There is still no universal business Internet that mirrors the public Internet, so if a company requires end-to-end performance guarantees to support its client-server database or VoIP service, it will need to manage multiple carriers itself or go to an integrator who is willing to act as a prime contractor, as described above.

If you are not a network person, this may all seem quite strange and will make you wonder why the global telecommunications industry has not got its act together to better support its enterprise customers, who all require this capability and spend hundreds of billions of dollars annually. It would seem to be one of the biggest, if not the biggest, commercial opportunities in telecoms today.

One technology company that is addressing this end-to-end solution delivery issue is Nexagent, and I will talk about them in a post shortly (now posted as Nexagent, an enigmatic company?). I should declare a personal interest in Nexagent, as a co-founder in 2000.


Will I live through browser incompatibilities?

February 20, 2007

Living in the 21st century can be very trying sometimes – or is it just me? After posting about my PC problems last week, I’ve found that I’ve been plagued by numerous other aspects of 21st-century living as well. No, I’m not talking about those people who use mobiles in quiet railway carriages, but the age-old problem of browser incompatibilities.

Nor am I talking about the bad rendering of HTML on a particular browser, but the much more pernicious writing of services that will only work on a particular browser.

Nobody would particularly mind a new service having problems; you would just swear and move on! But to come across a service that has had millions in public money spent on it is another matter.

I’m talking here about the UK’s National Health Service (NHS). They have introduced a service that allows patients to book their own appointments. On the surface this sounds great, but the fact that it only works with Internet Explorer is not brilliant by any means!


All I wanted to do was to make a hospital appointment using Firefox…

Another example was when I thought about early retirement a couple of years ago – if only! Great, I thought, you can do it online – but Microsoft Visual Basic inevitably decided otherwise!


To round things off, remember the glasses I broke last week during my ‘PC problem’ week? I went for an eye test at Boots this morning to get them replaced. All was going swimmingly until I discovered that they had just installed a new IBM system based on – guess what? – Windows. I thought I might crack a joke to the receptionist, but I held back. Then, right at the end of the consultation, up popped the usual little Windows-style error message: Abort! Well, the system did just that and dropped out, losing all the information about my prescription that had just been entered.

I promise to return to normal service soon, talking about the positive aspects of technology, and stop being a grumpy old man. Maybe I should swap to using an Apple (everyone else in my house uses one), but what about the rest of the world?


