The rise and maturity of MPLS

February 14, 2007

In Networks Part 2: The phenomenon of Ipsilon, it was mentioned that one of the key strengths of IP routing was its resilience. If a particular node broke, then packets would find an alternative route. However, this strength also created a weakness in that individual packets in a stream could arrive at their destination following different paths through the network, thus creating additional delay or latency while the packets were reassembled in the correct order.

This is what happens in connectionless IP networks such as the Internet. In practice, this creates unpredictable performance in congested networks, as anyone who uses the Internet experiences each and every day. MPLS was the white knight that came along to save the world from this fate.

When the future of ATM and IP was becoming clear back in 1997, I wrote: The time was never better for the introduction of MPLS.

  • IP is all dominant – ATM relegated to transmission.
  • Multiple media integration will take place at IP layer NOT at the ATM layer – video, data, image, text and voice.
  • There are no NATIVE ATM services beyond ‘cheap bandwidth’ cell-relay services.
  • IP is the ‘killer application’ for ATM (if there is one!)
  • Customers are not asking for ATM, they are asking for IP
  • The advent of ‘label switching’ has changed the nature of WAN ATM network architecture for ever.
  • The time was never better for the introduction of MPLS.

So what is MPLS?

The advent of Multi-Protocol Label Switching (MPLS) defined a mechanism for packet forwarding in routers and enabled the rollout of connection-oriented networks. Connection-oriented data networks emulated the way that paths were set up in the Public Switched Telephone Network (PSTN). This involves setting up a path prior to sending packets across the network.

Using labels added to packet headers, MPLS uses IP addresses to identify end points and intermediate routers on the path from source to destination. This makes MPLS networks IP-compatible and easily integrated with traditional IP networks. However, packets are routed along pre-configured Label Switched Paths (LSPs) set up using signalling protocols such as LDP, RSVP-TE or CR-LDP, thus making their flow both predictable and manageable.

Showing an MPLS tunnel (LSP) passing transparently through Router ‘B’

When a label-headed packet arrives at a router, the router uses this label to identify the LSP. It then looks up the LSP in its own forwarding table to determine the link over which to forward the packet, and the label to use on this next hop. A different label is used for each hop, and it is chosen by the router or switch performing the forwarding operation. This allows the use of very fast and simple forwarding engines, since the router only has to look up and swap the label rather than examine the full IP header of every packet that transits it, as a conventional router must. Ingress routers (called Label Edge Routers) at the edge of the MPLS network use the packet’s destination address to determine which LSP to use. Inside the network, the MPLS routers use only the LSP labels to forward the packet to the egress router.
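
To make the label-swap idea concrete, here is a minimal Python sketch of the per-hop forwarding step described above. The table contents, labels and interface names are invented for illustration; this is not any vendor’s implementation.

```python
# Hypothetical label forwarding table for one router:
# incoming label -> (outgoing interface, label to use on the next hop)
lfib = {
    17: ("if-1", 42),   # LSP towards one egress router
    23: ("if-2", 99),   # LSP towards another egress router
}

def forward(packet):
    """Swap the label and pick the outgoing link; no IP lookup is needed."""
    entry = lfib.get(packet["label"])
    if entry is None:
        return None                  # unknown label: drop the packet
    out_if, out_label = entry
    packet["label"] = out_label      # the label is rewritten at every hop
    return out_if

# Example: a packet arrives carrying label 17
pkt = {"label": 17, "payload": b"..."}
print(forward(pkt), pkt["label"])    # -> if-1 42
```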

If you would like a more in-depth overview of MPLS, there is no better place to go than Cisco’s web site to read their MPLS Overview.

The arrival of the concept of cut-through routing enabled a new generation of engineers and companies to focus on these issues, and they brought a new over-arching vision for this innovative technology – convergence of services over IP, euphemistically called ‘Everything over IP’. Once most vendors had jumped on the IP bandwagon, following the demise of ATM, convergence became all the rage back in the late 1990s as the next step in IP world domination. It has never looked back.

The belief in convergence

Convergence of what, you may ask? In the 90s, there was a complete separation of voice and data networks in the public service world. Voice services were carried over the highly standardised, well understood and tightly managed PSTN, while Wide Area Data Networks (WANs) were migrating towards highly efficient frame relay services (as described in 1992). The IP convergence vision forecast that all services would be converged onto a single IP-based network. Of course, there was a tremendous backlash to this idea from the traditional telecommunications engineering community who generally thought that this was all nonsense and could never work (they might still be right!). As with most things in life, both sides of the debate were correct in many areas, but everyone was wrong about the timescales that it would take to achieve convergence. It’s only now (2007) that we are seeing the convergence vision start to pan out in incumbent carrier consumer services, with the possible ‘leader’ turning out to be our very own British Telecom with their 21C initiative (or here).

The advent of cut-through routing pushed Cisco into launching Tag Switching, which morphed into Multi-Protocol Label Switching (MPLS) when taken up by the IETF Internet standardisation body. MPLS addressed a number of key issues that needed to be resolved before convergence of certain services could take place over a pure IP network. The principal one of these was Class of Service (CoS).

One of the big strengths of ATM was that it enabled service and network designers to separate traffic into different classes of service that would be treated differently if the ATM network ever became congested. The highest class of service was reserved for real-time (isochronous) services such as voice and video, which would take priority over less important store-and-forward services such as email. The modern equivalent to this is Voice over IP (VoIP) services taking precedence over Internet traffic.

CoS is replicated in MPLS by, not surprisingly, the Class of Service (CoS) feature. This enables network designers to provide differentiated types of service across an MPLS network by classifying packets at the edge of the network, before labels are assigned, using Committed Access Rate (CAR). CAR uses the type of service (TOS) bits in the IP header to steer traffic tagged with different TOS values onto different Label Switched Paths.
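
As a rough illustration of that edge classification step, here is a small Python sketch that maps the IP precedence (the top three TOS bits) onto a pre-configured LSP. The class names and label values are made up for the example and are not CAR’s actual configuration syntax.

```python
# Hypothetical mapping from IP precedence (top three TOS bits) to an LSP
TOS_TO_LSP = {
    0b101: ("real-time", 100),   # e.g. VoIP marked with a high-precedence TOS
    0b011: ("assured",   200),
}
DEFAULT_LSP = ("best-effort", 300)

def classify(ip_header):
    """Return (class name, initial LSP label) for a packet entering the MPLS cloud."""
    precedence = (ip_header["tos"] >> 5) & 0b111   # top three TOS bits = IP precedence
    return TOS_TO_LSP.get(precedence, DEFAULT_LSP)

print(classify({"tos": 0b101_00000}))   # -> ('real-time', 100)
```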

In addition to TOS, there is another feature that carriers can deploy on top of basic MPLS to ensure QoS in times of congestion – Differentiated Services, or DiffServ.

On a normal router, when more packets are sent to a port than it can handle, the queue fills up and excess packets are discarded. This causes jitter, delay and misbehaviour in your application software.

If the carrier uses the same categorisation of services as defined for ATM – best-effort, assured and real-time – multiple queues can be set up on each port to ensure that these three classes of service are treated differently in times of congestion. There are two basic queuing/scheduling mechanisms: Weighted Round Robin (WRR) and Strict Priority. The Strict Priority queue is used for real-time traffic, while WRR shares the remaining capacity between assured and best-effort traffic.
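
A toy Python sketch of those two mechanisms working together might look like the following. The queue names and the 3:1 weighting are purely illustrative, not taken from any product.

```python
from collections import deque

queues = {"real-time": deque(), "assured": deque(), "best-effort": deque()}
wrr_weights = [("assured", 3), ("best-effort", 1)]   # 3:1 split of leftover capacity

def serve_one_cycle():
    """Return the packets transmitted in one scheduling cycle."""
    sent = []
    # Strict Priority: the real-time queue is drained before anything else
    while queues["real-time"]:
        sent.append(queues["real-time"].popleft())
    # Weighted Round Robin: each remaining class may send up to its weight per cycle
    for name, weight in wrr_weights:
        for _ in range(weight):
            if queues[name]:
                sent.append(queues[name].popleft())
    return sent

queues["assured"].extend(["a1", "a2"])
queues["real-time"].append("v1")
print(serve_one_cycle())   # -> ['v1', 'a1', 'a2']
```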

The service uses of MPLS

At the end of the day, the use of technology for technology’s sake is never justified, so just what was it that drove the deployment of MPLS?

(a) Voice over IP (VoIP): The first real deployment of MPLS was not in customer-facing services, but was buried in the core of carriers’ networks to carry voice traffic. Before the advent of IP networks, 100% of voice traffic was carried over traditional PSTN networks and was subject to a strong technical and commercial regime defined by the ITU. Carriers needed to agree to use appropriate public technical and interconnect standards driven by the big incumbent monopolies. They also needed to agree commercial settlement for other carriers’ traffic transiting their networks and to pay other carriers to carry their voice traffic. Settlement was both a cost and a revenue, and in the ‘good ol’ pre-IP days’ these were usually in balance, to everyone’s benefit.

However, if international voice traffic could be buried or hidden in IP connections using VoIP, it was possible to significantly reduce transit costs AND avoid settlement out-payments at the same time! I guess the practical strategy in the growing market of the time was to siphon-off traffic growth onto IP networks to cap regulated out-payments to other carriers. “Not me Guv”, was the mantra!

This financial gain led to the wide-scale deployment of VoIP in the late 1990s but was pretty much hidden from the public arena.

Since 2002, VoIP has slowly moved from hidden deployment out to end-user customer services. Initially this took place as enterprise services but a little later in the form of low-cost VoIP international call operators. There are literally hundreds of VoIP service providers today (you can see my list on TechnologySpectra).

(b) MPLS-based IP-VPNs: Another service that drove MPLS deployment was the idea that frame relay based WANs could be replaced by IP based services. These were called IP Virtual Private Networks (IP-VPNs). When this was first mooted, data services engineers familiar with frame relay services dismissed the concept out of hand but eventually came round under pressure.

The first IP VPNs came to market using the public Internet as the transport network and used IPsec as the encryption mechanism to keep the content private; these are called Internet-VPNs. Unfortunately, they had one big problem – unpredictable performance (down to full-stop situations) due to congestion on the Internet. Things are better these days, especially if your Internet-based IP-VPN is national rather than international in nature, and they certainly provide low-cost solutions if performance is not a concern.

However, to get adequate and dependable performance, carriers needed to provide MPLS-based IP-VPNs on their own networks, using the MPLS Quality of Service (QoS) capabilities described above to deliver appropriate performance. The mechanisms were defined in a seminal IETF standard, RFC 2547, published in 1999.

One of the big issues still to be faced in 2007 is that if an IP-VPN straddles multiple carrier networks, as it must in the real world with global WANs, there are major technical and commercial obstacles to making QoS seamless across those networks. More on this in a future post.

The deployment of MPLS-based IP-VPNs has been pretty much universal over the last few years and is pushing frame relay WANs to one side (see graph at bottom of the page). However, things are not all rosy, as IP-VPNs, like all IP networks, require significantly skilled (and expensive) CCIE engineers to manage them. They are very complex to set up and manage and could not in any sense be considered to follow the Keep it simple, stupid! (KISS) philosophy espoused by the Ethernet community [confusingly, most carriers use MPLS to carry Ethernet services on their networks].

Note: An interesting Ethernet initiative is PBB-TE (Provider Backbone Bridging – Traffic Engineering), an emerging IEEE standard that incorporates a set of enhancements to Ethernet, known as Provider Backbone Transport (PBT), that allow the use of Ethernet for a carrier-class public transport network. Again, more in a later post.

(c) MPLS traffic engineering: What is traffic engineering? My network book defines it thus: “The ability to guarantee performance in a network for a certain amount of capacity for a certain amount of time. Also known as ‘traffic management,’ it implies the ability to analyze the current traffic load and dynamically make necessary adjustments to accommodate the different types of traffic or changing conditions.”

Delivering appropriate Quality of Service to conform to customers’ Service Level Agreements (SLAs) for both voice and data is key for a carrier. Traffic engineering has been at the heart of the PSTN for decades, but it was a new activity for IP networks – and still is in practice.

In the past, most carriers delivering backbone Internet services used ATM as the bearer for IP traffic, routing it over specific inter-city paths to better manage the uncontrollable nature of IP. MPLS was seen as the principal way forward in enabling carriers to remove ATM completely from their networks while still being able to traffic engineer them. This led to the IETF standardising this capability as MPLS-TE.
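
To give a feel for what traffic engineering means algorithmically, here is a much-simplified Python sketch of constrained path selection: links without enough spare bandwidth are pruned before a shortest path is computed for a new LSP. The topology and figures are invented, and real MPLS-TE (constrained SPF plus RSVP-TE signalling) is considerably more involved.

```python
import heapq

# (neighbour, link cost, free bandwidth in Mbit/s) per node -- invented topology
topology = {
    "A": [("B", 1, 400), ("C", 1, 80)],
    "B": [("D", 1, 400)],
    "C": [("D", 1, 900)],
    "D": [],
}

def constrained_path(src, dst, demand_mbps):
    """Dijkstra over only those links that can still carry `demand_mbps`."""
    heap = [(0, src, [src])]
    visited = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr, link_cost, free_bw in topology[node]:
            if free_bw < demand_mbps:
                continue                              # prune links that are too full
            heapq.heappush(heap, (cost + link_cost, nbr, path + [nbr]))
    return None                                       # no path satisfies the demand

print(constrained_path("A", "D", 100))                # -> ['A', 'B', 'D'] (A-C pruned)
```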

Interestingly, MPLS-TE has not been taken up on the scale originally expected by the carrier community for various reasons. More on this in a future post.

(d) Cost reduction: We have talked about services and engineering so far, but by far the biggest reason that MPLS has been taken up is the promise of reduced network management OPEX costs – the core of many upgrade business plans.

A non-converged legacy network has multiple layers. The lowest is the Wavelength Division Multiplexing (WDM) fibre layer; above this sits the Synchronous Digital Hierarchy (SDH or SONET) layer, above that ATM, and finally IP. Each of these layers runs independently of the others, and each requires a separate software management regime and separate fault management systems. This is extremely costly to operate and can quite easily lead to knock-on effects if a network element breaks.

Convergence to a single network based on IP predicted a wonderful Return on Investment (RoI) by collapsing those multiple layers down to IP using MPLS on a fibre network. Even after ten years, these benefits have still to be proven, and the world’s carriers are looking at BT’s 21C initiative to see whether this proves to be the case. Personally, I think that there is no cheaper network to manage for voice services than a fully written-off traditional TDM-based PSTN network, which can still be a part of a converged network – I hope this does not make me appear to be a Luddite…

Pseudowires

Building on the theme of everything over IP, pseudowires began as a Cisco specification authored by Luca Martini several years ago and are now on the IETF agenda.

Pseudowires enable the transport of ‘legacy’ services such as time division multiplexed (TDM) circuits, frame relay and ATM over an MPLS-enabled core network, by emulating the characteristics of those legacy services.

Pseudowires have been particularly useful for carriers providing DSL services for Internet access, as this rather old technology is based on ATM.
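
Conceptually, a pseudowire just wraps the legacy frame in two labels: one identifying the emulated circuit and one identifying the MPLS tunnel (LSP) that carries it across the core. Here is a minimal Python sketch of that layering; the field names are illustrative rather than the exact encapsulation defined in the Martini drafts.

```python
def encapsulate(legacy_frame: bytes, pw_label: int, tunnel_label: int) -> dict:
    """Wrap a legacy frame (ATM cell, frame relay frame, TDM fragment...) for the MPLS core."""
    return {
        "tunnel_label": tunnel_label,   # outer label: which LSP crosses the core
        "pw_label": pw_label,           # inner label: which emulated circuit this is
        "payload": legacy_frame,        # the untouched legacy frame
    }

# e.g. an ATM cell from a DSL aggregation network carried over the MPLS core
packet = encapsulate(b"\x00" * 53, pw_label=17, tunnel_label=42)
```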

Round up

Having read back over what I have written, I think I have raised more questions than I have answered and have certainly triggered several future posts that will talk about some of the issues raised.

So what about the title of this post? Yes, MPLS is now an old protocol. Back in 2000/2001 very few carriers had actually deployed it, and it is only more recently that MPLS-based IP-VPNs have become common. Technologies really do take decades to penetrate our everyday lives.

Looking at the graph below, services such as ATM and frame relay, which could be described as legacy protocols, are now in terminal decline. However, IP over MPLS and Ethernet are growing rapidly. The point to notice, though, is that MPLS growth could be considered to have matured, while Ethernet is still experiencing high growth.

Traffic growth by protocol (Source: Infonetics in Feb 07 Capacity Magazine)

MPLS has provided solutions to the QoS enigma but, as with any IP-based service, it has proved to be complex and expensive to manage, and the jury is still out on whether its deployment (as with convergence) really saves cost. In this age of ever-accelerating technology and service roll-outs, I think it is fair to say that MPLS itself is now in a state of maturity. Indeed, maybe there are other solutions that will replace it going forward. But, as I’ve said several times in this particular post, that is the subject for a future post as well.

Part 4: MPLS and the limitations of the Internet

Postscript: The new network dogma: Has the wheel turned full circle?


Are entrepreneurs born or made?

February 13, 2007

In the Sunday Times on Sunday 11th February, there was an interesting article entitled Are entrepreneurs born or made?, which talks about a theory put forward by Adrian Atkinson of Human Factors that entrepreneurs are born, not made – no matter how much effort the individual puts in. To quote, “This theory that anyone can become an entrepreneur is absolute nonsense”.

Contentious stuff indeed. This may be applicable for lifestyle start-ups where you are going it alone, but I think it’s a real stretch to apply this to a start-up where real team effort is required – and looked for from VCs.

Yes, of course a focused, driven, visionary individual is always required to drive things forward, but using a simplistic test of whether you are prepared to mortgage your house and work seven days a week to decide whether you will make an entrepreneur is just not realistic – contributing characteristics, yes.

Unless, that is, you start a business before you get married, have children and sign up for a damn big mortgage just to get a roof over your head (or decide not to bother with all these trivialities of life). Or you start later in life when the children have grown up (you can’t say “and left home” these days, can you?).

Mind you, if you score low and believe what you are being told, then you are definitely not of an entrepreneurial bent. A real entrepreneur will shrug this off and get back to the job of making their venture successful no matter what – but this would mean you do have a good entrepreneurial attitude! You will only find out whether you are an entrepreneur or not BY DOING.

Anyway, if you want to find out whether you will be successful as an entrepreneur, you can spend five minutes completing the following questionnaire, or go to the web site and spend some money – www.humanfactors.co.uk/pep

How to see if you have what it takes

For each of the following five groups of statements choose the one that best describes what would be most Important to you when starting your business.

Group 1
A Working with other like-minded individuals
B Making a big effort to get the company structure right
C Willing to work seven days a week
D Realising that technical excellence is the key to success

Group 2

E Getting some qualifications before starting your business
F Only starting the business with all the finance in place
G Keeping your existing job until your business is established
H Seeing work as relaxation

Group 3

I Making sure you have a social life as well
J Be willing to sell your house and car to start your business
K Taking your time to make all the important decisions
L Plan your exit strategy from the beginning

Group 4

M Not selling more than the company can deliver
N Making sure the product is perfect before getting sales
O Be willing to fire people who perform badly
P Developing business plans to make strategic decisions

Group 5

Q Always involving colleagues in decisions
R Only aiming for the highest quality
S Be willing to sacrifice family life to build the business
T Realising that all that matters in business is making money

SCORING
Score 4 points each if you chose C, H, J, O, S
Score 3 points each if you chose A, E, L, P, T
Score 2 points each if you chose B, F, K, M, Q
Score 1 point each if you chose D, G, I, N, R

To see what your score means:

Read the rest of this entry »


What makes Cheapflights tick?

February 12, 2007

Coincidentally, a few days after I posted about the new investment body Howzat Media, I happened to hear David Soskin, CEO of Cheapflights, talking at the recent AlwaysOn media conference in New York via a webcast.

According to the Cheapflights About page information:

Cheapflights is a travel price comparison website. Well, if we were in a bragging mood, we’d tell you we are the country’s leading travel price comparison website (and give you the stats to back it up). But, we’re a nice bunch of people and we don’t want to brag, we just want to help you find the best and cheapest flight we possibly can.

So what does David Soskin believe has made Cheapflights and its advertising policies successful? Here is the essence of his talk:

  • “We have been an innovator in the sector since 1996”
  • “Our market share has been booming in the US and doubled our share in a year”
  • “Our model is quite similar to TRAVELZOO except we focus on flights, whereas TRAVELZOO covers everything.”
  • “Search for the travel industry is very important because of its inherent long tail [lots of individuals who want to book flights I assume cg] – there is more to the travel industry than just Expedia.”
  • “Google is very much the market leader for providing leads to airlines, but there is a growing segment called vertical search. Do people come to vertical search? Yes, because for many people it provides a more satisfying user experience.”
  • “If you want to find a great deal from Boston to Los Angeles and you want to find a great price, Google doesn’t help you. You need to come to a site such as Cheapflights where we have selected the best flight operators and, using our own algorithms, sorted out the very best prices for a whole range of different dates.”
  • “The advertisers on vertical sites are gaining much exposure to would-be customers who are so much closer to a purchasing decision than those who come from a Google search. That’s the motivation for advertisers to appear on vertical sites.”
  • “Why are metrics and analysis important? Because there is a very long tail in travel. We display 200,000 city pairs, 800,000 flight offers daily and we have over 80 travel advertisers on our site.”
  • “We are the innovator in the sector, we invented the model in 1996 and introduced pay-to-click in the UK in 2000. We opened bidding for premium positions in 2002 and in 2003 launched the first multi-booking solution. In 2006 we debuted the Cheapflights partner portal where analytics are so very important. We have developed the first place on the Internet where travel operators could analyse the clicks they were getting on any particular route. They can see in real time where they are getting their clicks and adjust their advertising spend and prices they send us accordingly. So it’s a very powerful tool for the huge US travel business to market on-line.”

David finished with a touch of philosophy:

“Finally, we at Cheapflights follow the philosophy of possibly the richest man in the world, the founder of Ikea, Ingvar Kamprad. He says ‘Only while sleeping one makes no mistakes, the fear of making mistakes is the root of bureaucracy and the enemy of evolution.’ We constantly trial new things, some work some don’t, but we keep trying.”

 


Are you suffering from ‘the knack’ as well?

February 11, 2007

Well, I guess anyone looking at this blog suffers from this sad affliction. When did it start happening for you?


Amazing, from 2″ to 17″ wafer sizes in 35 years!

February 9, 2007

In January my son, Steve, popped into the Intel museum in Santa Clara, California to look at the 4004 display mentioned in a previous post.

He came back with various photos some of which showed how semiconductor wafer sizes have increased over the last 30 years. This is the most interesting one.

1969 2-inch wafer containing the 1101 static random access memory (SRAM) which stored 256 bits of data.

1972 3-inch wafer containing 2102 SRAMs which stored 1024 bits of data.

1976 4-inch wafer containing 82586 LAN co-processors.

1983 6-inch wafer containing 1.2 million transistor 486 microprocessors

1993 8-inch wafer containing 32Mbit flash memory.

2001 12-inch wafer containing Pentium 4 microprocessors moving to 90 nanometer line widths.

Bringing the subject of wafer sizes up to date, here is a photo of an Intel engineer holding a recent 18-inch (450mm) wafer! The photo was taken by a colleague of mine at the Materials Integrity Management Symposium – sponsored by Entegris, Stanford CA June 2006.

The wafer photo on the left is a 5″ wafer from a company that I worked for in the 1980s called European Silicon Structures (ES2). They offered low-cost ASIC (Application Specific Integrated Circuit) prototypes using electron beam lithography rather than the more ubiquitous optical lithography of the time. The technique never really caught on as it was uneconomic; however, I did come across the current use of such machines at Chalmers University of Technology in Göteborg, Sweden, if I remember rightly.

If you want to catch up with all the machinations in the semiconductor world take a look at ChipGeek.


Cingular to AT&T renaming spoof video…

February 8, 2007

After posting about C&W renaming, C&W’s UK rebranding and another post on creating company names, this spoof YouTube video about the convoluted AT&T branding history will definitely make you smile.

Each new management team that arrives on the scene has its own branding ideas as part of the ‘turn around’ strategy, so I guess we haven’t seen the end of this in this age of consolidation in the telecommunications industry!

Enjoy!


Virgin Media near a Cable and Wireless tie-up?

February 8, 2007

The newspapers are full of rumour about C&W and Virgin Media – aren’t they always?

In this case it’s “Virgin Media is close to announcing a strategic tie-up with Cable & Wireless (C&W) to help it provide telecom and television services to homes not served by its existing cable network.”

Virgin Media, let us not forget, is the new name for NTL, which merged with Telewest…

Let us also not forget that Mercury Communications changed its name to Cable & Wireless Communications (CWC) in 1997, to bring it closer from a brand perspective to its parent company C&W. Along with the CWC rebrand, Dick Brown, C&W’s CEO at the time, acquired three cable companies.

In 1999 CWC sold its consumer assets to NTL. It was these cable businesses, along with Mercury’s consumer telephony business, that were sold. The reason given at the time was that CWC wanted to focus on UK business services.

After CWC sold its consumer business, it was merged back into Cable and Wireless and renamed C&W. You can see a logo history of C&W here.

Then came the Bulldog and Energis acquisition and the sale of the customer bit of Bulldog to Pipex in 2006 to concentrate on UK DSL wholesale deals (C&W UK kept the co-location assets).

So somewhere in this there is, perhaps, a bit of irony: C&W UK might sign a wholesale agreement to provide infrastructure to NTL – whoops, Virgin Media – who want to outsource, if that is the right term, their DSL infrastructure. Will this also include the original Mercury telephony network sold to NTL, I wonder?

No maybe, on reflection after a cup of coffee, I’ve got it all wrong and I’m just confused!


The phenomenon of Ipsilon

February 8, 2007

Part 1: The demise of ATM

You don’t read this sort of review about a start-up too often do you?

“A year ago, they were the Beatles. When Ipsilon Networks, Inc. touched down in March 1996, it whipped the industry into a frenzy with its hooky melodies and breezy harmonies on IP switching. Ipsilon’s chart-busters included tunes on establishing cut-through IP routes over ATM, snubbing complex routing dogma from standards groups, and divorcing enterprise nets from the self-centered Cisco Systems, Inc.

The industry was transformed into a screaming, blubbering mess, tearing its locks out pleading for more, more IP switching. But that was yesterday. Today, Ipsilon is but a nostalgic footnote… ” This quote was from Nokia catches a falling Ipsilon by Jim Duffy, Network World, 12/9/97.

Phew! I first came across Ipsilon one wet winter’s morning in 1996 I think. I can’t quite remember which newspaper it was in, but there was this small 3″ column on the right-hand side of the page that talked about a revolution in networking – such a headline would always catch my eye!

I immediately added them to my list of companies that I wanted to visit the next time I was in Silicon Valley. I did, and what I heard changed many of my views. So just what was so seminal about Ipsilon that caused all the frenzy?

One of the key strengths of the Internet Protocol (IP) was that it was resilient. If a particular node or router had problems, packets would find an alternative path to their destination if there was one to be found. In fact, this was the core capability of the routing algorithms that were used to move IP packets around a network. OSPF (Open Shortest Path First) was the most common routing protocol developed for IP networks and it was based on finding and using the shortest path first.

However, this algorithmic approach, although it had many benefits, was flawed in a particular area that remains with us even today. The flaw was that it was possible for individual packets in a particular packet stream to take different routes between the source and the destination depending on congestion. For downloads based on TCP/IP this did not particularly matter, as the packets could be reconstituted into the correct sequence no matter in which order they were received.

BUT, this causes a major problem for real-time services such as voice over IP (VoIP) or interactive video conferencing, where latency beyond, say, 200 ms causes a noticeable hesitation which significantly affects the quality of conversations. Having to wait around for all the packets to arrive and reorder them could add significant latency. We have all heard this effect when using services such as Skype.
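
A toy calculation (with invented figures) shows why: if one packet in a 20 ms voice stream takes a congested path, every later packet has to sit in a buffer until the straggler arrives before the audio can be played out in order.

```python
send_interval_ms = 20                       # one voice packet every 20 ms
one_way_delay_ms = [40, 40, 160, 40, 40]    # packet 3 took a congested route

# arrival time of each packet at the receiver
arrival = [i * send_interval_ms + d for i, d in enumerate(one_way_delay_ms)]
# a packet can only be played once every earlier packet has arrived
playable = [max(arrival[: i + 1]) for i in range(len(arrival))]
extra_wait = [p - a for p, a in zip(playable, arrival)]

print(extra_wait)   # -> [0, 0, 0, 100, 80]: packets 4 and 5 sit in the buffer
```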

Another issue behind Ipsilon was that IP routers were very expensive at the time. As ATM was seen as the next dominant technology, every man and his dog was producing ATM switches. In reality, ATM was not seeing the enterprise uptake expected by the vendor industry, price wars broke out and ATM switches became available at reasonable cost.

The light that went on in the Ipsilon founders’ brains was that if they could embed enhanced routing software on generic ATM switch hardware, they would have a competitively costed router that would use ATM functionality to provide what they called cut-through routing, or IP switching, as shown below in one of Ipsilon’s presentations.

This involved taking off-the-shelf ATM switches, throwing all the ATM software away and replacing it with enhanced routing software. What this meant in practice was that the only bit of ATM that remained was the physical switch fabric and the ability to set up a virtual connection from one end of the network to the other, so that data could be transmitted in a single hop.
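
In pseudocode terms, the idea looks something like the following Python sketch: forward packets hop by hop as a normal router would, but once a flow has been seen often enough, bind it to an ATM virtual connection so that subsequent packets are switched in hardware. The threshold and identifiers are invented, and Ipsilon’s real flow-classification and switch-management protocols were rather more sophisticated.

```python
flow_counts = {}        # (src, dst, protocol) -> packets seen so far
cut_through = {}        # flows that have been mapped onto an ATM virtual connection
FLOW_THRESHOLD = 10     # classify as long-lived after this many packets (invented)
next_vc = 100

def handle(packet):
    global next_vc
    flow = (packet["src"], packet["dst"], packet["proto"])
    if flow in cut_through:
        return ("switch", cut_through[flow])        # one hop across the ATM fabric
    flow_counts[flow] = flow_counts.get(flow, 0) + 1
    if flow_counts[flow] >= FLOW_THRESHOLD:
        cut_through[flow] = next_vc                 # set up a VC for this flow
        next_vc += 1
    return ("route", None)                          # ordinary hop-by-hop forwarding
```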

 

Ipsilon’s vision for ‘cut through’ routing.

To quote Mary Petrosky from 1998:

“Although the ATM Forum had been working on a short-cut routing solution called Multiprotocol Over ATM for some time, Ipsilon Networks catalyzed the industry in the spring of 1996 with the announcement of its scheme, dubbed IP switching. By exploiting ATM switching hardware with a new set of IP-oriented protocols, Ipsilon promised to deliver millions of packets-per-second throughput, compared to the forwarding rate of hundreds of thousands of packets per second supported by the current generation of routers.”

This was really magic stuff and went right against the core Internet philosophy of the time, which focused on what was known as connectionless routing, i.e. a packet would be dumped into the network and would follow whatever path it could to reach its destination. Engineers who supported this approach were colloquially called ‘netheads‘.

Exponents of cut-through routing, on the other hand, followed the routing principles of the Public Switched Telephone Network (PSTN). Here, a path between network (domain) ingress and egress is set up prior to sending the packets into the network – so-called deterministic routing. Of course, exponents of this approach were inevitably called ‘bellheads’. It sometimes felt that there was outright war between these two factions! In fact, both were right in their own way.

Deterministic routing is crucial to obtaining sufficient Quality of Service for real-time services, and the insight provided by Ipsilon turned the industry on its head. One of the early pre-MPLS IETF standards, Transmission of Flow Labelled IPv4 on ATM networks, drafted by Ipsilon, makes interesting reading.

Although I do not know (more likely can’t remember!) all the industry inside stories, it was clear that this concept was not foreseen by Cisco or any of the other big equipment vendors of the time.

You can read the full story of Ipsilon in the article linked above, but sadly Ipsilon failed as a company due to insufficient sales and was sold to Nokia in 1997. To quote Nokia’s press release:

“Nokia will acquire Ipsilon Networks, Inc., on December 9th, 1997, a data communications company based in Sunnyvale, California, US, for approximately USD 120 million, subject to regulatory approval expected by end of the year…Ipsilon Networks is a leading innovator in the development of open Internet Protocol (IP) routing platforms.”

To quote David Passmore:

“Ipsilon also discovered that shortly after turning everyone on to IP switching, Cisco froze the market with Tag Switching and Multi-protocol Label Switching.”

“It’s obvious that Ipsilon basically failed in their mission to establish IP switching. Quite frankly, the whole cut-through routing technique embodied by IP switching, 3Com FastIP and Cabletron’s SecureFast doesn’t make sense anymore in this era of gigabit, wire-speed routers.”

Once the cut-through routing idea hit the market, all the major IP equipment vendors jumped on the bandwagon. In particular, Cisco announced proprietary Tag Switching (which was rumoured to be an Ipsilon killer and, if so, succeeded in that ambition), which eventually morphed into the IETF’s Multiprotocol Label Switching (MPLS) standards still in use today.

Ipsilon was a seminal start-up in networking that transformed the industry and had a unique approach to market. It had so much going for it, but maybe being #1 with an idea is not always a good position to be in – especially when your competitor is a behemoth such as Cisco! My view is that they were just well ahead of their time, and trying to re-educate the world is just too much of a challenge when all their backers wanted was sales.

To finish, Dave Passmore made an extremely pertinent point back in 1997 that could be just as applicable today: “[IP switching] doesn’t make sense anymore in this era of gigabit, wire-speed routers.” More about this later.

Next: The rise and maturity of MPLS


In Loco Parentis anti-grooming software

February 7, 2007

It seems kind of strange that after writing about Crisp Thinking, who have taken a network-based approach to detecting possible predatory paedophile discussions in chat rooms, I noticed xGATE who have come up with a hardware firewall solution aimed at solving the same problem. Now blow me down, along comes In Loco Parentis who produce anti-grooming PC software.

Interestingly, the software has been developed on behalf of a charity together with an ethical backer. Phoenix Survivors was established by Shy Keenan and Sara Payne. Phoenix Survivors is a completely independent web based voluntary support group set up for, and by, the victims of child sexual abuse and the families of children murdered by child molesters.

To quote the In Loco Parentis web site:

In Loco Parentis is an interactive system which allows you to:

  • See which websites your child has visited, allowing you to check the website’s suitability and block if necessary
  • Build a personalised library of words that you do not want your child to use or see online (watch words).
  • Monitor their online behaviour and set actions to be taken in the event that good conduct rules are broken
  • Be alerted by email to violations of the rules you have set up
  • Encourage respect between users with a bad language alert and lock out feature

In Loco Parentis provides a list of acronyms that could be used as part of a grooming discussion:

ASL Age, Sex, Location
BF/GF Boyfriend, Girlfriend
BRB Be Right Back
CD9 Code 9— means parents are around
GNOC Get Naked On Cam (webcam)
G2G Got To Go
IDK I Don’t Know
LMIRL Let’s Meet In Real Life
LOL Laugh Out Loud
MorF Male or Female
MOS Mum Over Shoulder
NIFOC Naked In Front Of Computer
Noob When someone cannot use a computer very well
NMU Not Much, You?
P911 Parents Emergency
PAW Parents Are Watching
PIR Parents In Room
POS Parents Over Shoulder
PRON Porn
PRW Parents are Watching
S2R Send to Receive (this relates to pictures)
TDTM Talk Dirty To Me
W/E Whatever
Warez pirated software

The software runs on the PC itself, unlike Crisp and xGATE. It contains a ‘watchword dictionary’ – like xGATE – that allows parents to be alerted to inappropriate language or behaviour. These words are divided into the categories of swearing, bullying, sex and grooming for easy access and administration. Parents are provided with search and blocking options.

The watchword dictionary
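
For flavour, here is a minimal Python sketch of the kind of categorised watchword lookup such software performs. The words, categories and alerting hook are placeholders, not In Loco Parentis’s actual dictionary or code.

```python
import re

# Placeholder watchword dictionary, grouped by category
WATCHWORDS = {
    "grooming": {"asl", "lmirl", "gnoc"},
    "bullying": {"loser"},
    "swearing": {"damn"},
}

def scan(message: str):
    """Return the watchword categories triggered by a chat message."""
    words = set(re.findall(r"[a-z0-9]+", message.lower()))
    return {category for category, terms in WATCHWORDS.items() if words & terms}

hits = scan("asl? lmirl tonight")
if hits:
    print("alert parent:", hits)    # e.g. trigger an email or SMS notification
```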

I wonder whether just being alerted by email to violations of the rules is sufficient. As we know, email is not a real-time service and is not much use in multi-PC homes where parents may be working, otherwise engaged or sleeping while their children are in chat rooms. xGATE’s SMS alerting seems far more sophisticated and usable.

An article in the Sun newspaper provides an overview.

Anyway, we now have a network, firewall and a software solution for detecting chat room grooming and it will be interesting to see how this all plays out in the market.


Name that start-up – simple?

February 6, 2007

Back in December 2004, there was an article in the US magazine Business 2.0 entitled The New Science of Naming. This excellent article talked about the history of the ways companies chose their names and the current fashions that determine their approach.

The link above points to the text, but the most interesting part was the four graphics reproduced here on the left. (I hope I do not get into trouble reproducing them here but, in mitigation, Business 2.0 really is an excellent magazine!)

In the early days of mass production, eponyms – companies or products named after the people who created them – used the comforting familiarity of personal names to evoke traditional ideals of quality and craftsmanship.
Industrial firms with long, descriptive names gradually embraced the shorthand acronyms used by customers and employees alike. Like New York Stock Exchange ticker symbols, these two- and three-letter names made companies seem larger than life.
Company names became abstract as the computer age got under way. Names often sought to imply high-tech precision, encouraging customers and investors to embrace the future. Seldom-used letters like ‘X’ were believed to have extra potency.
Hooked on the idea of synergy, companies adopted meaningless umbrella names that could accommodate expansion into multiple lines of business. During the dotcom boom, the rush for online addresses pushed this logic to absurd extremes.

Following my earlier post, The Art of the Start by Guy Kawasaki that talks about company names, it’s interesting to ponder about what the appropriate strategy for naming companies should be in 2007. If you look through TechCrunch’s list of Web 2.0 company names, there are some really unmemorable ones! To quote the article:

Today’s style is to build corporate identity around words that have real meaning. Aucent’s transition to Rivet is a typical effort to eliminate the obfuscation of the Internet era; the preference now is to name things in the spirit of what they actually are. The new names are all about purity, clarity, and organicism. Rivet helps companies tag financial data, for example, so the name functions as an effective metaphor for what the firm actually does. Similarly, Silk (soy milk), Method (home products), Blackboard (school software), and Smartwater (beverages) are new names that are simple and make intuitive sense.

“There’s a trend toward meaning in words. When it comes down to evocative words vs straightforward names, straightforward will win in testing every time,” says Jeff Lapatine, group director of naming and brand architecture at New York branding firm Siegel & Gale. That hardly comes as a surprise, of course. But why has it taken so long for this idea to catch on?

Here are a few things that come to my mind:

  • The name should be a domain name. This is very challenging as it seems to me that you can string any three words together and the name is already taken. Also, choosing a ‘simple’ name will inevitably cause problems when trying to find an untaken domain name.
  • The name should be a ‘xxx.com’ NOT a ‘xxx.net’ where ‘xxx.com’ is another company! This happens so often when a name is chosen first and only then does someone think to use WHOIS (see the sketch after this list).
  • It should be associated with what the company actually does, i.e. the company does what it ‘says on the can’.
  • The name xxx should be spellable. If you can’t spell it intuitively then no one will be able to enter the URL into a browser.
  • The name should be pronounceable. If the name does not have a unique pronunciation, then everyone will say the name differently, e.g. do you say Skype with a silent ‘e’ or do you say Skypeeee with the ‘e’ pronounced? It’s very confusing.
  • As Guy Kawasaki commented, if it can be used as a verb, or in a sentence that would be good.
  • If the name can also contain a call to action, such as “You can ContactMeAnywhere at any time!” (they probably don’t say this – or do they?), so much the better.
  • Don’t use hyphenation to get round taken domain names.
  • Don’t choose a temporary name that you plan to change later – that’s just a cop-out and will put in jeopardy all the early launch publicity you might receive.
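
On the domain name point, even a quick DNS lookup will weed out the worst candidates before you get attached to a name. The Python sketch below uses only the standard library as a first filter; a proper check would still need WHOIS, since a name can be registered without resolving.

```python
import socket

def probably_taken(name: str) -> bool:
    """Rough first filter: if the domain already resolves, someone owns it."""
    for tld in (".com", ".net"):
        try:
            socket.gethostbyname(name + tld)
            return True           # it resolves, so it is certainly registered
        except socket.gaierror:
            pass                  # no DNS record; it may still be registered
    return False

print(probably_taken("cheapflights"))   # -> True
```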

I reckon that choosing a company name is one of the most difficult tasks facing a new company, and the effort and arguments that can ensue during the process have to be seen to be believed.

Oh, and lastly, please don’t spend thousands of dollars buying a name from a domain hoarder – start as you mean to go on by being frugal.