The new network dogma: Has the wheel turned full circle?

August 26, 2008

“An authoritative principle, belief, or statement of ideas or opinion, especially one considered to be absolutely true”

When innovators proposed Internet Protocol (IP) as the universal protocol for carriers in the mid 90s, they met with furious resistance from the traditional telecommunications community. This post asks whether the wheel has now turned full circle with new innovative approaches often receiving the same reception.

Like many others, I have found the telecommunications industry so very interesting and stimulating over the last decade. There have been so many profound changes that it is hard to identify with the industry that existed prior to the new religion of IP that took hold in the late 90s. In those balmy days the industry was commercially and technically controlled by the robust world standards of the Public Switched Telephone Network (PSTN).

In some ways it was a gentleman’s industry where incumbent monopoly carriers ruled their own lands and had detailed inter-working agreements with other telcos to share the end-to-end revenue generated by each and every telephone call. To enable these agreements to work, the International Telecommunication Union (ITU) in Geneva spent decades defining the technical and commercial standards that greased the wheels. Life was relatively simple as there was only one standards body and one set of rules to abide by. The ITU is far from dead of course: the organisation went on to develop the highly successful GSM standard for mobile telephony and is still actively defining standards to this day.

In those pre-IP days, the industry was at its zenith, with high revenues, similarly high profits and every company having its place in the universe. Technology had not significantly moved on for decades (though this does an injustice to the development of ATM and SDH/SONET) and there was quite a degree of complacency driven by a monopolistic mentality. Moreover, it was very much a closed industry in that individuals chose to spend their entire careers in telecommunications from a young age, with few outsiders migrating into it. Certainly few individuals with an information technology background joined telcos as there was a significant mismatch in technology, skills and needs. It was not until the mid 90s, when the industry started to use computers by adopting Advanced Intelligent Networks (AIN) and Operations Support Systems (OSS), that computer-literate IT engineers and programmers saw new job opportunities and jumped aboard.

In many ways the industry was quite insular and had its own strong world view of where it was going. As someone once said, “the industry drank its own bathwater” and often chose to blinker out opposing views and changing reality. With hindsight, it is relatively easy to see how this came about. How could an industry that was so insular embrace disruptive technology innovation with open arms? The management dogma was all about “We understand our business, our standards and our relationships. We are in complete control and things won’t change.”

Strong dogma dominated and was never more on show than in the debate about the adoption of Asynchronous Transfer Mode (ATM) standards that were needed to upgrade the industry’s switching networks. If ATM had been developed a decade earlier there would never have been an issue, but unfortunately the timing could not have been worse as it coincided with the major uptake of IP in enterprises. When I first wrote about ATM back in 1993, IP was pretty much an unknown protocol in Europe (The demise of ATM). ATM and the telco industry lost that battle and IP has never looked back.

In reality it was not so much a battle as all-out war. It was the telecommunications industry eyeball-to-eyeball with the IT industry. The old “we know best” dogma did not triumph and the abrupt change in industry direction led to severe trauma in all sections of the industry. Many old-style telecommunications equipment vendors, who had focused on ATM with gusto, failed to adapt, with many either writing off billions of dollars or being sold at knock-down valuations. Of course, many companies made a killing. Inside telcos, commercial and engineering management who had spent decades at the top of their profession found themselves floundering, and over a fifteen-year period a significant proportion of that generation of management ended up leaving the industry.

The IP bandwagon had started rolling and its unstoppable momentum has relentlessly driven the industry through to the current time. Interestingly, as I have covered in previous posts such as MPLS and the limitations of the Internet, not all the pre-IP technologies were dumped. This was particularly so with fundamental transmission-related network technologies such as SDH / SONET (SDH, the great survivor). These technologies were 100% defined within the telecommunications world and provided capabilities that were wholly lacking in IP. IP may have been perfect for enterprises, but many capabilities were missing that were required if it was to be used as the bedrock protocol in the telecommunications industry. Such things as:

  • Unlike the telecommunications protocols, IP networks were proudly non-deterministic. This meant that packets would always find their way to the required destination even if the desired path failed. In the IP world this was seen as a positive feature. Undoubtedly it was, but it also meant that it was not possible to predict the time it would take for a packet to transit a network. Even worse, packets within a single stream could arrive at the destination via different paths, and therefore out of order. This was acceptable for e-mail traffic but a killer for real-time services like voice (see the short sketch after this list).
  • Telecommunications networks required high reliability and resilience so that, in the event of any failure, automatic switchover to an alternative route would occur within a few milliseconds and even live telephone calls were not interrupted. In this situation IP would lackadaisically find another path to take and packets would eventually find their way to their destination (well, maybe that is a bit of an overstatement, but it does provide a good image of how IP worked!).
  • Real-time services require a very high Quality of Service (QoS), in that latency, jitter and packet loss need to be kept to an absolute minimum. This was, and is, a mandatory requirement for delivery of demanding voice services. IP in those days did not have the control signalling mechanisms to ensure this.
  • If PSTN voice networks had one dominant characteristic, it was reliability. Telephone networks just could not go down. They were well engineered and extensively monitored, so if any fault occurred comprehensive network management systems flagged it very quickly to enable operational staff to correct it or provide a workaround. IP networks just didn’t have this level of operational management capability.
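To make the out-of-order point concrete, here is a minimal sketch (my own illustration, not any vendor’s code) of a receiver’s playout buffer: a single late packet stalls every in-order packet queued behind it, and that stall is exactly the latency a real-time voice stream cannot tolerate.

```python
def playout_delays(arrivals):
    """arrivals: list of (sequence_number, arrival_time_ms) in arrival order.
    Returns the extra delay each packet suffers waiting for in-order playout."""
    playable_at = {}   # seq -> time the packet could actually be handed to the codec
    buffered = {}      # packets that arrived early and are waiting for a predecessor
    next_seq, clock = 0, 0.0
    for seq, t in arrivals:
        buffered[seq] = t
        clock = max(clock, t)
        while next_seq in buffered:          # release everything that is now in order
            playable_at[next_seq] = clock
            del buffered[next_seq]
            next_seq += 1
    return {seq: playable_at[seq] - t for seq, t in arrivals if seq in playable_at}

# Packets sent every 20 ms; packet 2 takes a longer path and arrives last, so
# packets 3 and 4 (which arrived on time) are held up by 50 ms and 30 ms.
arrivals = [(0, 50), (1, 70), (3, 110), (4, 130), (2, 160)]
print(playout_delays(arrivals))
```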

These gaps in capabilities in the new IP-for-everything vision needed to be corrected pretty quickly, so a plethora of standards development was initiated through the IETF that remains in full flow to this day. I can still remember my amazement in the mid 1990s when I came across a company that had come up with the truly innovative idea of combining the deterministic ability of ATM with an IP router, bringing together the best of the old with the new, still under-powered, IP protocol (The phenomenon of Ipsilon). This was followed by Cisco’s and the IETF’s development of MPLS and all its progeny protocols (The rise and maturity of MPLS and GMPLS and common control).

Let’s be clear, without these enhancements to basic IP, all the benefits the telecommunications world gained from focusing on IP would not have been realised. The industry should be breathing a huge sigh of relief, as many of the required enhancements were not developed until after the wholesale industry adoption of IP. If IP itself had not been sufficiently adaptable, it could be conjectured that there would have been one of the biggest industry dead ends imaginable and all the ‘Bellheads’ would have been yelling “I told you so!”.

Is this the end of the story?

So, that’s it then, it’s all done. Every carrier of every description, incumbent, alternate, global, regional, mobile, and virtual has adopted IP / MPLS and everything is hunky-dory. We have the perfect set of network standards and everything works fine. The industry has a clear strategy to transport all services over IP and the Next Generation Network (NGN) architecture will last for several decades.

This may very well turn out to be the case and certainly IP / MPLS will be the mainstream technology set for a long time to come; I still believe that this was one of the best decisions the industry took in recent times. However, I cannot help asking myself whether we have gone back to many of the same closed industry attitudes that prevailed prior to the all-pervasive adoption of IP.

It seems to me that it is now not the ‘done thing’ to propose alternative network approaches or enhancements that do not exactly coincide with the now-established IP way of doing things, for fear of being ‘flamed’. For me, the key principle that should drive network architecture is simplicity, and nobody could use the term ‘simple’ when describing today’s IP carrier networks. Simplicity means less opportunity for service failure and simplicity means lower-cost operating regimes. In these days of ruthless management cost-cutting, any innovation that promises to simplify a network and thus reduce cost must have merit and should justify extensive evaluation – even if your favourite vendor disagrees. To put it simply, simplicity cannot come from deploying more and more complex protocols that micro-manage a network’s traffic.

Interestingly, in spite of MPLS’s complete domination of public network cores, there is still one major area where the use of MPLS is being actively questioned – edge and/or metro networks. There is currently quite a vibrant discussion taking place concerning the over-complexity of MPLS for use in metro networks and the possible benefits of using IP over Ethernet (Ethernet goes carrier grade with PBT / PBB-TE?). More on this later.

We should also not forget that telcos have never dropped other aspects of the pre-IP world. For example, the vast majority of telcos who own physical infrastructure still use that leading denizen of the pre-IP world, Synchronous Digital Hierarchy (SDH or SONET) (SDH, the great survivor). This friendly dinosaur of a technology still holds sway at the layer-1 network level even though most signalling and connectivity technologies that sit upon it have been brushed aside by the IP family of standards. SDH’s partner in crime, ATM, was absorbed by IP through the creation of standards that replicated its capabilities in MPLS (deterministic routing) and MPLS-TE (fast rerouting). The absorption of SDH into IP was not such a great success as many of the capabilities of SDH could not effectively be replaced by layer-3 capabilities (though not for the want of trying!).

SDH is based on time division multiplexing (TDM), the pre-IP method of sharing a defined amount of bandwidth among a number of services running over an individual wavelength on a fibre optic cable. The real benefit of this multiplexing methodology is that it has proved to be ultra-reliable and offers the very highest level of Quality of Service available. SDH also has the in-built ability par excellence to provide restoration of an inter-city optical cable in the case of major failure. One of SDH’s limitations, however, is that it only operates at a very coarse granularity of bandwidth, so smaller streams of traffic more appropriate to the needs of individuals and enterprises cannot be managed through SDH alone. This capability was provided by ATM and is now provided by MPLS.

Would a moment of reflection be beneficial?

The heresy that keeps popping up in my head when I think about IP and all of its progeny protocols is that the telecommunications industry has spent fifteen years developing a highly complex and inter-dependent set of technical standards that were only needed to replace what was a ‘simple’ standard that did its job effectively at a lower layer in the network. Indeed, pre-MPLS, many of the global ISPs used ATM to provide deterministic management of their global IP networks.

Has the industry now created a highly over-engineered and over-complex reference architecture? Has a whole new generation of staff been so marinated for a decade in deep IP knowledge, training and experience that it is hard for an individual to question technical strategy? Has the wheel turned full circle?

In my post Traffic Engineering, capacity planning and MPLS-TE, I wrote about some of the challenges facing the industry and the carriers’ need to undertake fine-grain traffic engineering to ensure that individual service streams are provided with appropriate QoS. As consumers start to use the Internet more and more for real-time isochronous services such as VoIP and video streaming, there is a major architectural concern about how this should be implemented. Do carriers really want to continue to deploy an ever-increasing number of protocols that add to the complexity of live networks and hence increase risk?

It is surprising just how many carriers use only very light traffic engineering and simply rely on over-provisioning of bandwidth at a wavelength level. This may be considered expensive (but is it, if they own the infrastructure?) and architects may worry about how long they will be able to continue to use this straightforward approach, but there does seem to be a real reluctance to introduce fine-grained traffic management. I have been told several times that this is because they do not trust some of the new protocols and it would be too risky to implement them. It is common industry knowledge that a router’s operating system contains many features that are never enabled, and this is as true today as it was in the 90s.
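As a rough illustration of why over-provisioning feels like a no-brainer to an infrastructure owner, here is a back-of-the-envelope sketch with entirely made-up numbers (it is not any carrier’s planning model): keep peak utilisation below a conservative target and simply light another wavelength when growth eats the headroom.

```python
# Back-of-the-envelope over-provisioning sums with made-up numbers -- not any
# carrier's planning model. Keep peak utilisation low enough that bursts and
# single failures never cause congestion, and light another wavelength when
# traffic growth uses up the headroom.
import math

def wavelengths_needed(peak_gbps, target_util=0.5, wavelength_gbps=10):
    """How many wavelengths keep peak load below the utilisation target."""
    return math.ceil(peak_gbps / (wavelength_gbps * target_util))

def months_until_upgrade(peak_gbps, monthly_growth=0.05, **kw):
    """How long before traffic growth forces the next wavelength to be lit."""
    current = wavelengths_needed(peak_gbps, **kw)
    months = 0
    while wavelengths_needed(peak_gbps, **kw) == current:
        peak_gbps *= (1 + monthly_growth)
        months += 1
    return months

peak = 18                          # assumed measured peak on a core link, Gbit/s
print(wavelengths_needed(peak))    # 4 x 10G wavelengths at a 50% utilisation target
print(months_until_upgrade(peak))  # months of 5%/month growth before a fifth is lit
```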

It is clear that management of fine-grain traffic QoS is one of the top issues to be faced in coming years. However, I believe that many carriers have not even adopted the simplest of the traffic engineering standards, MPLS-TE, which starts to address the issue. Is this because many see that adopting these standards could create a significant risk to their business, or is it simply fear, uncertainty and doubt (FUD)?

Are these some of the questions carriers should be asking themselves?

Have management goals moved on since the creation of the early MPLS standards?

When first created, MPLS was clearly focused on providing predictable, deterministic forwarding at layer-3 so that the use of ATM switching could be dropped to reduce costs. This was clearly a very successful strategy, as MPLS now dominates the core of public networks. This idea was very much in line with David Isenberg’s ideas articulated in The Rise of the Stupid Network in 1997, which we were all so familiar with at the time. However, ambitions have moved on, as they do, and the IP vision was considerably expanded. This new ambition was to create a universal network infrastructure that could provide any service, using any protocol, that any customer was likely to need or buy. This was called an NGN.

However, is that still a good ambition to have? The focus these days is on aggressive cost reduction and it makes sense to ask whether an NGN approach could ever actually reduce costs compared to what it would replace. For example, there are many carriers today who wish to focus exclusively on delivering layer-2 services. For these carriers, does it make sense to deliver those services across a layer-3-based network? Maybe not.

Are networks so ‘on the edge’ that they have to be managed every second of the day?

PSTN networks that pre-date IP were fundamentally designed to be reliable and resilient and pretty much ran without intervention once up and running. They could be trusted and were predictable in performance unless a major outside event occurred such as a spade cutting a cable.

IP networks, whether they be enterprise or carrier, have always had a well-earned image of instability and of going awry if left alone for a few hours. This has much to do with the nature of IP and the challenge of managing unpredicted traffic bursts. Even today, there are numerous times when a global IP network goes down due to an unpredicted event creating knock-on consequences. A workable analogy would be that operating an IP network is similar to a parent having to control an errant child suffering from Attention Deficit Disorder.

Much of this has probably been brought about by the unpredictable nature of routing protocols selecting forwarding paths. These protocols have been enhanced over the years with so many bells and whistles that a carrier’s perception of the best choice of data path across the network will probably not be the same as the one selected by the router itself.

Do operational / planning architecture engineers often just want to “leave things as they are” because it’s working? Better the devil you know?

When a large IP network is running, there is a strong tendency to want to leave things well alone. Is this because there are so many inter-dependent functions in operation at any one time that it’s beyond an individual to understand it? Is it because when things go wrong it takes such an effort to restore service, and it’s often impossible to isolate the root cause if it is not down to simple hardware failure?

Is risk minimisation actually the biggest deciding factor when deciding what technologies to adopt?

Most operational engineers running a live network want to keep things as simple as possible. They have to, because their job and sleep are on the line every day. Achieving this often means resisting the use of untried protocols (such as MPLS-TE) and replacing fine-grained traffic engineering with the much simpler strategy of over-provisioning the network (telcos see it as a no-brainer because they already own the fibre in the ground and it is relatively easy to light an additional dark wavelength).

At the end of the day, minimising commercial risk is right at the top of everyone’s agenda, though it usually sits below operational cost reduction.

Compared to the old TDM networks they replace, are IP-based public networks getting too complex to manage when considering the ever-increasing need for fine-grain service management at the edge of the network?

The spider’s web of protocols that need to perform flawlessly in unison to provide a good user experience is undoubtedly getting more and more complex as time goes by. There is little effort to simplify things and there is a view that it is all becoming too over-engineered. Even if a new standard has been ratified and is recommended for use, this does not mean it will be implemented in live networks on a wide-scale basis. The protocol that heads the list of under-exploited protocols is IPv6 (IPv6 to the rescue – eh?).

There is significant on-going standards development activity in the space of path provisioning automation (Path Computation Element (PCE): IETF’s hidden jewel) and of true multilayer network management. This would include seamless control of layer-3 (IP), layer-2.5 (MPLS) and layer-1 (SDH) networks (GMPLS and common control). The big question is (at the risk of being called a Luddite): would a carrier in the near future risk the deployment of such complexity, which could bring down all layers of a network at once? Would the benefits outweigh the risk?

Are IP-based public networks more costly to run than legacy data networks such as Frame Relay?

This is a question I would really like to get an objective answer to as my current views are mostly based on empirical and anecdotal data. If anyone has access to definitive research, please contact me! I suspect, and I am comfortable with the opinion until proved wrong, that this is the case and could be due to the following factors:

  • There need to be more operations and support staff permanently on duty than with the old TDM voice systems, leading to higher operational costs.
  • Operational staff require a higher level of technical skill and training caused by the complex nature of IP. CCIEs are expensive!
  • Equipment is expensive as the market is dominated by only a few suppliers, and there are often proprietary aspects of new protocols that will only run on a particular vendor’s equipment, thus creating effective supplier lock-in. The router clone market is alive and healthy!

It should be remembered that the most important reason given to justify the convergence on IP was the cost savings resulting from collapsing layers. This has not really taken place, except for the absorption of ATM into MPLS. Today, each layer is still planned, managed and monitored by separate systems. The principal goal of a Next Generation Network (NGN) architecture is still to achieve this magic result of reduced costs. Most carriers are still sitting on the fence waiting for evidence of it.

Is there a degradation in QoS using IP networks?

This has always been a thorny question to answer and a ‘Google’ for the answer does not seem to work. Of course, any answer lies in the eye of the beholder as there is no clear definition of what the term QoS encompasses. In general, the term can be used at two different levels in relation to a network’s performance: micro-QoS and macro-QoS.

Micro-QoS is concerned with individual packet issues such as packet ordering, packet loss, latency and jitter. An excessive amount of any of these will severely degrade a real-time service such as VoIP or video streaming. Macro-QoS is more concerned with network-wide issues such as network reliability and resilience and other areas that could affect the overall performance and operational efficiency of a network.
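To make the micro-QoS terms concrete, the sketch below computes loss, reordering, average delay and a smoothed interarrival jitter (in the style of RFC 3550) from per-packet timestamps. It is illustrative only and not tied to any carrier’s measurement tooling.

```python
# Illustrative only: tiny calculations of the micro-QoS measures mentioned
# above (loss, reordering, delay, jitter) from per-packet timestamps.
def micro_qos(sent, received):
    """sent: {seq: send_time_ms}; received: list of (seq, recv_time_ms) in arrival order."""
    got = [seq for seq, _ in received]
    loss = 1 - len(got) / len(sent)
    reordered = sum(1 for a, b in zip(got, got[1:]) if b < a)
    delays = [t - sent[seq] for seq, t in received]
    jitter = 0.0
    for (s1, r1), (s2, r2) in zip(received, received[1:]):
        d = abs((r2 - r1) - (sent[s2] - sent[s1]))
        jitter += (d - jitter) / 16          # RFC 3550-style smoothing
    return {"loss": loss, "reordered": reordered,
            "avg_delay_ms": sum(delays) / len(delays), "jitter_ms": jitter}

sent = {i: i * 20 for i in range(5)}                          # one packet every 20 ms
received = [(0, 45), (1, 66), (3, 104), (2, 118), (4, 125)]   # packet 2 late, none lost
print(micro_qos(sent, received))
```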

My perspective is that on a correctly managed IP / MPLS network (with all the hierarchy and management that requires), micro-QoS degradation is minimal and acceptable, and certainly no worse than IP over SDH. Indeed, many carriers deliver traditional private-wire services such as E1 or T1 connectivity over an MPLS network using pseudowire emulation, alongside layer-2 offerings such as Virtual Private LAN Service (VPLS). However, this does significantly raise the bar in respect of the level of IP network design and network management quality required.

The important issue is the possible degradation at the macro-QoS level, where I am comfortable with the view that an IP / MPLS network will always carry a statistically higher risk of faults or problems, due to its complexity, than a simpler IP over SDH system. There is a certain irony in that the macro-QoS performance of a network could be further degraded when additional protocols are deployed to improve micro-QoS performance.

Is there still opportunity for simplification?

In an MPLS dominated world, there is still significant opportunity for simplification and cost reduction.

Carrier Ethernet deployment

I have written several posts (Ethernet goes carrier grade with PBT / PBB-TE?) about carrier Ethernet standards and the benefits their adoption might bring to public networks – in particular, the promise of simplification. To a great extent this interesting technology is a prime example of where a new (well, newish) approach that actually does make quite a lot of sense comes up against the new MPLS-for-everything-and-everywhere dogma. It is not just a question of convincing service providers of the benefit but also of overcoming the almost overwhelming pressure brought to bear on carrier management by MPLS vendors, who have clear vested interests in what technologies their customers choose to use. This often one-sided debate definitely harks back to the early 90s no-way-IP culture. Religion is back with a vengeance.

Metro networks

Let me quote Light Reading from September 2007: “What once looked like a walkover in the metro network sector has turned into a pitched battle – to the surprise, but not the delight, of those who saw Multiprotocol Label Switching (MPLS) as the clear and obvious choice for metro transport.” MPLS has encountered several road bumps on its way to domination, and it is always legitimate to question whether any particular technology choice is appropriate.

To quote the column further: “The carrier Ethernet camp contends that MPLS is too complex, too expensive, and too clunky for the metro environment.” Whether MPLS, or a ‘thin MPLS’ such as T-MPLS, will hold off the innovative PBB-TE / PBT intruder remains to be seen. At the end of the day, the technology that provides simplicity and lower operational costs will win.

Think the unthinkable

As discussed above, the original ambition of MPLS has ballooned over the years. Originally solving the challenge of how to provide a deterministic and flexible forwarding methodology for layer-3 IP packets and replace ATM, it has achieved this objective exceptionally well. These days, however, it always seems to be assumed that some combination of Ethernet (PBB-TE) and/or MPLS-TE, and maybe even GMPLS, is the definitive, but highly complex, answer to creating that optimum, highly integrated NGN architecture that can be used to provide any service any customer might require.

Maybe it is worth considering a complementary approach that is highly focused on removing complexity. There is an interesting new field of innovation proposing that path-forwarding ‘intelligence’ and path bandwidth management are moved from layer-3, layer-2.5 and layer-2 back into layer-1, where they rightly belong. By adding additional capability to SDH, it is possible to reduce complexity in the layers above. In particular deployment scenarios this could have a number of major benefits, most of which result in significantly lower costs.

This raises an interesting point to ponder. While revenues still derive from traditional telecom-oriented voice services, the services and applications that are really beginning to dominate and consume most bandwidth are real-time interactive and streaming services such as IPTV, TV replays, video shorts, video conferencing, tele-presence, live event broadcasting, tele-medicine, remote monitoring etc. It could be argued that all these point-to-point and broadcast services could be delivered at lower cost and with less complexity using advanced SDH capabilities linked with Ethernet or IP / MPLS access. Is it worth thinking about bringing SDH back to the NGN strategic forefront, where it could deliver commercial and technical benefits?

To quote a colleague: “The datacom protocol stack of IP-over-Ethernet was designed for asynchronous file transfer, and Ethernet as a local area network packet-switching protocol, and these traditional datacom protocols do a fine job for those applications (i.e. for services that can tolerate uncertain delays, jitter and throughput, and/or limited-scope campus/LAN environments). IP-over-Ethernet was then assumed to become the basis protocol stack for NGNs in the early 2000s, due to the popularity of that basic datacom protocol stack for delivering the at-that-time prevailing services carried over Internet, which were mainly still file-transfer based non-real-time applications.”

SDH has really moved on since the days when it was only seen as a dumb transport layer. At least one company, Optimum Communications Services, offers an innovative vision whereby, instead of inter-node paths being static, as is the case with the other NGN technologies discussed in this post, the network is able to dynamically determine the required inter-node bandwidth based on a fast real-time assessment of traffic demands between nodes.


So has the wheel turned full circle?

As most carriers’ architectural and commercial strategies are wholly focused on IP with the Yellow Brick Road ending with the sun rising over a fully converged NGN, how much real willingness is there to listen to possible alternate or complementary innovative ideas?

In many ways the telecommunications industry could be considered to have returned to the closed-shutter mentality that dominated before IP took over in the late 1990s – I hope that this is not the case. There is no doubt that choosing to deploy IP / MPLS was a good decision, but a decision to deploy some of the derivative QoS and TE protocols is far from clear cut.

We need to keep our eyes and minds open, as innovation is alive and well and most often arises in small companies who are free to think the unthinkable. They might not always be right, but they are not always wrong either. Just cast your mind back to the high level of resistance encountered by IP in the 90s and let’s not repeat that mistake again. There is still much scope for innovation within the IP-based carrier network world and I suspect it has everything to do with simplifying networks, not complicating them further.

Addendum #1: Optimum Communications Services – finally a way out of the zero-sum game?


EBay paid too much for Skype

October 2, 2007

I don’t normally post news, but I couldn’t resist posting this as it is so close to my heart. Ever since the deal was done everyone has been asking whether Skype was worth what eBay paid.

The article was in the London Evening Standard today.

ONLINE auctioneer eBay today admitted it had paid too much for internet telephone service Skype in 2005.

EBay, which forked out $2.6 billion (£1.3 billion), will now take a $1.4 billion charge on the company as it fails to convert users into revenue.

Skype’s chief executive Niklas Zennström, one of Skype’s founders, will step down, but the company denies he is walking the plank.

EBay will pay some investors $530 million to settle future obligations under the disastrous Skype deal.

In a desperate bid to get the deal over the line in 2005, eBay promised an extra $1.7 billion to Skype investors if the unit met certain targets including number of users.

Now it is offering those shareholders $530 million as “an early, one-time payout”. The parent company will write down $900 million in the value of Skype.

Since eBay took over, Skype’s membership accounts have risen past 220 million, but it earned just $90 million during the second quarter of 2007, far below projections.

I wonder if this will cool some of the outrageous valuations being put on some of the social network services?


How to Be a Disruptor

September 11, 2007

An excellent article from Sandhill.com on running a software business along disruptive lines. Written by the CEO of MySQL, it looks like it takes a lot of traditional common sense!

These are the key issues he talks about:

Follow No Model
Get Rich Slow
Make Adoption Easy
Run a Distributed Workforce
Foster a Culture of Experimentation
Develop Openly
Leverage the Ecosystem
Make Everyone Listen to Customers
Run Sales as a Science
Fraternize with the Enemy

Take a read: How to Be a Disruptor


Business plans are overrated

July 29, 2007

There is more than an element of obvious insight in Paul Kedrosky‘s recent post:

“Business plans are overrated. … Why?

… Because VCs are professional nit-pickers. Give them something to find fault with, and they’ll do it with abandon. I generally tell people to come to pitch meetings with less information rather than more. Sure, you’ll get pressed for more, but finesse it.

Presenting a full and detailed plan is, nine times out of ten, a path to a ‘No’ — or at least more time-consuming than having said less.”

Paul Kedrosky, in the wake of the VC financing of Twitter, which has no business plan, no business model and no profits.


The Cloud hotspotting the planet

July 25, 2007

I first came across the embryonic idea behind The Cloud in 2001 when I first met its founder, George Polk. In those days George was the ‘Entrepreneur in Residence’ at iGabriel, an early-stage VC formed in the same year.

One of his first questions was “how can I make money from a Wi-Fi hotspot business?” I certainly didn’t claim that I knew at the time but sure as eggs is eggs I guess that George, his co-founder Niall Murphy and The Cloud team are world experts by now! George often talked about environmental issues but I was sorry to hear that he had stepped down from his CEO position (he’s still on the Board) to work on climate change issues.

The vision and business model behind The Cloud is based on the not unreasonable idea that we all now live in a connected world where we use multiple devices to access the Internet. We all know what these are: PCs, notebooks, mobile phones, PDAs and games consoles etc. etc. Moreover, we want to transparently use any transport bearer that is to hand to access the Internet, no matter where we are or what we are doing. This could be DSL in the home, a LAN in the office, GPRS on a mobile phone or a Wi-Fi hotspot.

The Cloud focuses on the creation and enablement of public Wi-Fi so that consumers and business people are able to connect to the Internet wherever they may be located when out and about.

One of the big issues with Wi-Fi hotspots back in the early years of the decade (and it still is, though less so these days) was that the Wi-Fi hotspot provision industry was highly fragmented, with virtually every public hotspot being managed by a different provider. When these providers wanted to monetise their activities, it seemed that you needed to set up a different account at each site you visited. This cast a big shadow over users and slowed down market growth considerably.

What was needed in the market place was Wi-Fi aggregators, or market consolidation, that would allow a roaming user to seamlessly access the Internet from lots of different hotspots without having to hold multiple accounts.

Meeting this need for always-on connectivity is where The Cloud is focused and their aim is to enable wide-scale availability of public Wi-Fi access through four principal methods:

  1. Direct deployment of hotspots: (a) in coffee shops, airports, public houses etc., in partnership with the owners of these assets; (b) in wide-area locations such as city centres, in partnership with local councils.
  2. Wi-Fi extensions of existing public fixed IP networks.
  3. Wi-Fi extension of existing private enterprise networks – “co-opting networks”
  4. Roaming relationships with other Wi-Fi operators and service providers, such as with iPass in 2006.

The Cloud’s vision is to stitch together all these assets and create a cohesive and ubiquitous Wi-Fi network to enable Internet access at any location using the most appropriate bearer available.

It’s The Cloud’s activities in 1(a) above that are getting much publicity at the moment, as back in April the company announced coverage of the City of London in partnership with the City of London Corporation. The map below shows the extent of the network.

Note: However, The Cloud will not have everything all to itself in London as a ‘free’ Thames-based WiFi network has just been launched (July 2007) by Meshhopper.

On July 18th 2007 The Cloud announced coverage of Manchester city centre as per the map below:

These network roll-outs are very ambitious and are some of the largest deployments of wide-area Wi-Fi technology in the world, so I was intrigued as to how this was achieved and what challenges were encountered during the roll-out.

Last week I talked with Niall Murphy, The Cloud’s Co-Founder and Chief Strategy Officer, to catch up with what they were up to and to find out what he could tell me about the architecture of these big Wi-Fi networks.

One of my first questions in respect of the city-centre networks was about in-building coverage as even high power GSM telephony has issues with this and Wi-Fi nodes are limited to a maximum power of 100mW.

I think I already knew the answer to this, but I wanted to see what The Cloud’s policy was. As I expected, Niall explained that “this is a challenge” and consideration of this need was not part of the objective of the deployments which are focused on providing coverage in “open public spaces“. This has to be right in my opinion as the limitation in power would make this an unachievable objective in practice.

Interestingly, Niall talked about The Cloud’s involvement in OFCOM‘s investigation to evaluate whether there would be any additional commercial benefit in allowing transmit powers greater than 100mW. However, The Cloud’s recommendation was not to increase power for two reasons:

  1. Higher power would create a higher level of interference over a wider area which would negate the benefits of additional power.
  2. Higher power would negatively impact battery life in devices.

In the end, if I remember correctly, the recommendation by OFCOM was to leave the power limits as they were.

I was interested in the architecture of the city-wide networks as I really did not know how they had gone about the challenge. I am pretty familiar with the concept of mesh networks as I tracked the path of one of the early UK pioneers of this technology, Radiant Networks. Unfortunately, Radiant went to the wall (Radiant Networks flogged) in 2004, for reasons I assume to be concerned with the use of highly complex, proprietary and expensive nodes (as shown on the left) and the use of the 26, 28 and 40GHz bands, which would severely impact economics due to small cell sizes.

Fortunately, Wi-Fi is nothing like those early proprietary approaches to mesh networks and the technology has come of age due to wide-scale global deployment. More importantly, this has also led to considerably lower equipment costs. The reason for this is that Wi-Fi uses the 2.4GHz ‘free’ band and most countries around the world have standardised on the use of this band, giving Wi-Fi equipment manufacturers access to a truly global market.

Anyway, getting back to The Cloud, Niall said that “the aims behind the City of London network was to provide ubiquitous coverage in public spaces to a level of 95% which we have achieved in practice“.

The network uses 127 nodes which are located on street lights, video surveillance poles or other street furniture owned by their partner, the City of London Corporation. Are 127 nodes enough, I ask? Niall’s answer was an emphatic “yes”, although “the 150 metre cell radius and 100mW power limitation of Wi-Fi definitely provides a significant challenge“.

Interestingly, Niall observed that deploying a network in the UK was much harder than in the US due to the lower permitted power levels in the 2.4GHz band compared with the USA. The Cloud’s experience has shown that a cell density two or three times greater is required in a UK city – comparing London to Philadelphia, for example. This raises a lot of interesting questions about hotspot economics!
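A back-of-the-envelope check (my own rough geometry and assumed figures, not The Cloud’s or BelAir’s planning method) shows how the node count and the density penalty hang together: ideal hexagonal coverage of the Square Mile with 150-metre cells needs around 50 nodes, so 127 is roughly the 2–3x real-world factor Niall describes.

```python
# Back-of-the-envelope check on the node count, using rough geometry and my
# own assumed figures -- not The Cloud's or BelAir's planning method.
import math

CITY_OF_LONDON_KM2 = 2.9          # the "Square Mile" is roughly 2.9 km^2
CELL_RADIUS_M = 150               # usable Wi-Fi cell radius quoted above

def ideal_node_count(area_km2, radius_m):
    """Nodes needed for ideal hexagonal coverage with no obstructions."""
    cell_km2 = (3 * math.sqrt(3) / 2) * (radius_m / 1000) ** 2
    return math.ceil(area_km2 / cell_km2)

ideal = ideal_node_count(CITY_OF_LONDON_KM2, CELL_RADIUS_M)
print(ideal)                      # ~50 nodes in a perfect, obstruction-free world
print(127 / ideal)                # ~2.5x -- in line with the density penalty described
```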

Much time was spent on hotspot planning and this was achieved in partnership with a Canadian company called Belair Networks. One of the interesting aspects of this activity was that there was “serious head scratching” by Belair as being a Canadian company they were used to nice neat square grids of streets and not the no-straight-line topology mess of London!

Data traffic from the 127 nodes that form The Cloud’s City of London network is back-hauled to seven 100Mbit/s fibre PoPs (Points of Presence) using 5.6GHz radio. Thus each node has two transceivers. The first is the Wi-Fi transceiver with a 2.4GHz antenna trained on the appropriate territory. The second is a 5.6GHz transceiver pointing to the next node, where the traffic daisy-chains back to the fibre PoP, effectively creating a true mesh network (incidentally, backhaul is one of the main uses of WiMax technology). I won’t talk about the strengths and weaknesses of mesh radio networks here but will write a post on this subject at a future date.

According to Niall, the tricky part of the build was to find appropriate sites for the nodes. You might think this was purely due to radio propagation issues, but there was also the issue that the physical assets they were using didn’t always turn out to be where they appeared to be on the maps! “We ended up arriving at the street lamp indicated on the map and it was not there!” This is the same as many carriers who also do not know where some of their switches are located or do not know how many customer leased lines they have in place.

Another interesting anecdote concerned the expectations of journalists at the launch of the network. “Because we were talking about ubiquitous coverage, many thought they could jump in a cab and watch Joost streaming video as they weaved their way around the city“. “Oh, it didn’t work then?” I say to Niall, expecting him to say that they were disappointed. “No,” he said, “it absolutely worked!”

Niall says the network is up and running and working according to their expectations: “There is still a lot of tuning and optimisation to do but we are comfortable with the performance.”

Incidentally, The Cloud owns the network and works with the Corporation of London as the landlord.

Round up

The Cloud has seemingly achieved a lot this year with the roll-out of the city centre networks and the sign-up of 6 to 7 thousand users in London alone. This was backed up by the launch of UltraWiFi, a flat-rate service costing £11.99 per month.

Incidentally, The Cloud do not see themselves as being in competition with cable companies or mobile operators, concentrating as they do on providing pure Wi-Fi access to individuals on the move – although in many ways they actually are.

They operate in the UK, Sweden, Denmark, Norway, Germany and The Netherlands. They’re also working with a wide array of service providers, including O2, Vodafone, Telenor, BT, iPass, Vonage and Nintendo, amongst others.

The big challenge ahead, as I’m sure they would acknowledge, is how they are going to ramp up revenues and take their business into the big time. I am confident that they are well able to rise to this challenge. All I know is that public Wi-Fi access is a crucial capability in this connected world and without it the Internet world will be a much less exciting and usable place.


iotum’s Talk-Now is now available!

April 4, 2007

In a previous post The magic of ‘presence’, I talked about the concept of presence in relation to telecommunications services and looked at different examples of how it had been implemented in various products.

One of the most interesting companies mentioned was iotum, a Canadian company. iotum had developed what they called a relevance engine, which enabled ‘ability to talk’ and ‘willingness to talk’ information to be provided within a telecom service by attaching it to appropriate equipment such as a Private Branch eXchange (PBX) or a call-centre Automatic Call Distribution (ACD) manager.

One of the biggest challenges for any company wanting to translate presence concepts into practical services is how to make them useable, rather than just a fancy concept used to describe a number of peripheral and often unusable features of a service. Alec Saunders, iotum’s founder, has been articulating his ideas about this in his blog Voice 2.0: A Manifesto for the Future. Like all companies that have their genesis in the IT and applications world, Alec believes that “Voice 2.0 is a user-centric view of the world… “it’s all about me” — my applications, my identity, my availability.”

And rather controversially, if you come from the network or the mobile industry: “Voice 2.0 is all about developers too — the companies that exploit the platform assets of identity, presence, and call control. It’s not about the network anymore.” Oh, by the way, just to declare my partisanship, I certainly go along with this view and often find that the stove-pipe and closed attitudes sometimes seen in mobile operators are one of the biggest hindrances to the growth of data-related applications on mobile phones.

There is always a significant technical and commercial challenge in OEMing platform-based services to service providers and large mobile operators, so the launch of a stand-alone service that is under the complete control of iotum is not a bad way to go. Any business should have full control of its own destiny, and the choice of the relatively open Blackberry platform gives iotum a user base they can clearly focus on to develop their ideas.

iotum launched the beta version of Talk-Now in January; it provides a set of features aimed at helping Blackberry users make better use of the device that the world has become addicted to over the last few years. Let’s talk turkey: what does the Talk-Now service do?

According to the web site, as seen in the picture on the left, it provides a simple-in-concept bolt-on service for Blackberry phone users to see and share their availability status with other users.

In use, Talk-Now interacts with a Blackberry user’s address book by adding colour coding to contact names to show each individual’s availability. On initial release only three colours were used: white, red and green.

Red and green clearly show when a contact is either Not Available or Available; I’ll talk about white in a minute. Yellow was added later, based on user feedback, to indicate an Interruptible status.

The idea behind Talk-Now is that it helps users reduce the amount of time they waste in non-productive calls and leaving voicemails. You may wonder how this availability guidance is provided by users. A contact with a white background provides the first indication of how this is achieved.

Contacts with a white background are not Talk-Now users, so their availability information is not available (!). One of the key features of the service is therefore an Invite People process to get them to use Talk-Now and see your availability information.

If you wish a non-Talk-Now contact to see your availability, you can select their name from the contact list and send them an “I want to talk with you” email. This email provides a link to an Availability Page as shown below, talks about the benefits of using the service (I assume) and asks the recipient to sign up. This is a secure page that is only available to that contact and for a short time only.

Once a contact accepts the invite and signs up to the service, you will be able to see their availability – assuming that they set up the service.

So, how do you indicate your availability? This is configured with a small menu, as shown on the left, which you can use to set your status information.

Busy: set your free/busy status manually from your BlackBerry device

In a meeting: iotum Talk-Now synchronizes with your BlackBerry calendar to know if you are in a meeting.

At night: define which hours you consider to be night time.

Blocked group: you can add contacts to the “blocked” group.

You can also set up VIPs (Very Important Persons) who are individuals who receive priority treatment. This category needs to be used with care. Granting VIP status to a group overrides the unavailability settings you have made. You can also define Workdays. Some groups might be VIPs during work hours, while other groups might get VIP status outside of work. This is designed to help you better manage your personal and business communications.
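The precedence between these settings can be summarised in a few lines of pseudo-logic. This is purely my own reading of how the features described above might combine — the function and field names are hypothetical, not iotum’s actual implementation.

```python
# Purely my own reading of how the Talk-Now settings described above might
# combine -- the function and field names are hypothetical, not iotum's API.
AVAILABLE, INTERRUPTIBLE, NOT_AVAILABLE = "green", "yellow", "red"

def availability(viewer, me):
    """Status shown to `viewer` in their contact list, given `me`'s settings."""
    if viewer in me["blocked"]:
        return NOT_AVAILABLE                 # blocked group always sees red
    if viewer in me["vips"]:
        return AVAILABLE                     # VIP status overrides unavailability settings
    if me["manual_busy"] or me["in_meeting"] or me["night_time"]:
        return NOT_AVAILABLE                 # manual flag, calendar sync or night hours
    if me["interruptible"]:
        return INTERRUPTIBLE
    return AVAILABLE

me = {"blocked": {"cold-caller"}, "vips": {"boss"}, "manual_busy": False,
      "in_meeting": True, "night_time": False, "interruptible": False}
print(availability("boss", me))        # green: VIP status overrides the meeting
print(availability("colleague", me))   # red: calendar sync says I am in a meeting
```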

There is also a feature whereby you can be alerted when a contact becomes available by a message being posted on your Blackberry as shown on the right.

 

Many of the above settings can be set up via a web page, for example:

Setting your working week

Setting contact groups

However, it should be remembered that, like Plaxo and LinkedIn, this web-based functionality does require you to upload – ‘synchronise’ – your Blackberry contact list to the iotum server, and many Blackberry users might object to this. It should also be noted that the calendar is accessed to determine when you are in meetings and deemed busy.

If you want to hear more, then take a look at the video that was posted after a visit with Alec Saunders and the team by Blackberry Cool last month:

Talk-Now looks to be an interesting and well thought out service. Following traditional Web 2.0 principles, the service is provided for free today with the hope that iotum will be able to charge for additional features at a future date.

I wish them luck in their endeavours and will be watching intently to see how they progress in coming months.


Panel Defies VC Wisdom

March 7, 2007

This video came across my desk recently from AlwaysOn and it is certainly entertaining and informative!

According to Guy Kawasaki, the panel’s moderator:

At the CommunityNext conference I moderated this panel with the founders of six very successful web properties: Akash Garg of hi5, Sean Suhl of Suicide Girls, Max Levchin of Slide, James Hong of HotorNot, Markus Frind of PlentyofFish, and Drew Curtis of Fark.

This is the most amusing panel that I’ve ever moderated, and the speakers defied many conventions of tech entrepreneurship—in particular the ones that venture capitalists believe are “proven.” If you’d like to learn how these companies became successful without “proven teams, proven technology, and proven business models,” you’ll love this video. Here’s a little factoid that blew my mind: Both Fark and PlentyofFish have only one employee!


What does an average US SVP of sales earn these days?

February 15, 2007

The following data came from an email from Taber Consulting. It’s a real pity that when you go to their web site you are forced to watch and listen to lift music in a 28-second Flash presentation before you can get to the home page, and there is no button to bypass it! I thought those days were long gone! It’s a pity ‘cos there is some excellent information on their web site.

I would not recommend using these US corporate salary levels in your average start-up business plan…

The data below are for public and private companies with a direct sales model and revenues below $250 M/yr. We’re showing the averages, but specifics vary widely for individual firms. Thanks to PhoneWorks for these data.

The average SVP or EVP of Sales has a base of $187K/yr and a commission structure of $170K/yr. To meet these average on-target earnings, they must achieve an overall quota of $31M/yr and manage a staff of 14 (~10 reps).

The average VP of Sales pulls in about 10% less in most of the numbers, yet carries about the same quota and has to work with a price point that’s lower. Seems as if the extra $$ for an SVP is due to the complexity of the sale (length of sales cycle/high price point) and the skills/seniority of the leader.

These numbers reflect a gradual downward trend over the last few years. Quotas have moderated and price points have trended lower, so commissions have decreased as well. Similar trends have occurred in individual rep pay.

As you’d expect, Inside Sales compensation is significantly lower:

  • Sales development – lead cultivation and appointment setting — has an average total package of about $130K for the manager and $80K for the rep.
  • Telesales reps – who actually close deals – average $165K for the manager and $110K for reps.
  • Inside Sales Senior Mgt average $206K/yr and have to manage a quota of $63 M and a team of 22. The quota numbers are higher because of the short sales cycle and higher transaction volume.

Are entrepreneurs born or made?

February 13, 2007

In the Sunday Times on Sunday 11th February, there was an interesting article entitled Are entrepreneurs born or made?, which talks about a theory put forward by Adrian Atkinson of Human Factors that entrepreneurs are born, not made – no matter how much effort the individual puts in. To quote, “This theory that anyone can become an entrepreneur is absolute nonsense”.

Contentious stuff indeed. This may be applicable for lifestyle start-ups where you are going it alone, but I think it’s a real stretch to apply this to a start-up where real team effort is required – and looked for from VCs.

Yes, of course a focused, driven, visionary individual is always required to drive things forward, but to use a simplistic test of whether you are prepared to mortgage your house and work seven days a week to decide whether you will make an entrepreneur is just not realistic – contributing characteristics, yes.

Unless you start a business before you get married, have children and sign up for a damn big mortgage just to get a roof over your head (or decide not to bother with all these trivialities of life). Or you start later in life when the children have grown up (you can’t say “and left home” these days, can you?).

Mind you, if you score low and believe what you are being told, then you are definitely not of an entrepreneurial bent. A real entrepreneur will shrug this off and get back to the job of making their venture successful no matter what – but this would mean you do have a good entrepreneurial attitude! You will only find out whether you are an entrepreneur or not BY DOING.

Anyway, if you want to find out whether you will be successful as an entrepreneur, you can spend 5 minutes completing the following questionnaire or go to the web site and spend some money – www.humanfactors.co.uk/pep

How to see if you have what it takes

For each of the following five groups of statements, choose the one that best describes what would be most important to you when starting your business.

Group 1
A Working with other like-minded individuals
B Making a big effort to get the company structure right
C Willing to work seven days a week
D Realising that technical excellence is the key to success

Group 2

E Getting some qualifications before starting your business
F Only starting the business with all the finance in place
G Keeping your existing job until your business is established
H Seeing work as relaxation

Group 3

I Making sure you have a social life as well
J Be willing to sell your house and car to start your business
K Taking your time to make all the important decisions
L Plan your exit strategy from the beginning

Group 4

M Not selling more than the company can deliver
N Making sure the product is perfect before getting sales
O Be willing to fire people who perform badly
P Developing business plans to make strategic decisions

Group 5

Q Always involving colleagues in decisions
R Only aiming for the highest quality
S Be willing to sacrifice family life to build the business
T Realising that all that matters in business is making money

SCORING
Score 4 points each if you chose C, H, J, O, S
Score 3 points each if you chose A, E, L, P, T
Score 2 points each if you chose B, F, K, M, Q
Score 1 point each if you chose D, G, I, N, R

To see what your score means:



The phenomenon of Ipsilon

February 8, 2007

Part 1: The demise of ATM

You don’t read this sort of review about a start-up too often do you?

“A year ago, they were the Beatles. When Ipsilon Networks, Inc. touched down in March 1996, it whipped the industry into a frenzy with its hooky melodies and breezy harmonies on IP switching. Ipsilon’s chart-busters included tunes on establishing cut-through IP routes over ATM, snubbing complex routing dogma from standards groups, and divorcing enterprise nets from the self-centered Cisco Systems, Inc.

The industry was transformed into a screaming, blubbering mess, tearing its locks out pleading for more, more IP switching. But that was yesterday. Today, Ipsilon is but a nostalgic footnote… ” This quote is from Nokia catches a falling Ipsilon by Jim Duffy, Network World, 12/9/97.

Phew! I first came across Ipsilon one wet winter’s morning in 1996, I think. I can’t quite remember which newspaper it was in, but there was this small 3″ column on the right-hand side of the page that talked about a revolution in networking – such a headline would always catch my eye!

I immediately added them to my list of companies that I wanted to visit the next time I was in Silicon Valley. I did, and what I heard changed many of my views. So just what was so seminal about Ipsilon that caused all the frenzy?

One of the key strengths of the Internet Protocol (IP) was that it was resilient. If a particular node or router had problems, packets would find an alternative path to their destination if there was one to be found. In fact, this was the core capability of the routing algorithms that were used to move IP packets around a network. OSPF (Open Shortest Path First) was the most common routing protocol developed for IP networks and, as the name suggests, it was based on finding and using the shortest path first.
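For the curious, the ‘shortest path first’ computation at the heart of OSPF is Dijkstra’s algorithm run over a topology database. The toy sketch below shows only the path computation over an assumed topology with made-up link costs; real OSPF builds that database by flooding link-state advertisements.

```python
# A toy illustration of the "shortest path first" computation at the heart of
# OSPF. Real OSPF floods link-state advertisements to build the topology
# database; here the graph and its link costs are simply assumed.
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm: returns (total_cost, [hops]) from src to dst."""
    queue = [(0, src, [src])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, link_cost in graph[node].items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + link_cost, neighbour, path + [neighbour]))
    return None

# Link costs for a small assumed topology (e.g. inversely related to bandwidth).
graph = {
    "A": {"B": 10, "C": 1},
    "B": {"A": 10, "D": 1},
    "C": {"A": 1, "D": 5},
    "D": {"B": 1, "C": 5},
}
print(shortest_path(graph, "A", "D"))   # (6, ['A', 'C', 'D']) -- not the fewest hops
```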

However, this algorithmic approach, although it had many benefits, was flawed in a particular area that remains with us even today. The flaw was that it was possible for individual packets in a particular packet stream to take different routes between the source and the destination depending on congestion. For downloads based on TCP/IP this did not particularly matter, as the packets could be reconstituted into the correct sequence no matter in which order they were received.

BUT, this causes a major problem for real-time services such as voice over IP (VoIP) or interactive video conferencing, where latency beyond, say, 200ms causes a noticeable hesitation which significantly affects the quality of conversations. Having to wait around for all the packets to arrive and reorder them could add significant latency. We have all heard this effect when using services such as Skype.

Another issue behind Ipsilon was that IP routers at the time were very expensive. As ATM was seen as the next dominant technology, every man and his dog were producing ATM switches. In reality, ATM was not seeing the enterprise uptake expected by the vendor industry, price wars broke out and ATM switches became available at reasonable costs.

The light that went on in the Ipsilon founders’ brains was that if they could embed enhanced routing software on generic ATM switch hardware, they would have a competitively costed router that would use ATM functionality to provide what they called cut-through routing, or IP switching, as shown below in one of Ipsilon’s presentations.

This involved taking off-the-shelf ATM switches, throwing all the ATM software away and replacing it with enhanced routing software. What this meant in practice was that the only bit of ATM that remained was the physical switch fabric and the ability to set up a virtual connection from one end of the network to the other, so that data could be transmitted in one hop only.

 

Ipsilon’s vision for ‘cut through’ routing.

To quote Mary Petrosky from 1998:

“Although the ATM Forum had been working on a short-cut routing solution called Multiprotocol Over ATM for some time, Ipsilon Networks catalyzed the industry in the spring of 1996 with the announcement of its scheme, dubbed IP switching. By exploiting ATM switching hardware with a new set of IP-oriented protocols, Ipsilon promised to deliver millions of packets-per-second throughput, compared to the forwarding rate of hundreds of thousands of packets per second supported by the current generation of routers.”

This was really magic stuff and went right against the core Internet philosophy of the time, which focused on what was known as connectionless routing, i.e. a packet would be dumped into the network and it would follow whatever path it could to reach its destination. Engineers who supported this approach were colloquially called ‘netheads‘.

Exponents of cut-through routing, on the other hand, followed the routing principles of the Public Switched Telephone Network (PSTN). Here, a path between network (domain) ingress and egress was set up prior to sending the packets into the network, which is called deterministic routing. Of course, exponents of this approach were inevitably called ‘bellheads’. It sometimes felt that there was outright war between these two factions! In fact both were right in their own way.

Deterministic routing is crucial to obtaining sufficient Quality of Service for real-time services, and the insight provided by Ipsilon turned the industry on its head. One of the early pre-MPLS IETF standards, Transmission of Flow Labelled IPv4 on ATM networks, drafted by Ipsilon, makes an interesting read.
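The essence of the flow-labelling idea is easy to sketch: watch the traffic, and once a flow looks long-lived, cut it through onto its own ATM virtual circuit while short-lived traffic continues to be routed hop by hop. The threshold and data structures below are illustrative assumptions, not the actual IFMP behaviour.

```python
# A sketch of the flow-classification idea behind Ipsilon's IP switching:
# long-lived flows are cut through onto a dedicated ATM virtual circuit, while
# short-lived traffic keeps being routed hop by hop. The packet threshold and
# data structures are illustrative assumptions, not the actual IFMP behaviour.

FLOW_THRESHOLD = 5                      # packets seen before a flow counts as "long-lived"

class IpSwitch:
    def __init__(self):
        self.flow_counts = {}           # (src, dst, proto, dport) -> packets seen so far
        self.cut_through = {}           # flow -> assigned ATM VC identifier
        self.next_vc = 100

    def forward(self, packet):
        flow = (packet["src"], packet["dst"], packet["proto"], packet["dport"])
        if flow in self.cut_through:
            return f"switch on VC {self.cut_through[flow]}"      # one hop, no routing lookup
        self.flow_counts[flow] = self.flow_counts.get(flow, 0) + 1
        if self.flow_counts[flow] >= FLOW_THRESHOLD:
            self.cut_through[flow] = self.next_vc                # set up the cut-through VC
            self.next_vc += 1
        return "route hop by hop"                                # default connectionless path

switch = IpSwitch()
pkt = {"src": "10.0.0.1", "dst": "10.0.0.2", "proto": "TCP", "dport": 80}
print([switch.forward(pkt) for _ in range(7)])   # first packets routed, the rest switched
```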

Although I do not know (more likely can’t remember!) all the industry inside stories, it was clear that this concept was not foreseen by Cisco or any of the other big equipment vendors of the time.

You can read the full story of Ipsilon in the article linked above, but sadly Ipsilon failed as a company due to insufficient sales and was sold to Nokia in 1997. To quote Nokia’s press release:

“Nokia will acquire Ipsilon Networks, Inc., on December 9th, 1997, a data communications company based in Sunnyvale, California, US, for approximately USD 120 million, subject to regulatory approval expected by end of the year…Ipsilon Networks is a leading innovator in the development of open Internet Protocol (IP) routing platforms.”

To quote David Passmore:

“Ipsilon also discovered that shortly after turning everyone on to IP switching, Cisco froze the market with Tag Switching and Multi-protocol Label Switching.”

“It’s obvious that Ipsilon basically failed in their mission to establish IP switching. Quite frankly, the whole cut-through routing technique embodied by IP switching, 3Com FastIP and Cabletron’s SecureFast doesn’t make sense anymore in this era of gigabit, wire-speed routers.”

Once the cut-through routing idea hit the market, all the major IP equipment vendors jumped on the bandwagon. In particular, Cisco announced proprietary Tag Switching (which was rumoured to be an Ipsilon killer and, if so, succeeded in that ambition), which eventually morphed into the IETF’s Multiprotocol Label Switching (MPLS) standards still in use today.

Ipsilon was a seminal start-up in networking that transformed the industry and had a unique approach to market. It had so much going for it, but maybe being #1 with an idea is not always a good position to be in – especially when your competitor is a behemoth such as Cisco! My view is that they were just well ahead of their time, and trying to re-educate the world is just too much of a challenge when all their backers wanted was sales.

To finish, Dave Passmore made an extremely pertinent point back in 1997 that could be just as applicable today: “[IP switching] doesn’t make sense anymore in this era of gigabit, wire-speed routers.” More about this later.

Next: The rise and maturity of MPLS

