The Cloud hotspotting the planet

July 25, 2007

I first came across the embryonic idea behind The Cloud in 2001 when I met its Founder, George Polk. In those days George was the ‘Entrepreneur in Residence’ at iGabriel, an early stage VC formed in the same year.

One of his first questions was “how can I make money from a Wi-Fi hotspot business?” I certainly didn’t claim that I knew at the time but sure as eggs is eggs I guess that George, his co-founder Niall Murphy and The Cloud team are world experts by now! George often talked about environmental issues but I was sorry to hear that he had stepped down from his CEO position (he’s still on the Board) to work on climate change issues.

The vision and business model behind The Cloud is based on the not unreasonable idea that we all now live in a connected world where we use multiple devices to access the Internet. We all know what these are: PCs, notebooks, mobile phones, PDAs and games consoles etc. etc. Moreover, we want to transparently use any transport bearer that is to hand to access the Internet, no matter where we are or what we are doing. This could be DSL in the home, a LAN in the office, GPRS on a mobile phone or a Wi-Fi hotspot.

The Cloud focuses on the creation and enablement of public Wi-Fi so that consumers and business people are able to connect to the Internet wherever they may be located when out and about.

One of the big issues with Wi-Fi hotspots back in the early years of the decade (and it still is an issue, though less so these days) was that the Wi-Fi hotspot provision industry was highly fragmented, with virtually every public hotspot being managed by a different provider. When these providers wanted to monetise their activities, it seemed that you needed to set up a different account at each site you visited. This cast a big shadow over users and slowed down market growth considerably.

What was needed in the market place was Wi-Fi aggregators or market consolidation that would allow a roaming user to seamlessly access the Internet from lots of different hotspots without having to maintain multiple accounts.

Meeting this need for always-on connectivity is where The Cloud is focused, and their aim is to enable wide-scale availability of public Wi-Fi access through four principal methods:

  1. Direct deployment of hotspots: (a) in coffee shops, airports, public houses etc. in partnership with the owners of these assets; (b) in wide-area locations such as city centres in partnership with local councils.
  2. Wi-Fi extensions of existing public fixed IP networks.
  3. Wi-Fi extension of existing private enterprise networks – “co-opting networks”
  4. Roaming relationships with other Wi-Fi operators and service providers, such as with iPass in 2006.

The Cloud’s vision is to stitch together all these assets and create a cohesive and ubiquitous Wi-Fi network to enable Internet access at any location using the most appropriate bearer available.

It’s The Cloud’s activities in 1(a) above that are getting much publicity at the moment, as back in April the company announced coverage of the City of London in partnership with the City of London Corporation. The map below shows the extent of the network.

Note: However, The Cloud will not have everything all to itself in London, as a ‘free’ Thames-based Wi-Fi network has just been launched (July 2007) by Meshhopper.

On July 18th 2007 The Cloud announced coverage of Manchester city centre as per the map below:

These network roll-outs are very ambitious and are some of the largest deployments of wide-area Wi-Fi technology in the world, so I was intrigued as to how this was achieved and what challenges were encountered during the roll-out.

Last week I talked with Niall Murphy, The Cloud’s Co-Founder and Chief Strategy Officer, to catch up with what they were up to and to find out what he could tell me about the architecture of these big Wi-Fi networks.

One of my first questions in respect of the city-centre networks was about in-building coverage as even high power GSM telephony has issues with this and Wi-Fi nodes are limited to a maximum power of 100mW.

I think I already knew the answer to this, but I wanted to see what The Cloud’s policy was. As I expected, Niall explained that “this is a challenge” and that in-building coverage was not part of the objective of the deployments, which are focused on providing coverage in “open public spaces“. This has to be right in my opinion, as the limitation in power would make in-building coverage an unachievable objective in practice.
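
To put some rough numbers on that, here is a back-of-the-envelope link budget in Python. The receiver sensitivity and per-wall penetration loss figures are my own assumptions, not anything quoted by The Cloud, but they illustrate why a 100mW node that comfortably reaches the street struggles once a couple of walls get in the way.

```python
import math

def fspl_db(distance_m, freq_mhz):
    """Free-space path loss in dB for a distance in metres and a frequency in MHz."""
    return 20 * math.log10(distance_m / 1000.0) + 20 * math.log10(freq_mhz) + 32.44

tx_power_dbm = 20          # the 100 mW (20 dBm) limit in the 2.4 GHz band
rx_sensitivity_dbm = -85   # assumed typical 802.11b/g receiver sensitivity
wall_loss_db = 12          # assumed penetration loss per external wall or floor

loss_at_edge = fspl_db(150, 2400)                      # at the edge of a 150 m cell
outdoor_margin = tx_power_dbm - loss_at_edge - rx_sensitivity_dbm
indoor_margin = outdoor_margin - 2 * wall_loss_db      # two walls between node and user

print(f"Free-space loss at 150 m: {loss_at_edge:.1f} dB")
print(f"Outdoor link margin: {outdoor_margin:.1f} dB")
print(f"Indoor link margin (through two walls): {indoor_margin:.1f} dB")
```

Roughly 20dB of outdoor margin all but disappears once building penetration is added, which is why “open public spaces” is the sensible target.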

Interestingly, Niall talked about The Cloud’s involvement in OFCOM’s investigation to evaluate whether there would be any additional commercial benefit in allowing transmit powers greater than 100mW. However, The Cloud’s recommendation was not to increase power, for two reasons:

  1. Higher power would create a higher level of interference over a wider area which would negate the benefits of additional power.
  2. Higher power would negatively impact battery life in devices.

In the end, if I remember correctly, the recommendation by OFCOM was to leave the power limits as they were.

I was interested in the architecture of the city-wide networks as I really did not know how they had gone about the challenge. I am pretty familiar with the concept of mesh networks as I tracked the path of one of the early UK pioneers of this technology, Radiant Networks. Unfortunately, Radiant went to the wall in 2004 (see Radiant Networks flogged), for reasons I assume to be concerned with the use of highly complex, proprietary and expensive nodes (as shown on the left) and the use of the 26, 28 and 40GHz bands, which would severely impact the economics due to small cell sizes.

Fortunately, Wi-Fi is nothing like those early proprietary approaches to mesh networks and the technology has come of age due to wide-scale global deployment. More importantly, this has also led to considerably lower equipment costs. The reason for this is that Wi-Fi uses the 2.4GHz ‘free band’, and most countries around the world have standardised on the use of this band, giving Wi-Fi equipment manufacturers access to a truly global market.

Anyway, getting back to The Cloud, Niall said that “the aims behind the City of London network was to provide ubiquitous coverage in public spaces to a level of 95% which we have achieved in practice“.

The network uses 127 nodes, which are located on street lights, video surveillance poles or other street furniture owned by their partner, the City of London Corporation. Are 127 nodes enough, I asked? Niall’s answer was an emphatic “yes”, although “the 150 metre cell radius and 100mW power limitation of Wi-Fi definitely provides a significant challenge“.

Interestingly, Niall observed that deploying a network in the UK was much harder than in the US, because the permitted power levels in the 2.4GHz band are lower than in the USA. The Cloud’s experience has shown that a cell density two or three times greater is required in a UK city – comparing London to Philadelphia, for example. This raises a lot of interesting questions about hotspot economics!
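
A crude coverage estimate shows why the node count climbs so quickly. The area figure for the Square Mile and the ‘effective’ radius once street clutter is allowed for are my own approximations, not The Cloud’s planning numbers:

```python
import math

area_km2 = 2.9             # approximate area of the City of London (the "Square Mile")
nominal_radius_m = 150     # nominal Wi-Fi cell radius quoted for the deployment
effective_radius_m = 75    # assumed usable radius once buildings and clutter take their toll

def nodes_needed(area_km2, radius_m):
    """Number of circular cells of the given radius needed to tile the area."""
    cell_area_km2 = math.pi * (radius_m / 1000.0) ** 2
    return math.ceil(area_km2 / cell_area_km2)

print("Ideal free-space estimate:", nodes_needed(area_km2, nominal_radius_m), "nodes")
print("With the radius halved by clutter:", nodes_needed(area_km2, effective_radius_m), "nodes")
```

The 127 nodes actually deployed sit between the two estimates, which is consistent with Niall’s point about needing far greater density in a UK city than the free-space numbers suggest.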

Much time was spent on hotspot planning and this was achieved in partnership with a Canadian company called Belair Networks. One of the interesting aspects of this activity was that there was “serious head scratching” by Belair as being a Canadian company they were used to nice neat square grids of streets and not the no-straight-line topology mess of London!

Data traffic from the 127 nodes that form The Cloud’s City of London network is back-hauled to seven 100Mbit/s fibre PoPs (Points of Presence) using 5.6GHz radio. Thus each node has two transceivers. The first is the Wi-Fi transceiver, with a 2.4GHz antenna trained on the appropriate territory. The second is a 5.6GHz transceiver pointing to the next node, from where the traffic daisy-chains back to the fibre PoP, effectively creating a true mesh network (incidentally, backhaul is one of the main uses of WiMax technology). I won’t talk about the strengths and weaknesses of mesh radio networks here but will write a post on this subject at a future date.
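
Out of curiosity, here is a quick sketch of what that topology implies for backhaul sharing, assuming nodes are spread evenly across the PoPs and contend equally for the fibre (both assumptions of mine rather than figures from The Cloud):

```python
nodes = 127
pops = 7
pop_capacity_mbps = 100    # each fibre PoP is quoted at 100 Mbit/s

nodes_per_pop = nodes / pops
share_per_node_mbps = pop_capacity_mbps / nodes_per_pop

print(f"Average nodes per PoP: {nodes_per_pop:.1f}")
print(f"Fair share of fibre backhaul per node: {share_per_node_mbps:.1f} Mbit/s")
# Roughly 18 nodes per PoP, so about 5-6 Mbit/s each if every node is busy at once -
# fine for browsing, but it shows why the daisy-chained backhaul needs careful tuning.
```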

According to Niall, the tricky part of the build was finding appropriate sites for the nodes. You might think this was purely due to radio propagation issues, but there was also the issue that the physical assets they were using didn’t always turn out to be where they appeared to be on the maps! “We ended up arriving at the street lamp indicated on the map and it was not there!” This is much the same as the many carriers who do not know where some of their switches are located or how many customer leased lines they have in place.

Another interesting anecdote concerned the expectations of journalists at the launch of the network. “Because we were talking about ubiquitous coverage, many thought they could jump in a cab and watch Joost streaming video as they weaved their way around the city.“ Oh, it didn’t work then? I said to Niall, expecting him to say that they were disappointed. “No,” he said, “it absolutely worked!“

Niall says the network is up and running and working according to their expectations: “There is still a lot of tuning and optimisation to do but we are comfortable with the performance.“

Incidentally, The Cloud owns the network and works with the Corporation of London as the landlord.

Round up

The Cloud has seemingly achieved a lot this year with the roll-out of the city centre networks and the sign-up of six to seven thousand users in London alone. This was backed up by the launch of UltraWiFi, a flat-rate service costing £11.99 per month.

Incidentally, The Cloud do not see themselves as being in competition with cable companies or mobile operators, concentrating as they do on providing pure Wi-Fi access to individuals on the move – although in many ways their service actually does compete.

They operate in the UK, Sweden, Denmark, Norway, Germany and The Netherlands. They’re also working with a wide array of service providers, including O2, Vodafone, Telenor, BT, iPass, Vonage and Nintendo, amongst others.

The big challenge ahead, as I’m sure they would acknowledge, is how they are going to ramp up revenues and take their business into the big time. I am confident that they are well able to rise to this challenge. All I know is that public Wi-Fi access is a crucial capability in this connected world and, without it, the Internet world will be a much less exciting and usable place.


iotum’s Talk-Now is now available!

April 4, 2007

In a previous post The magic of ‘presence’, I talked about the concept of presence in relation to telecommunications services and looked at different examples of how it had been implemented in various products.

One of the most interesting companies mentioned was iotum, a Canadian company. iotum had developed what they called a relevance engine, which enabled ‘ability to talk’ and ‘willingness to talk’ information to be provided within a telecom service by attaching it to appropriate equipment such as a Private Branch eXchange (PBX) or a call centre Automatic Call Distribution (ACD) manager.

One of the biggest challenges for any company wanting to translate presence concepts into practical services is how to make them useable, rather than just a fancy concept used to describe a number of peripheral and often unusable features of a service. Alec Saunders, iotum’s founder, has been articulating his ideas about this in his blog Voice 2.0: A Manifesto for the Future. Like all companies that have their genesis in the IT and applications world, Alec believes that “Voice 2.0 is a user-centric view of the world… it’s all about me — my applications, my identity, my availability.”

And rather controversially, if you come from the network or the mobile industry: “Voice 2.0 is all about developers too — the companies that exploit the platform assets of identity, presence, and call control. It’s not about the network anymore.” Oh, by the way, just to declare my partisanship, I certainly go along with this view and often find that the stove-pipe and closed attitudes sometimes seen in mobile operators are one of the biggest hindrances to the growth of data-related applications on mobile phones.

There is always a significant technical and commercial challenge in OEMing platform-based services to service providers and large mobile operators, so the launch of a stand-alone service that is under the complete control of iotum is not a bad way to go. Any business should have full control of its own destiny, and the choice of the relatively open BlackBerry platform gives iotum a user base they can clearly focus on to develop their ideas.

iotum launched the beta version of Talk-Now in January; it provides a set of features that are aimed at helping BlackBerry users make better use of the device that the world has become addicted to over the last few years. Let’s talk turkey: what does the Talk-Now service do?

According to the web site, as seen in the picture on the left, it provides a simple-in-concept bolt-on service that lets BlackBerry phone users see and share their availability status with other users.

At the in-use end of the service, Talk-Now interacts with a BlackBerry user’s address book by adding colour coding to contact names to show each individual’s availability. On initial release only three colours were used: white, red and green.

Red and green clearly show when a contact is either Not-Available or Available; I’ll talk about white in a minute. Yellow was added later, based on user feedback, to indicate an Interruptible status.

The idea behind Talk-Now is that it helps users reduce the amount of time they waste in non-productive calls and leaving voicemails. You may wonder how this availability guidance is provided by users. A contact with a white background provides the first indication of how this is achieved.

Contacts with a white background are not Talk-Now users, so their availability information is not available (!). One of the key features of the service is therefore an Invite People process to get them to use Talk-Now and see your availability information.

If you wish a non-Talk-Now contact to see your availability, you can select their name from the contact list and send them an “I want to talk with you” email. This email provides a link to an Availability Page, as shown below, explains the benefits of using the service (I assume) and asks the contact to sign up. The page is secure, is only available to that contact and only for a short time.

Once a contact accepts the invite and signs up to the service, you will be able to see their availability – assuming that they set up the service.

So, how do you indicate your availability? This is done with a small menu, as shown on the left, which lets you set the following status information.

Busy: set your free/busy status manually from your BlackBerry device

In a meeting: iotum Talk-Now synchronizes with your BlackBerry calendar to know if you are in a meeting.

At night: define which hours you consider to be night time.

Blocked group: you can add contacts to the “blocked” group.

You can also set up VIPs (Very Important Persons), individuals who receive priority treatment. This category needs to be used with care: granting VIP status to a group overrides the unavailability settings you have made. You can also define Workdays, so some groups might be VIPs during work hours, while other groups might get VIP status outside of work. This is designed to help you better manage your personal and business communications.
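
iotum has not published the precedence of these rules, but it is easy to imagine them combining along the following lines. The function and field names below are entirely my own invention, purely to illustrate the idea:

```python
from datetime import datetime, time

def availability(viewer, now, manual_status=None, in_meeting=False,
                 night_start=time(22, 0), night_end=time(7, 0),
                 blocked=frozenset(), vips=frozenset()):
    """Hypothetical sketch of how Talk-Now-style rules might be combined."""
    if viewer in blocked:
        return "not available"             # blocked groups never see you as free
    if viewer in vips:
        return "available"                 # VIP status overrides unavailability settings
    if manual_status:                      # free/busy set by hand on the BlackBerry
        return manual_status
    t = now.time()
    if t >= night_start or t < night_end:  # user-defined night-time hours
        return "not available"
    if in_meeting:                         # taken from the BlackBerry calendar
        return "not available"
    return "available"

# A meeting at 14:30, viewed by an ordinary (non-VIP, non-blocked) contact:
print(availability("alice", datetime(2007, 4, 4, 14, 30), in_meeting=True))
```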

There is also a feature whereby you can be alerted when a contact becomes available by a message being posted on your Blackberry as shown on the right.

 

Many of the above settings can be set up via a web page, for example:

Setting your working week

Setting contact groups

However, it should be remembered that, like Plaxo and LinkedIn, this web-based functionality does require you to upload – ‘synchronise’ – your BlackBerry contact list to the iotum server, and many BlackBerry users might object to this. Note also that the calendar is accessed to determine when you are in meetings and deemed busy.

If you want to hear more, then take a look at the video that was posted after a visit with Alec Saunders and the team by Blackberry Cool last month:

Talk-Now looks to be an interesting and well thought out service. Following traditional Web 2.0 principles, the service is provided for free today with the hope that iotum will be able to charge for additional features at a future date.

I wish them luck in their endeavours and will be watching intensely to see how they progress in coming months.


GSM pico-cell’s moment of fame

March 21, 2007

Back in May 2006, the DECT–GSM guard bands (1781.7–1785 MHz and 1876.7–1880 MHz), originally set up to protect cordless phones from interference by GSM mobiles, were made available to a number of licensees. As is the fashion, these allocations were offered to industry by holding an auction with a reserve price of £50,000 per licence. In fact, it was Ofcom’s first auction and I guess they were happy with the results, although I would not have liked to have been in charge of Colt’s bidding team when it came to report to their Board after the auction!

For those of a technical bent, you can see Ofcom’s technical study here in a document entitled Interference scenarios, coordination between licensees and power limits, and there is also a good overview of cellular networks on Wikipedia.

The most important restriction on the use of this spectrum was outlined by the study:

This analysis confirms that a low power system based on GSM pico cells operating at the 23dBm power (200mW) level can provide coverage in an example multi-storey office scenario. Two pico cells per floor would meet the coverage requirements in the example 50m × 120m office building. For a population of 300 people per floor, the two pico cells would also meet the traffic demand.
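
For the technically curious, the guard band itself is tiny, and the study’s traffic claim is easy to sanity-check with a standard Erlang-B calculation. The per-user busy-hour traffic and carriers-per-cell figures below are my own assumptions, not Ofcom’s:

```python
guard_band_khz = 1785_000 - 1781_700   # 3.3 MHz of paired spectrum
gsm_carrier_khz = 200                  # GSM channel spacing
print("GSM carriers that fit in the guard band:", guard_band_khz // gsm_carrier_khz)

def erlang_b(offered_erlangs, channels):
    """Blocking probability for the given offered traffic on n channels (Erlang-B)."""
    b = 1.0
    for n in range(1, channels + 1):
        b = offered_erlangs * b / (n + offered_erlangs * b)
    return b

users_per_floor = 300
merlang_per_user = 25                  # assumed 25 mE per user in the busy hour
offered_per_cell = users_per_floor * merlang_per_user / 1000 / 2   # two pico cells per floor
tch_per_cell = 14                      # assumed 2 carriers per cell, ~7 traffic channels each
print(f"Offered traffic per cell: {offered_per_cell:.2f} E, "
      f"blocking: {erlang_b(offered_per_cell, tch_per_cell):.2%}")
```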

It was all done using sealed bids so there was inevitably a wide spectrum (sorry for the pun!) of responses ranging from just over £50,000 to the highest, Colt, who bid £1,513,218. The 12 companies winning licenses were:

British Telecommunications £275,112
Cable & Wireless £51,002
COLT Mobile Telecommunications £1,513,218
Cyberpress Ltd £151,999
FMS Solutions Ltd £113,000
Mapesbury Communications £76,660
O2 £209,888
Opal Telecom £155,555
PLDT £88,889
Shyam Telecom UK £101,011
Spring Mobil £50,110
Teleware £1,001,880

One company that focuses on the supply of GSM and 3G picocells is IP Access, based in Cambridge. Their nanoBTS base station can be deployed in buildings, shopping centres, transport terminals, homes, underground stations and rural and remote locations; in fact, almost anywhere – according to their web site.

Many of the bigger suppliers of GSM and 3G equipment manufacture pico cell platforms as well, Nortel for example.

Of course, even though a pico cell base station (BTS) is lower cost than a standard base station, that is not the end of the costs. A pico cell operator still needs to install a base station controller (BSC), which can control a number of base stations, plus a home location register (HLR), which stores the current state of each mobile phone in a database. If the network needs to support roaming customers, a visitor location register (VLR) is also required. On top of this, interconnect equipment to other mobile operators is needed. All this does not come cheap!

Pico GSM cells are low power versions of their big brothers and are usually associated with a lower-cost backhaul technology based on IP in place of traditional point to point microwave links. GSM Pico cell technology can be used in a number of application scenarios.

In-building use as the basis of ‘seamless’ fixed-mobile voice services. The use of mobile phones as a replacement for fixed telephones has always been a key ambition for mobile operators. But, as we all know, in-building coverage by cellular operators is often not too good, leading to the necessity of taking calls near windows or on balconies. The installation of an in-building pico-cell is one way of providing this coverage and comes under the heading of fixed-mobile integration. One challenge in this scenario is the possible need to manually swap SIM cards when entering or exiting the building if a different operator is used inside the building to the one outside. Of course, nobody would be willing to do this physically, so a whole industry has been born to support dual SIM cards that can be selected from a menu option.

From a usability and interoperability perspective, fixed-mobile integration still represents a major industry challenge. Not the least of the problems is that a swap from one operator to another could trigger roaming charges. This is probably an application area that only the bigger licence winners will participate in.

On ships using satellite backhaul: This has always been an obvious application for pico cells, especially for cruise ships.

On aeroplanes: In-cabin use of mobile phones is much more contentious than use on ships and I could write a complete post on this particular subject! But, I guess this is inevitable no matter how irritating it would be to fellow passengers – no bias here! e.g. OnAir with their agreement with Airbus.

Overseas network extensions: I was interested in finding out how some of the winners of the OFCOM auction were getting on now that they held some prime spectrum in the UK so I talked with Magnus Kelly, MD at Mapesbury (MCom). I’m sure they were happy with what they paid as they were at the ‘right end’ of the price spectrum.

Mapesbury are a relatively small service provider set up in 2002 to offer data, voice and wireless connectivity services to local communities. In 2003, MCom acquired the assets of Forecourt Television Limited, also known as FTV. FTV had a network of advertising screens on a selection of petrol station forecourts, among them Texaco. This was when I first met Magnus. Using this infrastructure, they later signed a contract with T-Mobile UK to offer a Wi-Fi service in selected Texaco service stations across the UK.

More recently, they opened their first IP.District, providing complete Wi-Fi coverage over Watford, Herts using 5.8GHz spectrum. MCom has had pilot users testing the service for the last 12 months.

Magnus was quite ebullient about how things were going on the pico-cell front although there were a few sighs when talking about organising and negotiating the allocation of the required number blocks and point codes necessitated by the trials that they have been running.

He emphasised that the technology seemed to work well and that the issues they now had were the same as for any company in these circumstances: creating a business that makes money. They have looked at a number of applications and decided that fixed-mobile integration is probably best left to the major mobile operators.

They are enamoured by the opportunities presented by what are called overseas network extensions. In essence, this means creating what can be envisioned as ‘bubble’ extensions of non-UK mobile networks in the UK. The traffic generated in these extensions can then be backhauled using low-cost IP pipes. The core value proposition is low-cost mobile telephone calls aimed at dense local clusters of people using mobile phones. For example, these could be clusters of UK immigrants who would like the ability to make low-cost calls from their mobile phones back to their home countries. Clearly, in these circumstances, these pico-cell GSM bubbles would be focused on selected city suburbs, following the same subscriber density logic that drives Wi-Fi cell deployment.

In the large mobile operator camp, O2 announced in November 2006 that it will offer indoor, low-power GSM base stations connected via broadband as part of its fixed-mobile convergence (FMC) strategy, an approach that will let customers use standard mobile phones in the office. I’ll be writing a post about FMC in the future.

Although it is early days for GSM pico cell deployment in the UK, it looks like it could have a healthy future, although this should not be taken for granted. There are a host of technical, commercial, regulatory and political challenges to the seamless use of phones inside and outside buildings. There are also other technology solutions – IT-based rather than wireless – for reducing mobile phone bills. An example of such an innovative approach is supplied by OnRelay.


The insistent beat of Netronome!

March 15, 2007

Last week I popped in to visit Netronome at their Cambridge office and was hosted by David Wells, their VP Technology and GM Europe, who was one of the Founders of the company. The other two Founders were Niel Viljoen and Johann Tönsing, who previously worked for companies such as FORE Systems (bought by Marconi), Nemesys, Tellabs and Marconi. Netronome is HQed in Pittsburgh but has offices in Cambridge, UK and South Africa.

I mentioned Netronome in a previous post about network processors – The intrigue of network / packet processors so I wanted to bring myself up to date with what they were up to following their closing of a $20M ‘C’ funding round in November 2006 led by 3i.

What do Netronome do?

Netronome manufacture network processor based hardware and software that enables the development of applications that need to undertake real-time network content flow analysis. Or to be more accurate, enable significant acceleration and throughput for applications that need to undertake packet inspection or maybe deep packet inspection.

I say ‘to be more accurate’ because it is possible to monitor packets in a network without the use of network processors, using a low-cost Windows or Linux based computer; but if the data is flowing through a port at gigabit rates – which is most likely these days – then there is little capability to react to a detected traffic type other than switching the flow to another port or simply blocking it. If you really want to detect particular traffic types in a gigabit packet flow, make an action decision and change some of the data bits in the header or body, all transparently and at full line speed, then you will undoubtedly need a network processor based card from a company like Netronome. The Intel-powered, 16-micro-engine network processor used in Netronome’s products enables the inspection of upwards of 1 million simultaneous bidirectional flows.
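
To make ‘flow analysis’ a little more concrete: whether it is done in software on a commodity box or offloaded to an NP, the basic bookkeeping is a table keyed on the classic 5-tuple. A minimal illustrative sketch (this has nothing to do with Netronome’s actual implementation):

```python
from collections import defaultdict

# Per-flow packet and byte counters keyed on the 5-tuple. A Python dict like this
# is fine at modest rates; at gigabit line rate with a million concurrent flows,
# this is exactly the bookkeeping that gets pushed down onto the network processor.
flows = defaultdict(lambda: {"packets": 0, "bytes": 0})

def classify(src_ip, dst_ip, src_port, dst_port, proto, length):
    key = (src_ip, dst_ip, src_port, dst_port, proto)
    flow = flows[key]
    flow["packets"] += 1
    flow["bytes"] += length
    # A real appliance would make a decision here: allow, block, re-mark, mirror...
    return flow

classify("10.0.0.1", "192.0.2.7", 51234, 80, "tcp", 1460)
classify("10.0.0.1", "192.0.2.7", 51234, 80, "tcp", 1460)
print(dict(flows))
```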

Netronome’s product is termed an Open Appliance Platform. Equipment vendors have used network processors (NPs) for many years. For example, Cisco, Juniper and the like would use them to process packets on an interface card or blade. This would more than likely be an in-house developed NP architecture used in combination with hard-wired logic and Field Programmable Gate Arrays (FPGAs). This combination gives complete flexibility to run in software whatever is best run on the NP and to use the FPGAs to accommodate architecture elements that may change – perhaps because standards are incomplete, for example.

Netronome’s Network Acceleration Card

Other companies that have used NPs for a long time make what are known as network appliances. A network appliance is a standalone hardware / software bundle, often based on Linux, that provides a plug-and-play application that can be connected to a live network with a minimum of work. Many network appliances simply use a server motherboard with two standard gigabit network cards installed, Linux as the OS and the application on top. These appliance vendors know that they need the acceleration they can get from an NP, but they often don’t want to deal with the complexity of hardware design and NP programming.

Either way, they have written their application-specific software to run on top of their own hardware design. Every appliance manufacturer has taken a proprietary approach, which creates a significant support challenge as each new generation of NP architecture improves throughput. Being software vendors in reality, all they really want to do is write software and applications and not have the bother of supporting expensive hardware.

This is where Netronome’s Open Appliance Platform comes in. Netronome has developed a generic hardware platform and the appropriate virtual run-time software that enable appliance vendors to dump their own challenging-to-support hardware and use Netronome’s NP-based platform instead. The important aspect of this is that it can be achieved with minimum change to their application code.

What are the possible applications (or use cases) of Netronome’s Network Acceleration card?

The use of Netronome’s product is particularly beneficial as the core of network appliances in the following application areas.

Security: All type of enterprise network security application that depends on the inspection and modification of live network traffic.

SSL Inspector: The Netronome SSL Inspector is a transparent proxy for Secure Socket Layer (SSL) network communications. It enables applications to access the clear text in SSL-encrypted connections and has been designed for security and network appliance manufacturers, enterprise IT organizations and system integrators. The SSL Inspector allows network appliances to be deployed with the highest levels of flow analysis while still maintaining multi-gigabit line-rate network performance.

Compliance and audit: To ensure that all company employees are in compliance with new regulatory regimes, companies must voluntarily discover, disclose, expeditiously correct, and prevent recurrence of future violations.

Network access and identity: To check the behaviour and personal characteristics by which an individual is defined as a valid user of an application or network.

Intrusion detection and prevention: This has always been a heartland application for network processors.

Intelligent billing: By detecting a network event or a particular traffic flow, a billing event could be initiated.

Innovative applications: To me this is one of the most interesting areas as it depends on having a good idea, but applications could include modifying QoS parameters on the fly in an MPLS network or detecting particular application flows on the fly – grey VoIP traffic, for example. If you want to know about other application ideas – give me a call!

Netronome’s Architecture components

Netronome Flow Drivers: The Netronome Flow Drivers (NFD) provide high speed connectivity between the hardware components of the flow engine (NPU and cryptography hardware) and one or more Intel IA / x86 processors running on the motherboard. The NFD allows developers to write their own code for the IXP NPU and the IA / x86 processor.

Netronome Flow Manager: The Netronome Flow Manager (NFM) provides an open application programming interface for network and security appliances that require acceleration. The NFM not only abstracts (virtualises) the hardware interface of the Netronome Flow Engine (NFE), but its interfaces also guide the adaptation of applications to high-rate flow processing.

Overview of Netronome’s architecture components

Netronome real-time Flow Kernel: At the heart of the platform’s software subsystem is a real-time microkernel specialized for Network Infrastructure Applications. The kernel coordinates and steers flows, rather than packets, and is thus called the Netronome Flow Kernel (NFK). The NFK also does everything the NFM does and it also supports virtualisation.

Open Appliance Platform: Netronome have recently announced a chassis system that can be used by ISVs to quickly provide a solution to their customers.

Round-up

If your application or service really needs a network processor, you will realise this quite quickly as the performance of your non-NP based network application will be too slow, unable to undertake the real-time bit manipulation you need or, the real killer, unable to scale to the flow rates your application will see in real-world deployment.

In the old days, programming NPs was a black art not understood by 99.9% of the world’s programmers, but Netronome is now making the technology more accessible by providing appropriate middleware – or abstraction layer – that enables network appliance software to be ported to their open platform without a significant rewrite or a detailed understanding of NP programming. Your application just runs in a virtual run-time environment and uses the flow API, and the Netronome product does the rest.
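
As a flavour of what programming against such a middleware layer might feel like, here is a hypothetical flow-oriented API of my own invention (it is not Netronome’s real NFM/NFK interface): the appliance author registers a verdict callback per flow and leaves the per-packet heavy lifting to the accelerated datapath.

```python
# Hypothetical flow-callback API - all names invented purely for illustration.
class FlowEngine:
    def __init__(self):
        self._handlers = []

    def on_new_flow(self, match, handler):
        """Ask the (notional) accelerated datapath to call us once per new flow."""
        self._handlers.append((match, handler))

    def inject(self, flow):
        # Stand-in for the hardware: find the first matching handler and apply its verdict.
        for match, handler in self._handlers:
            if all(flow.get(k) == v for k, v in match.items()):
                return handler(flow)
        return "forward"

def voip_policer(flow):
    # Application logic stays in ordinary software; only the verdict goes to the fast path.
    return "block" if flow["dst_port"] == 5060 and not flow["authorised"] else "forward"

engine = FlowEngine()
engine.on_new_flow({"proto": "udp"}, voip_policer)
print(engine.inject({"proto": "udp", "dst_port": 5060, "authorised": False}))  # block
```

Hide something like that behind a virtual run-time environment and the appliance vendor never has to touch microcode.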

Good on ’em I say.


Nexagent, an enigmatic company?

March 12, 2007

In a recent post, MPLS and the limitations of the Internet, I wrote about the challenge of expecting predictable performance for real-time services such as client / server applications, Voice over IP (VoIP) or video services over the public Internet. I also wrote about the challenges of obtaining predictable performance for these same applications on a Wide Area Network (WAN) using IP-VPNs when the WAN straddles multiple carriers – as they almost always do. This is brought about by the fact that the majority of carriers do not currently inter-connect their MPLS networks to enable seamless end-to-end multi-carrier Class-of-Service based performance.

As mentioned in the above post, there are several companies that focus on providing this capability through a mixture of technologies, monitoring and a willingness to act as a prime contractor if they are a service provider. However today, the majority of carriers are only able to provide Service Level Agreements (SLAs) for IP traffic and customer sites that are on their own network. This forces enterprises of all sizes to either manage their own multi-carrier WANs or outsource the task to a carrier or systems integrator that is willing to offer a single umbrella SLA and back this off with separate SLAs to each component provider carrier.

Operational Support Software (OSS) vendors’ challenges

An Operations Support System is the software that handles workflows, management, inventory details, capacity planning and repair functions for service providers. Typically, an OSS uses an underlying Network Management System to actually communicate with network devices. There are literally hundreds of OSS vendors providing software to carriers today, but it is interesting to note that the vast majority of these only provide software to help carriers manage their networks inside the cloud, i.e. to help them manage their own network. In practice, each carrier uses a mixture of in-house and bought-in third-party OSS to manage its network, so each carrier has, in effect, a proprietary network and service management regime that makes it virtually impossible to inter-connect their own IP data services with those of other carriers.

As you would expect in the carrier world, there are a number of industry standards organisations working on this issue, but this is such a major challenge that I doubt OSS environments could be standardised sufficiently to enable simple inter-connect of IP OSSs anywhere in the near future – if ever. Some of these bodies are:

  • The IETF, who work at the network level such as MPLS, IP-VPNs etc;
  • The Telecom Management Forum, who have been working in the OSS space for many years;
  • The MPLS Frame Relay Forum, who “focus on advancing the deployment of multi-vendor, multi-service packet-based networks, associated applications, and interworking solutions”;
  • And one of the newest, IPSphere whose mission “is to deliver an enhanced commercial framework – or business layer – for IP services that preserves the fundamental ubiquity of the Internet’s technical framework and is also capable of supporting a full range of business relationships so that participants have true flexibility in how they add value to upstream service outcomes.”
  • The IT Infrastructure Library (ITIL), which is a “framework of best practice approaches intended to facilitate the delivery of high quality information technology (IT) services. ITIL outlines an extensive set of management procedures that are intended to support businesses in achieving both quality and value, in a financial sense, in IT operations. These procedures are supplier independent and have been developed to provide guidance across the breadth of IT infrastructure, development, and operations.”

As can be imagined, working in this challenging inter-carrier service and OSS space presents both a major opportunity and a major challenge. One company that has chosen to do just this is Nexagent. Nexagent was formed in 2000 by Charlie Muirhead – also founder of Orchestream, Chris Gare – ex Cable and Wireless, and Dave Page – ex Cisco.

In my travels, I often get asked “what is it that Nexagent actually does?”, so I would like to have a go at answering this question, having set the scene in a previous post – MPLS and the limitations of the Internet.

The traditional way of delivering WAN-based enterprise services or solutions based on managing multiple service providers (carriers) has a considerable number of challenges associated with it. This could be a company WAN formed by integrating IP-VPNs or simple E1 / T1 TDM bandwidth services bought in from multiple service providers around the world.

The most common approach is a proprietary solution, which is usually of an ad hoc nature built up piecemeal over a number of years from earlier company acquisitions. The strategic idea would have been to integrate and harmonise these disparate networks but there was usually never enough money to start, let alone complete, the project.

Many of these challenges can be seen listed in the panel on the right taken from Nexagent’s brochure. Anyone that has been involved in managing an enterprise’s Information and Communications Technology (ICT) infrastructure will be very well aware of these issues!

Overview of Nexagent’s software

Deploying an end to end service or application running on a WAN, or solution as it is often termed, requires a combination of:

  1. workflow management: a workflow describes the order of a set of tasks performed by various individuals or systems to complete a given procedure within an organisation; and
  2. supply chain management: a supply chain represents the flow of materials, information and finances as they move in a process – or workflow – from one organisation or activity to the next.

In the situation where every carrier or service provider has adopted entirely different OSS combinations, workflow practices and supply chain processes, it is no wonder that every multi-carrier WAN represents a complex, proprietary and bespoke solution!

Nexagent has developed a unique abstraction or Meta Layer technology and methodology that manages and monitors the performance of an end-to-end WAN IP-VPN service or solution without the need for a carrier to swap-out their existing OSS infrastructure. Nexagent’s system runs in parallel with, and integrates with, multiple carriers’ existing OSS infrastructure and enables a prime-contractor to manage multi-carrier solutions in a coherent rather than an ad hoc manner.

Let’s go through what Nexagent offers following a standard process flow for deploying multi-supplier services or solutions using Nexagent’s adopted ICT Infrastructure Management (ICTIM) model.

Service or solution modelling: In the ICTIM reference model, this is the Design stage. This is a crucial step and is focused on capturing all the enterprise service requirements and design as early in the process as possible and maintaining that knowledge for the complete service lifecycle. Nexagent has developed a CAD-like modelling capability with front-end capture based on a simple-to-use Excel spreadsheet, as every carrier uses a different method of capturing their WAN designs. The model is created up front and acts as a reference benchmark if the design is changed at any future time. The tool provides price query capabilities for use with bought-in carrier services together with automated design rule verification.

Implementation engine: In the ICTIM reference model, this is the Deploy stage. The software automatically populates the design created at the design stage with the required service provider interconnect configurations, generates service work orders for each service provider involved with the design and provisions the network interconnect to create a physical working network. The software is based on a unified information flow to network service providers. Importantly, it schedules the individual components of the end-to-end solution to meet the enterprise roll-out and change management needs.

Experience manager: In the ICTIM reference model, this is the Operate stage. The Nexagent software compares real-life end-to-end solution performance to the expected performance level as specified in the service or solution design. Any deviation from agreed component supplier SLAs will generate alerts into existing OSS environments.

The monitoring is characterised by active in-band measurement for each site and each CoS link by application group and is closed-loop in nature by comparing actual performance to expected performance stored in the service reference model. It can detect and isolate problems and includes optimisation and conformance procedures.
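
Stripped to its essentials, the closed loop is a comparison of measured figures against the reference model, with alerts raised on the deltas. A simplified sketch, with site names, metrics and tolerance invented for illustration:

```python
# Expected per-site, per-CoS performance as it might appear in a service reference model
# (all figures invented for illustration).
expected = {
    ("London", "voice"): {"latency_ms": 40, "loss_pct": 0.1},
    ("New York", "voice"): {"latency_ms": 90, "loss_pct": 0.1},
}

def check(site, cos, measured, tolerance=1.2):
    """Return alerts where measured performance exceeds the modelled figure by >20%."""
    alerts = []
    for metric, target in expected[(site, cos)].items():
        if measured[metric] > target * tolerance:
            alerts.append(f"{site}/{cos}: {metric} {measured[metric]} exceeds target {target}")
    return alerts

# One in-band measurement interval (made-up probe results):
print(check("London", "voice", {"latency_ms": 55, "loss_pct": 0.05}))
```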

Physical network interconnect: Lastly, Nexagent developed the service interconnect template with network equipment vendors such as Cisco Systems; this physically enables the interconnection of IP-VPNs from carriers that have chosen different, incompatible ways of defining their CoS-based services.
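
The interconnect problem is easy to picture once you remember that each carrier names and marks its classes differently. Here is a toy mapping of the kind a service interconnect template has to pin down (the class names and DSCP values are invented for illustration):

```python
# Two carriers with different names and DSCP markings for broadly equivalent classes.
carrier_a = {"Premium": 46, "Business": 26, "Standard": 0}          # class name -> DSCP
carrier_b = {"RealTime": 46, "Interactive": 34, "BestEffort": 0}

# The interconnect template fixes how classes translate at the carrier-to-carrier boundary.
a_to_b = {"Premium": "RealTime", "Business": "Interactive", "Standard": "BestEffort"}

def remark(packet_dscp):
    """Re-mark a packet leaving carrier A so that carrier B treats it equivalently."""
    for a_class, dscp in carrier_a.items():
        if dscp == packet_dscp:
            return carrier_b[a_to_b[a_class]]
    return 0  # unknown markings fall back to best effort

print(remark(26))  # carrier A's Business (DSCP 26) becomes Interactive (DSCP 34) on carrier B
```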

Who could use Nexagent’s technology?

Nexagent provides three examples of the application of their software – ‘use cases’ – on their web site:

Hybrid Virtual Network Operator is a service provider which has some existing network assets in some geography but lacks network reach and operational efficiency to win business from enterprises requiring out of territory service and customised solutions. Such a carrier could use Nexagent to extend their reach, standardise their off-net interface to save costs and ensure that services work as designed.

Data Centre Virtualisation: Data centre services are critical components of effective enterprise application solutions. A recent trend in data centre services is to share computing resources across multiple customers. Similarly, using multiple physical data centres enables more resilient and better-performing services with improved load balancing. Using Nexagent simplifies the task of swapping the carriers delivering services to customers and makes it easier to monitor overall service performance.

Third Party Service Delivery: One of the main obstacles for carriers to growing market share and expanding into adjacent markets is the time and money to develop and implement new services. While many service providers want a broader portfolio of services, there is growing evidence that enterprises want to use multiple companies for services as a way of maintaining supply chain negotiation leverage – what Gartner calls enterprise multi-sourcing.

Round up

This all may sound rather complicated, but the industry pain that Nexagent helps solve is quite straightforward to appreciate when you consider the complexity of multi-carrier solutions, and Nexagent have taken a pretty unique approach to solving that pain.

Although there is not too much information in the public domain about Nexagent’s commercial activities, there is a most informative presentation – MPLS Interconnection and Multi-Sourcing for the Secure Enterprise by Leo McCloskey, who was Senior Director, Network and Partner Strategy at EDS when he presented it (Leo is now Nexagent’s VP of Marketing). An element of one of the slides is shown below. You can also see a presentation by Charlie Muirhead from Nexagent – Case Study: Solving the Interconnect Challenge.

These, and other presentations from the conference can be found from the 2006 MPLScon conference proceedings at Webtorials archive site. You will need to register with Webtorials before you can access these papers.

If you are still unsure about what Nexagent actually does or how they could help your business – go visit them in Reading!

Note: I should declare a personal interest in Nexagent as a co-founder, though I am no longer involved with day to day activities.

Addendum:  March 2008, EDS hoovers up Reading networking firm


Would u like to collaborate with YuuGuu?

March 7, 2007

When I was at an event organised by FirstCapital before Christmas, I met two guys who told me about a new service they were launching – YuuGuu. I’m sure I probably said something along the lines of “What was that again?” By way of explanation, according to their web site, Yuuguu derives from the Japanese word for fusion.

YuuGuu was founded by Anish Kapoor and Philip Hemsted “after becoming frustrated by working together remotely and not being able to see and share each other’s computer screens in real time.” “Recognising the changing world of work, Yuuguu came about as a solution to help people work together remotely, through any firewall, across different platforms, with as many colleagues as needed, just as if they were sat right next to each other.”

Put simply, YuuGuu is a web-based collaboration service that helps teams work together. I first used such services in the mid 1990s when Intel’s ProShare came to market and have been using them on and off ever since. Collaboration services are now widely used, as we all know (I have listed here some of the companies offering web-based collaboration services). There is a wide variety of tools that come under the heading – some are used more than others. For example, instant messaging (IM) has become part of all of our lives, although there is still resistance from many companies to allowing their employees to use it due to security considerations. Skype is all about voice collaboration and makes an excellent audio bridge for conferences – if there is a sufficiently good Internet link to allow it to work correctly. Video conferencing is still expensive, unreliable and a black art if my experiences over the last decade are anything to go by. Personally, I would rather have a good audio conference than a four-foot video screen where you cannot see the faces of the participants clearly because of the quality provided by a 2Mbit/s link.

The other interesting component of many web collaboration services is application sharing. Application sharing is the ability to remotely access and interact with an application on a remote machine. This was seen in Intel’s ProShare and formed a component of Microsoft’s NetMeeting, for example. My most recent use of such software was using the well-known webex service last year to show presentations and demonstrate some network software to potential customers around the world. This generally went well (though webex is certainly not the easiest service to use), but it was being used in what I would call a non-interactive, one-way manner, i.e. Internet delays were not that important, unlike when you are editing text remotely. My experience of doing just that came this year when I attempted to use my blog host’s on-line HTML editor. This was a complete disaster, as I lost a lot of text when the web page failed to update correctly several times – although, to be fair, that editor was designed for remote use, as you edit on your own machine before uploading. If you attempt to edit documents on a remote machine using applications that were not designed for remote access, such as Microsoft Word and PowerPoint, high latency on key presses can be disconcerting and disrupt your flow of thought. This type of application sharing demands good Internet performance with zero packet loss.

My first interaction with the software was with its availability flag, described by YuuGuu as “Presence – instantly tell when your workmates or friends are around”. In practice, this is a simple indicator that you can set in the client to say whether you are available or not. I will be writing quite a long post on ‘presence’ in the future so I won’t dwell on the subject too much here. YuuGuu will be adding more capabilities to this simple availability feature in the future. They do use XMPP, so at least they will be able to interconnect or federate a buddy list with other IM services.

Philip Hemsted’s cluttered Desktop

YuuGuu is very straightforward to install and start, and my first call was with Philip Hemsted; with a click of a button, and with his permission, I was able to see his desktop. At the moment you can only share a complete desktop and cannot specify just a single application, so there could be security concerns if you are giving access to somebody who is not in the same company as yourself.

Philip went through a couple of PowerPoint slides and then I asked to access his copy of Word to do some remote editing. This went fine, but there was around a two-second delay when typing, so I couldn’t recommend that you start writing the second War and Peace this way – but this is typical of most remote access programmes. The better you are at touch typing, the easier you will find it to use!

One really nice feature is its group capability. You can call a third party into the collaboration session by selecting them from the buddy list and they will be able to see the shared desktop as well as become part of the YuuGuu IM session. I’m sure that in most collaboration sessions Skype would be used in parallel, so a VoIP capability would be a beneficial future addition.

There are lots of plans to upgrade the current service with additional capabilities that would be required for corporate use such as history storage for their audit needs and SLAs. These would most likely form the basis of a future paid-for premium service.

What makes the product interesting to me is the simplicity of its use, its ability to work on an Apple machine in addition to Windows, and its simple group capability. Go try it – you have nothing to lose! Interestingly, like Crisp Thinking, they are based in the north of England – in Manchester.


Joost’s beta – first impressions

February 26, 2007

I have received a beta account for Joost at long last. Joost is the new initiative from Niklas Zennstrom and Janus Friis of Skype fame.

I can’t describe the service any better than the way they do, so here it is:

“Joost™ is a new way of watching TV on the internet, which uses new and established technologies to provide the best of both the internet and TV worlds. We’re in the process of making it as TV-like as we can, with programmes, channels and adverts. You can also see some things that we think will enhance the TV experience: searching for programmes and channels, for example, as well as social features like chat. There are many more new features to come!”

Here are a few screens from the service itself as running on my XP machine.

Click on the picture for full size images.

This is the Preferences menu and it seems to be mainly concerned with time-related aspects of the user interface. What is of most interest to me – the connection options – I could not find at all! It may have been just me!

Clicking on the My Channels icon on the left-hand side of the screen brings up my favourite channels menu which you can scroll up and down with your mouse. I noticed some issues with stuttering of the sound when I moved my mouse around. This is a new machine so I’m not sure what causes this. Feels like a typical Windows problem with streaming services.

I think this could be caused by the focus being on another application when you are running Joost as a background window.

When you want to add a channel, you just click the option and up pops the selection menu, which overlays the video. As this is created by the downloaded Joost client, all these option selections present themselves promptly, with zero hesitation. Not being network based is a surprise, but it is the right way to go, I think.

It’s possible to bring up a featured channel catalog to see what’s new. Interestingly, much of the content I’ve brought up so far is from the USA, and the spelling is certainly US – as can be seen from the word ‘catalog’. It’s still a beta, I guess, but I would need more relevant content to attract me to use the service extensively.

This is the Indy channel, shown in window mode. A click on a button takes it to full screen. Again, I haven’t examined all the content, but the quality of the video on a full-size screen is inadequate – it certainly seems to be less than 2Mbit/s and is not particularly good to look at close up, as you can see in the picture.

In fact, it seems fairly typical of most streamed video that you come across on the net.

My first impressions so far are:

  • the look and feel of the channel selection menus are really great and a pleasure to use.
  • I was disappointed as I expected to be able to get a better quality video stream from Joost than other streaming services but this doesn’t seem to be the case. I don’t know whether all the channels are encoded at the same data rate and use the same compression algorithm, but if so, it will limit the time I spend with the service.
  • Joost did lose its connection with the server several times while I was fiddling around – I don’t know where the beta server is located. As I mentioned in my recent post – MPLS and the limitations of the Internet, if I sat down to watch a film and this happened I would not find it acceptable.
  • In the Preferences menu, I could not find anything to do with setting up the Internet connection at all. As a consumer service this may be OK, but I find it disconcerting not to be able to manage the basics of the connection speed myself. Maybe I would want to restrict the bandwidth Joost uses if I had a limited bandwidth connection. The corollary is that Joost always takes as much bandwidth from your connection as it needs. If so, that’s a worry – and a worry for businesses as well.

I’ll try and do some more tests over the next few days to get a better feel. By the way, as I type this post up, if I type fast enough, the Joost application stalls until I stop.

Addendum #1:

Using Windows Task Manager, Joost seems to be delivering its service to me at 1Mbit/s, which is about what I would have guessed from the quality. As I have an 8Mbit/s connection, it would have been nice to take delivery at a higher rate so that it would be possible to look at National Geographic at full screen.

Oh well, back to the TV!

Addendum #2: A recent Silicon.com post articulated the challenge that could face the Internet with the advent of IPTV quite well:

Ferguson also suggested new, bandwidth-intensive services such as IPTV have the potential to cause even slower broadband speeds. “A provider only needs around 1.5 per cent of users to make use of, say, the BT Vision product at the same time to use up all the capacity,” he said.

He added: “The UK does not yet have the infrastructure to support millions watching EastEnders via broadband at the same time,” because of the high cost of retaining spare network capacity for times when online traffic is high.

So true…