Ethernet goes carrier grade with PBT / PBB-TE (PBBTE)?

March 13, 2007

Alongside IP, Ethernet was always one of the ‘chosen’ protocols as it dominated enterprise Local Area Networks (LANs). I didn’t actually write about Ethernet back in the early 1990s because, at the time, it did not have a role in the Wide Area Network (WAN) space as a public data service protocol. As I mentioned in my post – The demise of ATM – it was assumed by many in the mid-1990s that Ethernet had reached the end of its life (NOT by users of Ethernet, I might add!) and that ATM was the strategic direction for LAN protocols. This was so thoroughly assumed by the equipment vendor industry that they spent vast amounts of money building ATM divisions to supply this perceived burgeoning market. But this was just not to be (Picture credit: Nortel).

The principal reason for this was that the ATM activists failed to understand that attempting to displace a well-understood and trusted technology with a new and unknown variety was a challenge too far. Moreover, ATM was so different that it would have required a complete replacement not only of 100% of LAN-related network equipment but of much of the personal computer hardware and software as well.

What was ATM supposed to bring to the LAN community? A promise of complete compatibility between LANs and WANs, and an increase in the speed of IEEE 802.3 10Mbit/s LAN backbones that was so desperately needed at the time.

ATM and Ethernet battled it out in the LAN public arena for only a short time as it was such a one-sided battle. With the arrival of the 100Mbit/s 100BASE-T Ethernet standard, the Keep It Simple Stupid (KISS) approach gained dominance in the minds of users yet again. Why would you throw out years of investment for a promise of ATM jam tomorrow? Upgrading Ethernet LAN backbones to 100Mbit/s was simple, as it only necessitated swapping out LAN interface cards and upgrading to Category 5 twisted-pair cabling. Most importantly, there were no major requirements to update network software or applications. It was so much a no-brainer that it is now hard to see why ATM was ever thought to be the path forward.

This enigma lives on today and it is caused by the difference in views between the private and the public network industries. ATM came out of the public WAN world, whereas IP and Ethernet came out of the private LAN world. Chalk and cheese in reality.

Once the dust had settled on the battles between ATM and Ethernet in the LAN market, the battle moved to the telecommunications WAN space and into the telco’s home territory.

Ethernet moves into the WAN space

I first heard about Ethernet being proposed as a wide area protocol in the mid-1990s when I visited Nortel’s R&D labs in Canada. It was one of those moments I still remember quite well. It was in a small back room. I don’t remember any equipment being on display; if I remember correctly, all that was being shown were some blown-up diagrams of a proposed public Ethernet architecture. Now, I would not claim that Nortel invented the idea of Ethernet’s use in the public sphere, as I’m sure that if I had visited 3Com’s R&D centre (the home of Ethernet) or other vendors’ labs, I would have seen similar ideas being articulated.

The thought of Ethernet being used in WANs had not occurred to me before that moment and it really made me sit back and think (maybe I should have acted!). However, if Ethernet is ever going to be a credible wide area layer-2 network transport protocol, it needs to be able to transparently transport any of the major protocols used in a converged network. This capability is provided in IP networks through the use of pseudowires and MPLS, as sketched below. This is where PBB-TE comes to the fore.
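
As a minimal illustration of the pseudowire idea, here is a Python sketch that packs a customer layer-2 frame behind a pseudowire label and an outer tunnel label, following the 32-bit MPLS label stack entry format of RFC 3032 (label 20 bits, TC 3, S 1, TTL 8). The label values are invented for illustration.

    # Conceptual sketch of a pseudowire: the customer frame (any layer-2
    # protocol) is wrapped in a pseudowire label plus an outer MPLS tunnel
    # label, so the core only ever switches on the outer label.

    def mpls_label_entry(label: int, tc: int, s: int, ttl: int) -> bytes:
        """Build one 32-bit MPLS label stack entry (RFC 3032 layout)."""
        word = (label << 12) | (tc << 9) | (s << 8) | ttl
        return word.to_bytes(4, "big")

    customer_frame = bytes.fromhex("0102030405060708")  # any layer-2 payload
    pw_label = mpls_label_entry(17, 0, 1, 255)          # pseudowire label, bottom of stack
    tunnel_label = mpls_label_entry(1001, 0, 0, 255)    # outer transport label
    packet = tunnel_label + pw_label + customer_frame
    print(packet.hex())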

Over the next few years a whole new sector of the telecommunications industry started; this was termed Metropolitan Area Networks (MANs) or, more simply, metro networks. In the latter half of the 1990s – the heyday of carrier start-ups – many new telcos were set up using a new architecture paradigm based on Ethernet over optical fibre. The idea was that metro networks would sit between enterprise networks and traditional wide-area telco networks. Metro networks would inter-connect enterprise offices at the city level, aggregate data traffic that needed to be transported long distances, and deliver it to telcos who could ship that traffic as required to other cities or countries using frame relay services.

Many of these metro players ceased trading during the recent challenging years, but many were refinanced and resurrected to live again.

(Picture credit: exponential-e) It is very interesting to note that throughout that phase of the industry, and even up to today, Ethernet has not gained full acceptance as a viable public service protocol from the wider telecommunications industry. Until a few years ago, outside of specialist metro players, very few traditional carriers offered Ethernet services at all. This was quite amazing considering that it flew in the face of strong requests from enterprises, who all wanted them. What turned this around, to a degree, was the deployment of MPLS backbones by carriers, who could then offer Ethernet as an access protocol to enterprises but handle Ethernet services on their network through an MPLS tunnel.

Bringing ourselves up to the present time, traditional carriers are starting to offer layer-2 Ethernet-based Virtual Private Networks (VPNs) in parallel to their offerings of layer-3 MPLS-based IP-VPNs. One of the interesting companies that focuses on Ethernet in the UK is exponential-e, who I will be writing about soon.

The wide area protocol battle restarts

What is of real interest today is the still-open issue of Ethernet over fibre being a genuine alternative to a core architecture based on MPLS. It could be said that this battle is only really starting today, and it has been very slow in coming. Ethernet LAN <-> Ethernet WAN <-> Ethernet LAN would seem to be a natural and simple option.

Ethernet is dominant in carriers’ customers’ LANs, Ethernet is dominant in metro networks, but Ethernet has had little impact in WAN networks. Partially, this is to do with technology politics and a distinct reluctance of traditional carriers to offer Ethernet services (in spite of insistent siren calls from their customers). It could be said that the IP and MPLS bandwagon has taken all the money, time and strategy efforts of the telcos, leaving Ethernet in the wings.

It should be said that this also has to do with the technical challenges associated with delivering Ethernet services over longer distances than those seen in metro networks. This is where a new initiative called Provider Backbone Transport (PBT) comes into play, which could provide a real alternative to the almost-by-default-use-of-MPLS core strategy of most traditional carriers. Interestingly, reminding me of my visit to Canada, Nortel, along with Siemens, is one of the key players in this market. Here is an interesting presentation – Highly Scalable Ethernets – by Paul Bottorff, Chief Architect, Carrier Ethernet, Nortel.

PBT is a group of enhancements to Ethernet that are defined in the IEEE’s Provider Backbone Bridge Traffic Engineering (PBB-TE) standard – phew, that’s a mouthful.

PBB-TE is all about separating the Ethernet service layer from the network layer, thus enabling the development of carrier-grade public Ethernet services such as those outlined by Nortel:

  • Traffic engineering and hard QoS: Provider Backbone Transport enables service providers to traffic-engineer their Ethernet networks. PBT tunnels reserve appropriate bandwidth and support the provisioned QoS metrics that guarantee SLAs will be met without having to overprovision network capacity.
  • Flexible range of service options: PBT supports multiplexing of any service inside PBT tunnels – including both Ethernet and MPLS services. This flexibility allows service providers to deliver native Ethernet initially and MPLS-based services (VPWS, VPLS) if and when they require them.
  • Protection: PBT not only allows the service provider to provision a point-to-point Ethernet tunnel, but also to provision an additional backup tunnel to provide resiliency. In combination with IEEE 802.1ag, these working and protection paths enable PBT to provide sub-50 ms recovery.
  • Scalability: By turning off MAC learning features we remove the undesirable broadcast functionality that creates MAC flooding and limits the size of the network. Additionally, PBT offers a full 60-bit addressing scheme that enables a virtually limitless number of tunnels to be set up in the service provider network (see the sketch after this list).
  • Service management: Because each packet is self-identifying, the network knows both the source and destination address in addition to the route – enabling more effective alarm correlation, service-fault correlation and service-performance correlation.
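
To make the 60-bit figure above concrete: a PBB-TE tunnel is identified by the 48-bit backbone destination MAC address (B-DA) together with the 12-bit backbone VLAN ID (B-VID). A minimal Python sketch of packing the two together (the address and VID are invented examples):

    # A PBB-TE tunnel identifier combines the 48-bit backbone destination
    # MAC (B-DA) with the 12-bit backbone VLAN ID (B-VID): 60 bits in total.

    def tunnel_id(b_da: str, b_vid: int) -> int:
        """Pack a B-DA such as '00:1b:25:aa:bb:cc' and a B-VID into 60 bits."""
        assert 0 <= b_vid < 4096, "B-VID is a 12-bit field"
        mac = int(b_da.replace(":", ""), 16)   # 48-bit MAC as an integer
        return (mac << 12) | b_vid             # 48 + 12 = 60 bits

    tid = tunnel_id("00:1b:25:aa:bb:cc", 101)
    print(f"tunnel id = {tid:#x} (fits in {tid.bit_length()} bits)")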

Ethernet has a long 30-year history since it was invented by Robert Metcalfe in the early 1970s. It went on to dominate enterprise networking and intra-city local networking; only wide area networks are holding out. MPLS has taken centre-stage as the ATM replacement technology, providing the level of Quality of Service (QoS) needed by today’s multimedia services. But MPLS is expensive and challenging to operate on a large scale, and maybe Ethernet can catch up with the long-overdue enhancements brought about by PBB-TE.

In Where now frame relay? I talked about how frame relay took off once X.25 was cleared of the cumbersome overheads that made it expensive to use as a wide area protocol. PBB-TE / PBT could achieve a similar result for Ethernet. Maybe this initiative will clear the current log jam and enable Ethernet to take up its rightful position in the public service space alongside MPLS. The battle could be far from over!

The big benefit at the end of the day is that PBB-TE is a layer-2 technology, so a carrier deploying it would not need to buy additional layer-3 infrastructure and OSS in the form of routers and MPLS, thus, hopefully, representing a significant cost saving for roll-out.

Addendum: BT have recently committed to deploy Nortel’s PBT technology (presentation). As can be read in the Light Reading analysis:

[BT] is keen to play down any PBT versus MPLS positioning. He says the deployment at BT shows how PBT and MPLS can co-exist. “It’s not a case of PBT versus MPLS. PBT will be deployed in the metro core and interface back into BT’s MPLS backbone network.”

It’s quite easy to understand why BT would want to say this!

Addendum #1: A competitor to PBT / PBB-TE is T-MPLS – see my post
Addendum #2: Marketing Presentations of the Metro Ethernet Forum

Addendum: One of the principal industry groups promoting and supporting carrier-grade Ethernet is the Metro Ethernet Forum (MEF), and in 2006 they introduced their official certification programme. The certification is currently only available to MEF members – both equipment manufacturers and carriers – to certify that their products comply with the MEF’s carrier Ethernet technical specifications. There are two levels of certification:

MEF 9 is a service-oriented test specification that tests conformance of Ethernet Services at the UNI inter-connect where the Subscriber and Service Provider networks meet. This represents a good safeguard for customers that the Ethernet service they are going to buy will work! Presentation or High bandwidth stream overview

MEF 14 is a new level of certification that looks at hard QoS, a very important aspect of service delivery not covered in MEF 9. MEF 14 certifies hard QoS backed by Service Level Specifications for Carrier Ethernet services, giving carriers the means to offer hard QoS guarantees on Carrier Ethernet business services to their corporate customers and on triple-play data/voice/video services. Presentation.

Addendum: Enabling PBB-TE – MPLS seamless services


Nexagent, an enigmatic company?

March 12, 2007

In a recent post, MPLS and the limitations of the Internet, I wrote about the challenge of expecting predictable performance for real-time services such as client / server applications, Voice over IP (VoIP) or video services over the public Internet. I also wrote about the challenges of obtaining predictable performance for these same applications on a Wide Area Network (WAN) using IP-VPNs when the WAN straddles multiple carriers – as they almost always do. This is brought about by the fact that the majority of carriers do not currently inter-connect their MPLS networks to enable seamless end-to-end multi-carrier Class-of-Service based performance.

As mentioned in the above post, there are several companies that focus on providing this capability through a mixture of technologies, monitoring and a willingness to act as a prime contractor if they are a service provider. However today, the majority of carriers are only able to provide Service Level Agreements (SLAs) for IP traffic and customer sites that are on their own network. This forces enterprises of all sizes to either manage their own multi-carrier WANs or outsource the task to a carrier or systems integrator that is willing to offer a single umbrella SLA and back this off with separate SLAs to each component provider carrier.

Operations Support System (OSS) vendor challenges

An Operations Support System is the software that handles workflows, management, inventory details, capacity planning and repair functions for service providers. Typically, an OSS uses an underlying Network Management System to actually communicate with network devices. There are literally hundreds of OSS vendors providing software to carriers today, but it is interesting to note that the vast majority of these only provide software to help carriers manage their network inside the cloud, i.e. to help them manage their own network. In practice, each carrier uses a mixture of in-house and bought-in third-party OSS to manage their network, so each carrier has, in effect, a proprietary network and service management regime that makes it virtually impossible to inter-connect their own IP data services with those of other carriers.

As you would expect in the carrier world, there are a number of industry standards organisations working on this issue, but it is such a major challenge that I doubt OSS environments could be standardised sufficiently to enable simple inter-connect of IP OSSs anywhere in the near future – if ever. Some of these bodies are:

  • The IETF, who work at the network level – MPLS, IP-VPNs, etc.;
  • The Telecom Management Forum, who have been working in the OSS space for many years;
  • The MPLS Frame Relay Forum, who “focus on advancing the deployment of multi-vendor, multi-service packet-based networks, associated applications, and interworking solutions”;
  • And one of the newest, IPSphere whose mission “is to deliver an enhanced commercial framework – or business layer – for IP services that preserves the fundamental ubiquity of the Internet’s technical framework and is also capable of supporting a full range of business relationships so that participants have true flexibility in how they add value to upstream service outcomes.”
  • The IT Infrastructure Library (ITIL), which is a “framework of best practice approaches intended to facilitate the delivery of high quality information technology (IT) services. ITIL outlines an extensive set of management procedures that are intended to support businesses in achieving both quality and value, in a financial sense, in IT operations. These procedures are supplier independent and have been developed to provide guidance across the breadth of IT infrastructure, development, and operations.”

As can be imagined, working in this challenging inter-carrier, service or OSS space provides both a major opportunity and a major challenge. One company that has chosen to do just this is Nexagent. Nexagent was formed in 2000 by Charlie Muirhead – also founder of Orchestream, Chris Gare – ex Cable and Wireless, and Dave Page – ex Cisco.

In my travels, I often get asked “what is it that Nexagent actually does?” so I would like to have a go at answering this question, having set the scene in a previous post – MPLS and the limitations of the Internet.

The traditional way of delivering WAN-based enterprise services or solutions based on managing multiple service providers (carriers) has a considerable number of challenges associated with it. This could be a company WAN formed by integrating IP-VPNs or simple E1 / T1 TDM bandwidth services bought in from multiple service providers around the world.

The most common approach is a proprietary solution, which is usually of an ad hoc nature built up piecemeal over a number of years from earlier company acquisitions. The strategic idea would have been to integrate and harmonise these disparate networks but there was usually never enough money to start, let alone complete, the project.

Many of these challenges can be seen listed in the panel on the right taken from Nexagent’s brochure. Anyone that has been involved in managing an enterprise’s Information and Communications Technology (ICT) infrastructure will be very well aware of these issues!

Overview of Nexagent’s software

Deploying an end-to-end service or application running on a WAN – or solution, as it is often termed – requires a combination of:

  1. workflow management: A workflow describes the order of a set of tasks performed by various individuals or systems to complete a given procedure within an organisation; and
  2. supply chain management: A supply chain represents the flow of materials, information, and finances as they move in a process – or workflow – from one organisation or activity to the next.

In the situation where every carrier or service provider has adopted entirely different OSS combinations, workflow practices and supply chain processes, it is no wonder that every multi-carrier WAN represents a complex, proprietary and bespoke solution!

Nexagent has developed a unique abstraction or Meta Layer technology and methodology that manages and monitors the performance of an end-to-end WAN IP-VPN service or solution without the need for a carrier to swap-out their existing OSS infrastructure. Nexagent’s system runs in parallel with, and integrates with, multiple carriers’ existing OSS infrastructure and enables a prime-contractor to manage multi-carrier solutions in a coherent rather than an ad hoc manner.

Let’s go through what Nexagent offers following a standard process flow for deploying multi-supplier services or solutions using Nexagent’s adopted ICT Infrastructure Management (ICTIM) model.

Service or solution modelling: In the ICTIM reference model, this is the Design stage. This is a crucial step, focused on capturing all the enterprise service requirements and design as early in the process as possible and maintaining that knowledge for the complete service lifecycle. Nexagent has developed a CAD-like modelling capability with front-end capture based on a simple-to-use Excel spreadsheet, as every carrier uses a different method of capturing their WAN designs. The model is created up front and acts as a reference benchmark if the design is changed at any future time. The tool provides price query capabilities for use with bought-in carrier services, together with automated design rule verification.

Implementation engine: In the ICTIM reference model, this is the Deploy stage. The software automatically populates the design created at the design stage with the required service provider interconnect configurations, generates service work orders for each service provider involved with the design and provisions the network interconnect to create a physical working network. The software is based on a unified information flow to network service providers. Importantly, it schedules the individual components of the end-to-end solution to meet the enterprise roll-out and change management needs.

Experience manager: In the ICTIM reference model, this is the Operate stage. The Nexagent software compares real-life end-to-end solution performance to the expected performance level as specified in the service or solution design. Any deviation from agreed component supplier SLAs will generate alerts into existing OSS environments.

The monitoring is characterised by active in-band measurement for each site and each CoS link by application group, and is closed-loop in nature, comparing actual performance to the expected performance stored in the service reference model. It can detect and isolate problems and includes optimisation and conformance procedures. A minimal sketch of the idea follows.
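
Purely to illustrate the closed-loop idea – Nexagent’s internals are not public, so every name and threshold below is invented – a small Python sketch:

    # Hypothetical closed-loop SLA check: compare an in-band measurement
    # per site/CoS against the targets captured in the reference model at
    # design time, and raise alerts on any deviation.
    from dataclasses import dataclass

    @dataclass
    class CosTarget:              # per-CoS targets from the reference model
        max_latency_ms: float
        max_loss_pct: float

    @dataclass
    class Measurement:            # one active in-band probe result
        site: str
        cos: str
        latency_ms: float
        loss_pct: float

    MODEL = {"voice": CosTarget(150.0, 0.5), "data": CosTarget(400.0, 1.0)}

    def check(m: Measurement) -> list:
        t = MODEL[m.cos]
        alerts = []
        if m.latency_ms > t.max_latency_ms:
            alerts.append(f"{m.site}/{m.cos}: latency {m.latency_ms}ms exceeds {t.max_latency_ms}ms")
        if m.loss_pct > t.max_loss_pct:
            alerts.append(f"{m.site}/{m.cos}: loss {m.loss_pct}% exceeds {t.max_loss_pct}%")
        return alerts             # would be fed into the carriers' existing OSS

    print(check(Measurement("London-HQ", "voice", latency_ms=180.0, loss_pct=0.2)))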

Physical network interconnect: Lastly, Nexagent developed the service interconnect template with network equipment vendors such as Cisco Systems; it physically enables the interconnection of IP-VPNs whose providers have chosen different, incompatible ways of defining their CoS-based services. The sketch below illustrates the kind of class mapping involved.
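
As a purely illustrative sketch of the mapping problem such an interconnect template has to solve (the class names and DSCP values are invented, not taken from any real carrier):

    # Two providers define three classes of service but mark and name them
    # differently. A translation table at the interconnect re-marks traffic
    # so that, say, carrier A's 'premium' keeps equivalent treatment in
    # carrier B's network.

    CARRIER_A = {"premium": 46, "business": 26, "standard": 0}   # name -> DSCP
    CARRIER_B = {"gold": 40, "silver": 18, "bronze": 0}

    # Agreed equivalence, captured once per interconnect
    A_TO_B = {"premium": "gold", "business": "silver", "standard": "bronze"}

    def remark(dscp_in: int) -> int:
        """Translate a DSCP value used by carrier A to carrier B's marking."""
        name = next(n for n, d in CARRIER_A.items() if d == dscp_in)
        return CARRIER_B[A_TO_B[name]]

    print(remark(46))  # 46 (A's premium) -> 40 (B's gold)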

Who could use Nexagent’s technology?

Nexagent provides three examples of the application of their software – ‘use cases’ – on their web site:

Hybrid Virtual Network Operator: A service provider which has some existing network assets in some geography but lacks the network reach and operational efficiency to win business from enterprises requiring out-of-territory service and customised solutions. Such a carrier could use Nexagent to extend their reach, standardise their off-net interface to save costs, and ensure that services work as designed.

Data Centre Virtualisation: Data centre services are critical components of effective enterprise application solutions. A recent trend in data centre services is to share computing resources across multiple customers. Similarly, using multiple physical data centres enables more resilient and better-performing services with improved load balancing. Using Nexagent simplifies the task of swapping the carriers delivering services to customers and makes it easier to monitor overall service performance.

Third Party Service Delivery: One of the main obstacles to carriers growing market share and expanding into adjacent markets is the time and money needed to develop and implement new services. While many service providers want a broader portfolio of services, there is growing evidence that enterprises want to use multiple companies for services as a way of maintaining supply chain negotiation leverage – what Gartner calls enterprise multi-sourcing.

Round up

This all may sound rather complicated, but the industry pain that Nexagent helps solve is quite straightforward to appreciate when you consider the complexity of multi-carrier solutions, and Nexagent have taken a pretty unique approach to solving that pain.

Although there is not too much information in the public domain about Nexagent’s commercial activities, there is a most informative presentation – MPLS Interconnection and Multi-Sourcing for the Secure Enterprise by Leo McCloskey, who was Senior Director, Network and Partner Strategy at EDS when he presented it (Leo is now Nexagent’s VP of Marketing). An element of one of the slides is shown below. You can also see a presentation by Charlie Muirhead from Nexagent – Case Study: Solving the Interconnect Challenge.

These, and other presentations from the conference, can be found in the 2006 MPLScon conference proceedings at the Webtorials archive site. You will need to register with Webtorials before you can access these papers.

If you are still unsure about what Nexagent actually does or how they could help your business – go visit them in Reading!

Note: I should declare a personal interest in Nexagent as a co-founder, though I am no longer involved with its day-to-day activities.

Addendum:  March 2008, EDS hoovers up Reading networking firm


SONET – SDH, the great survivors

March 8, 2007

When I first wrote about Synchronous Digital Hierarchy (SDH) and SONET (SDH is the European version of SONET) back in 1992, it was seen to be truly transformational for the network service provider industry. It marked a clear boundary between just continually enhancing an old asynchronous technology, belatedly called Plesiochronous Digital Hierarchy (PDH), and a new approach that could better utilise and manage the ever-increasing bandwidths then becoming available through the use of optical fibre. An up-to-date overview of SDH / SONET technology can be found in Wikipedia.

SONET was initially developed in the USA and adapted for the rest of the world a little later as SDH. This was needed because the rest of the world used different data rates to those used in the USA – something that later caused interesting inter-connect issues when connecting SONET to SDH networks. For the sake of this post, I will only use the term SDH from now on as, by installation base, SDH far outweighs SONET.

Probably even more amazing was that when it was launched, following many years of standardisation efforts, it was widely predicted that, along with ATM, it would become a major transmission technology. It has achieved just that. Although ATM hit the end stop pretty quickly and the dominance of IP was unforeseen at that time, SDH and SONET went on to be deployed by almost all carriers that offered traditional Public Switched Telephone Network (PSTN) voice services.

The benefits that were used to justify the rollout of synchronous networking at the time pretty much panned out in practice:

  • Clock rates tightly synchronised within a network through the use of atomic clocks
  • Synchronisation enabled easier network inter-connect between carriers
  • Considerably simplified and reduced costs of extracting low data rate channels from high-data rate backbone fibre optic cables
  • Considerable reduction in management costs and overheads compared to PDH systems.

In the late 1990s, as SDH came out of the telecommunications world rather than the IT world, it was often considered to be a legacy technology along with ATM. This was driven by the fact that SDH was a Time Division Multiplexed (TDM) based protocol with its roots deeply embedded in the voice world whereas the new IP driven data world was packet based.

In reality, carriers had by this time made hefty commitments to SDH and they were not about to throw that money away as they had done with ATM. What carriers wanted was a network infrastructure that could deliver both traditional TDM-based voice and data services and the newer packet-based services, i.e. a true multi-service network. In many ways SDH has been a technology that has not only survived the IP onslaught but will be around for many years to come. It will certainly be very hard to displace.

From a layer perspective, IP packets are now generally delivered using an MPLS infrastructure that was put in place to replace ATM switching. MPLS sits on top of SDH, which in turn sits on top of Dense Wave Division Multiplexing (DWDM) optical fibre. DWDM will be the subject of a future post.

One interesting aspect of all this is that quite a few carriers that started up in the late 1990s (many didn’t survive the telecommunications implosion) looked to a future packet-based world and did not wish to provide traditional TDM-based voice and data services. To this breed of carrier, the deployment of SDH did not seem in any way sensible, and they looked to remove this seemingly redundant layer from their architecture by building a network where MPLS sat straight on top of DWDM. This is a common architecture today for a greenfield network start-up looking to deliver legacy voice and data services purely over an IP network.

A number of ‘improved’ SDH alternatives sprang up in the late 1990s, the most visible being Cisco’s Dynamic Packet Transport (DPT) / Resilient Packet Ring (RPR) technology. To quote Cisco at the time:

DPT is a Cisco-developed, IP+Optical innovation which combines the intelligence of IP with the bandwidth efficiencies of optical rings. By connecting IP directly to fiber, DPT eliminates unnecessary equipment layers thus enabling service providers to optimize their networks for IP traffic with maximum efficiencies.

DPT never really caught on with carriers for a variety of technical and political reasons.

Another European initiative came from a small Swedish start-up of the time – Net Insight. It was called Dynamic Synchronous Transfer Mode (DTM). To quote Net Insight at the time:

DTM combines the advantages of guaranteed throughput, channel isolation, and inherent QoS found in SDH/SONET with the flexibility found in packet-based networks such as ATM and Gigabit Ethernet. DTM, first conceived in 1985 at Ericsson and developed by a team of network researchers including the three founders of Net Insight, uses innovative yet simple variable bandwidth channels.

Again, DTM failed to gain market traction.

SDH has a massive installed base in 2007 and continues to grow, albeit at a steady pace. For those carriers that have already deployed SDH, it is pretty much a no-brainer to carry on using it, while new carriers who focus on all services being delivered over a converged IP network would never deploy SDH.

SDH has always managed to keep up with the exploding data rates available on DWDM fibre systems, so it will maintain its position in carrier networks until incumbent carriers really decide to throw everything away and build fully converged networks based on IP. There are a lot of eyes on BT at present!

SDH extensions

In recent years, there have been a number of extensions to basic SDH to help it migrate to a packet oriented world:

Generic Framing Procedure (GFP): To make SDH more packet-friendly, the ITU, ANSI, and IETF have specified standards for transporting various services such as IP, ATM and Ethernet over SONET/SDH networks. GFP is a protocol for encapsulating packets over SONET/SDH networks.

Virtual Concatenation (VCAT): A number of smaller SONET / SDH payloads are concatenated into a single, right-sized virtual channel so that data traffic such as Packet over SONET (POS) or Ethernet can be transported more efficiently.

Link Capacity Adjustment Scheme (LCAS): When customers’ needs for capacity change, they want the change to occur without any disruption in the service. LCAS, a VCAT control mechanism, provides this capability.
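
As a back-of-the-envelope illustration of what VCAT and LCAS buy you, the standard SDH container payload rates can be used to size a group to a given Ethernet service. This is a simple capacity calculation that ignores framing overheads such as GFP:

    # Sketch of VCAT channel sizing: instead of jumping to the next huge
    # contiguous container, a channel is built from N smaller containers
    # sized to the service. Payload rates are standard SDH capacities in Mbit/s.
    import math

    PAYLOAD_MBIT = {"VC-12": 2.176, "VC-3": 48.384, "VC-4": 149.76}

    def vcat_group(service_mbit: float, container: str) -> str:
        n = math.ceil(service_mbit / PAYLOAD_MBIT[container])
        return f"{container}-{n}v ({n * PAYLOAD_MBIT[container]:.1f} Mbit/s)"

    print(vcat_group(100, "VC-12"))   # Fast Ethernet -> VC-12-46v
    print(vcat_group(1000, "VC-4"))   # Gigabit Ethernet -> VC-4-7v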

These standards have helped SDH / SONET adapt to a packet-based world – a capability that was missing from the original protocol standards of the early 1990s.

A more detailed overview of these SDH extensions is provided by Cisco.

At the end of the day there seem to be four transmission technologies that lie at the core of networks: IP, MPLS, Optical Transport Hierarchy (OTH) and, if the carrier is a traditional telco, SDH / SONET. It will be interesting to see how this pans out in the next decade. Have we reached the end game now? Are there other approaches that will start to come to the fore? What is the role of Ethernet? These are some interesting questions I will attempt to tackle in future posts.

The follow-on to this post is: Making SDH, DWDM and packet friendly


Panel Defies VC Wisdom

March 7, 2007

This video came across my desk recently from AlwaysOn and it is certainly entertaining and informative!

According to Guy Kawasaki, the panel’s moderator:

At the CommunityNext conference I moderated this panel with the founders of six very successful web properties: Akash Garg of hi5, Sean Suhl of Suicide Girls, Max Levchin of Slide, James Hong of HotorNot, Markus Frind of PlentyofFish, and Drew Curtis of Fark.

This is the most amusing panel that I’ve ever moderated, and the speakers defied many conventions of tech entrepreneurship—in particular the ones that venture capitalists believe are “proven.” If you’d like to learn how these companies became successful without “proven teams, proven technology, and proven business models,” you’ll love this video. Here’s a little factoid that blew my mind: Both Fark and PlentyofFish have only one employee!


Would u like to collaborate with YuuGuu?

March 7, 2007

When I was at an event organised by FirstCapital before Christmas, I met two guys who told me about a new service they were launching – YuuGuu. I’m sure I probably said something along the lines of “What was that again?” By way of explanation, according to their web site, Yuuguu derives from the Japanese word for fusion.

YuuGuu was founded by Anish Kapoor and Philip Hemsted “after becoming frustrated by working together remotely and not being able to see and share each other’s computer screens in real time.” “Recognising the changing world of work, Yuuguu came about as a solution to help people work together remotely, through any firewall, across different platforms, with as many colleagues as needed, just as if they were sat right next to each other.”

Put simply, YuuGuu is a web-based collaboration service that helps teams work together. I first used such services in the mid-1990s when Intel’s ProShare came to market and have been using them on and off ever since. Collaboration services are now widely used, as we all know (I have listed here some of the companies offering web-based collaboration services). There is a wide variety of tools that come under the heading – some are used more than others. For example, instant messaging (IM) has become part of all of our lives, although there is still resistance from many companies to allowing their employees to use it, due to security considerations. Skype is all about voice collaboration and makes an excellent audio bridge for conferences – if there is a sufficiently good Internet link to allow it to work correctly. Video conferencing is still expensive, unreliable and a black art, if my experiences over the last decade are anything to go by. Personally, I would rather have a good audio conference than a four-foot video screen where you cannot see the faces of the participants clearly because of the quality provided by a 2Mbit/s link.

The other interesting component of many web collaboration services is application sharing: the ability to remotely access and interact with an application on a remote machine. This was seen in Intel’s ProShare and formed a component of Microsoft’s NetMeeting, for example. My most recent use of such software was using the well-known WebEx service last year to show presentations and demonstrate some network software to potential customers around the world. This generally went well (though WebEx is certainly not the easiest service to use), but it was being used in what I would call a non-interactive, one-way manner, i.e. Internet delays were not that important, unlike when you are editing text remotely. My experience of doing just that came this year when I attempted to use my blog host’s on-line HTML editor. This was a complete disaster, as I lost a lot of text when the web page failed to update correctly several times. That editor, however, was at least designed for remote use, as you edit on your own machine before uploading. If you attempt to edit documents on a remote machine using applications that are not designed for remote access, such as Microsoft Word and PowerPoint, high latency on key presses can be disconcerting and disrupt your flow of thought. This type of application sharing demands good Internet performance with zero packet loss.

My first interaction with the software was with its availability flag, described by YuuGuu as “Presence – instantly tell when your workmates or friends are around”. In practice, this is a simple indicator that you can set in the client to say whether you are available or not. I will be writing quite a long post on ‘presence’ in the future so I won’t dwell on the subject too much here. YuuGuu will be adding more capabilities to this simple availability feature in the future. They do use XMPP, so at least they will be able to interconnect or federate a buddy list with other IM services – the sketch below shows the kind of presence stanza involved.
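
For the curious, an XMPP availability flag boils down to a tiny XML ‘presence’ stanza. A minimal Python sketch of building one (the addresses are invented, and a real client would send this over an authenticated XML stream):

    # Build a minimal XMPP presence stanza: the <show> element carries the
    # availability state and <status> a free-text note for the buddy list.
    import xml.etree.ElementTree as ET

    presence = ET.Element("presence", attrib={
        "from": "phil@yuuguu.example/desktop",
        "to": "chris@yuuguu.example",
    })
    ET.SubElement(presence, "show").text = "away"            # availability state
    ET.SubElement(presence, "status").text = "In a meeting"  # free-text note

    print(ET.tostring(presence, encoding="unicode"))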

Philip Hemsted’s cluttered Desktop

YuuGuu is very straightforward to install and start. My first call was with Philip Hemsted and, with a click of a button and with his permission, I was able to see his desktop. At the moment you can only share a complete desktop and cannot specify just a single application, so there could be security concerns if you are giving access to somebody who is not in the same company as yourself.

Philip went through a couple of PowerPoint slides and then I asked to access his copy of Word to do some remote editing. This went fine, but there was around a two-second delay when typing, so I couldn’t recommend that you start writing the second War and Peace with it – although this is typical of most remote access programmes. The better you are at touch typing, the easier you will find it to use!

One really nice feature is its group capability. You can call a third party into the collaboration session by selecting them from the buddy list and they will be able to see the shared desktop as well as become part of the YuuGuu IM session. I’m sure that in most collaboration sessions Skype would be used in parallel, so a VoIP capability would be a beneficial future addition.

There are lots of plans to upgrade the current service with additional capabilities that would be required for corporate use such as history storage for their audit needs and SLAs. These would most likely form the basis of a future paid-for premium service.

What makes the product interesting to me is the simplicity of its use, its ability to work on a Mac in addition to Windows, and its simple group capability. Go try it – you have nothing to lose! Interestingly, like Crisp Thinking, they are based in the north of England – in Manchester.


#2 My 1993 predictions for 2003 – hah!

March 6, 2007

The Importance of Data and Multimedia.

After looking at my 1992 forecasts for 2003 for Traditional Telephony: Advanced Services in #1 My 1993 predictions for 2003 – hah!, let’s look at the Importance of Data and Multimedia. I’ll mark the things I got right in green, things I got wrong in red, and the maybe-rights in orange.

The public network operators have still not written down all their 64kbit/s switching networks, but all now have the capability of transporting integrated data in the form of voice, image, data, and video. At least 50% of the LAN-originated ATM packets carried by the public operators is non-voice traffic. Video traffic such as video mail, video telephone calls, and multimedia is common. Information delivered to the home, business, and individuals while on the move is managed by advanced network services integrated with customers’ equipment whether that be a simple telephone, smart telephone, PC, or PDA.

Well, it’s certainly the case that carriers have still not written off their 64kbit/s switching networks, and I guess this will take several more decades to happen! Video is still not that common, but with the advent of YouTube and Joost, maybe video nirvana is just around the corner. I’m not sure that we have seen advanced network services embedded in devices either! However, on reflection, maybe Wi-Fi fits the prediction rather nicely?

Most public operators are now not only transporting video but also delivering and originating information, business video, and entertainment services. Telecommunications operators have strong alliances or joint ventures with information providers (IPs), software houses, and equipment manufacturers, as it is now realised that none by themselves can succeed or can invest sufficient skills and capital to succeed alone. Telecommunications operators have developed strong software skills to support these new businesses. Many staff, who were previously working in the computer industry, have now moved to the new sunrise companies of telecommunications.

Telecommunication operators have not really developed strong software skills from the perspective of developing applications themselves, but they have certainly embraced integration! The last prediction was interesting, as it could be said to pertain to the Internet bubble, when many staff moved to the telecoms industry from the computing industry. However, they were forced to leave just as rapidly when the bubble burst!

  • the network should be able to store the required information
  • the network should have the capability to transfer and switch multiple media instead of just voice
  • multimedia network services need to be integrated with desktop equipment to form a seamless feature-rich application
  • rapidly changing customer requirements means that operators should be able to reduce the new product development cycle and launch products quickly and effectively, ahead of competition; being proactive instead of reactive to competitive moves would offer a considerable edge.

Information storage on the network is pretty much a reality when you consider on-line back-up storage services, though it has hardly become pervasive. Few are generally willing to pay for on-line storage when hard disk prices have been plummeting while their capacities have been exploding.

But what I think I had in mind was the storage of information in the network that was then held (and still is) on personal computers, and also network-based productivity tools. This has happened in a variety of ways:

  • The rise and fall of Application Service Providers in the bubble
  • The success of certain network-based services such as Salesforce.com
  • The recent interest in pushing network-based productivity tools by the large media companies such as Google and Yahoo.

The comment about converged networks is still a theme in progress, but the integration between network-based services and desktop equipment is pretty much the norm these days with the Internet.

This is a mixed bag of success really, and it seems really dated in its language. It shows just how much the Internet has truly transformed our view of network-based data services.


GSM phone jammers and detectors

March 2, 2007

In an earlier post – The mobile or cell phone appears normal, but… – I mentioned the subject of mobile or cell phones that could be used for spying on their owners. In a similar vein, many people have not heard about another category of illegal electronic device: GSM blockers or jammers.

Although there actually are times when it can be quite entertaining to listen in to other people’s phone calls, most experiences are really annoying. One humorous example of the former is a recent case mentioned by Peter Cochrane on Silicon.com, where he listened in to the conversation of a guy who had visited one of those ranches in Texas where the main activities are not concerned with rodeo riding… Peter has a great web site for those interested in a contrarian view of the technology and telecommunications world – in the late 1990s, Peter was BT’s Chief Technologist.

Anyway, getting back to the subject of the unconscious misuse of mobile or cell phones: we all come across examples of this on a regular basis, and it is a popular subject for blogs. I have been collecting examples for quite some time and I’ll put a list up as a future post. In fact, there are even several books on the subject – this is one of the most entertaining: The Jerk with the Cell Phone: A Survival Guide for the Rest of Us by Barbara Pachter and Susan F. Magee:

In The Jerk with the Cell Phone discover the smartest – and funniest – ways to deal with cell phone jerks without becoming a jerk yourself. Included are the world’s most unbelievable but true cell phone horror stories, revenge testimonials, and cartoons highlighting just how brainless we’ve all become about the technology we all love to hate but can’t live without.

The above book talks about personal intervention when someone starts talking loudly on a phone in an inappropriate place, and I have certainly done this on several occasions on a train. However, there are electronic solutions to the problem, although I would certainly not condone their use as they are illegal in the United Kingdom, as in many other countries. I’m talking about GSM phone blockers or jammers.

Like spy phones, these little devices are available if you know where to go. First stop is eBay, of course! Here is one that can be shipped to you from the Far East. However, I would warn you that there could be consequences if you were caught importing one into the UK…

One outlet in the UK is Global Gadget, who sell a wide variety of jammers, but their home page says:

We sell all types of cell phone jammers to suit all needs. From the small handheld personal mini jammers to the mega power Y2000 model. Whatever your requirement is, we have a unit to deal with your problem. Whether it is to restore some peace and quiet or to stop the unauthorised use of the mobile phones in restricted areas including anti terrorism measures, then we have a cell phone jammer to provide the solution.

Please note that we do not sell jammers to UK customers due to Ofcom regulations. Also, we do not sell jammers to any end users in EU countries due to lack of CE approval of the jammer products.

One model covers 800/900/1800/1900/3G networks.

Another interesting and more legitimate gadget is the cell phone detector.

“This pocket sized device is ideal for use for detecting people using mobile phones in use in offices, prisons or hospitals etc, anywhere that people are not supposed to be using phones or are to be discouraged. This model is super sensitive and will detect cellular phones up to 40 feet away, once detected the sensitivity can be adjusted to home in on the signal, making this device particularly suitable for use in prisons and other establishments with concealed rooms.”

The dark side of mobile or cell phone usage is an interesting area to delve into occasionally!