Content transcoding hits mobiles

October 18, 2007


Content adaptation and transcoding are high on the agenda of many small mobile content and services companies at the moment, and are causing more bad language and angst than anything else I can remember in the industry in recent times. Before I delve into that issue, what is content adaptation?

Content translation and the need for it on the Internet is as old as the invention of the browser and is caused by standards, or I should say the interpretation of them. Although HTML, the language of the web page, transformed the nature of the Internet by enabling anyone to publish and access information through the World Wide Web, there were many areas of the specification that left a sufficient degree of fogginess for browser developers to ‘fill in’ with their interpretation of how content should be displayed.

In the early days, most of us engaged with the WWW through the Netscape Navigator browser. Netscape epitomised all the early enthusiasm for the Internet, and its IPO on August 9, 1995 set in train the fabulously exciting ‘bubble’ of the late 1990s. Indeed, the Netscape browser held over 90% market share in the years following the IPO.

This inherent market monopoly made it very easy for early web page developers, as content only needed to run on one browser. However, that did not make life particularly easy in practice, because the Netscape Navigator browser had so many problems in how it arbitrarily interpreted HTML standards. A browser is only an interpreter after all and, like human interpreters, is prone to misinterpretation when there are gaps in the standards.

Browser market shares. Source: Wikipedia

Content Adaptation

Sometimes the drafted HTML displayed fine in Navigator, but at other times it didn’t. This led to whole swathes of work-arounds that made the task of developing interesting content a rather hit-and-miss affair. A good example of this is the HTML standard that says the TABLE tag should support a CELLSPACING attribute to define the space between parts of the table. But the standards don’t define a default value for that attribute, so unless you explicitly define CELLSPACING when building your page, two browsers may use different amounts of white space in your table.

(Credit: NetMechanic.) This type of problem was further complicated by the adoption of browser-specific extensions. The original HTML specifications were rather basic and it was quite easy to envision and implement extensions that enabled better presentation of content. Netscape did this with abandon and even invented a web page scripting language that is universal today – JavaScript (which has nothing to do with Sun’s Java language).

Early JavaScript was riddled with problems and, from my limited experience of writing in the language, most of the time was spent trying to understand why code that looked correct according to the rule book failed to work in practice!

Around this time I remember attending a Microsoft presentation in Reston where Bill Gates spent an hour talking about why Microsoft were not in favour of the Internet and why they were not going to create a browser themselves. Oh how times change: within a year BG announced that the whole company was going to focus on the Internet and that their browser would be given away free to “kill Netscape”.

In fact, I personally lauded Internet Explorer when it hit the market because, in my opinion, it actually worked very well. It was faster than Navigator but, more importantly, when you wrote the HTML or JavaScript the code worked as you expected it to. This made life so much easier. The problem was that you now had to write pages that would run on both browsers or you risked alienating a significant sector of your users. As there still are today, there were many users who flatly refused to change from Navigator to IE because of their emotional dislike of Microsoft.

From that point on it was downhill for a decade, as you had to include browser detection on your web site so that appropriately coded browser-specific, and even worse version-specific, content could be sent to users. Without this, it was just not possible to guarantee that users would be able to see your content. Below is the typical code you had to use:

// Branch on the browser's reported application name so that the right
// browser-specific content can be sent.
var browserName = navigator.appName;
if (browserName == "Netscape")
{
 alert("Hi Netscape User!");
}
else
{
 if (browserName == "Microsoft Internet Explorer")
 {
  alert("Hi, Explorer User!");
 }
}

If we now fast forward to 2007, the world of browsers has changed tremendously but the problem has not gone away. Although it is less common to detect browser types and send browser-specific code, considerable problems still exist in making content display in the same way on all browsers. I can say from practical experience that making an HTML page with extensive style sheets display correctly on Firefox, IE 6 and IE 7 is not a particularly easy task, and a definitely frustrating one!

The need to adapt content to a particular browser was the first example of what is now called content adaptation. Another technology in this space is called content transcoding.

Content transcoding

I first came across true content transcoding when I was working with the first real implementation of a Video on Demand service in Hong Kong Telecom in the mid 1990s. This was based on proprietary technology, and a colleague and I were of the opinion that it should be based on IP technologies to be future proof. Although we lost that battle we did manage to get Mercury in the UK to base its VoD developments on IP. Mercury went on to sell its consumer assets to NTL, so I’m pleased that the two of us managed to get IP established as the basis of broadband video services in the UK at the time.

Around this time, Netscape was keen to move Navigator into the consumer market, but it was too bloated to run on a set top box, so Netscape created a new division called Navio to build a cut-down browser for the consumer set top box market. Navio’s main aim, however, was to create a range of non-PC Internet access platforms.

This was all part of the anti-PC / Microsoft community that then existed (exists?) in Silicon Valley. Navio morphed into Network Computer Inc. owned by Oracle and went on to build another icon of the time – the network computer. NCI changed its name to Liberate when it IPOed in 1999. Sadly, Liberate went into receivership in the early 2000s but lives on today in the form of SeaChange who bought their assets.

Anyway, sorry for the sidetrack, but it was through Navio that I first came across the need to transcode content, as a normal web page just looked awful on a TV set. TV Navigator also transcoded HTML seamlessly into MPEG. The main problems in presenting a web page on a TV were:

Fonts: Text that could be read easily on a PC could often not be read on a TV because the font size was too small or the font was too complex. So, fonts were increased in size and simplified.

Images: Another issue was that the small amount of memory on an STB meant that the browser needed to be cut down in size to run. One way of achieving this was to cut down the number of content types that could be supported. For example, instead of being able to display all picture formats (e.g. BMP, GIF, JPG), the browser would only render JPG pictures. This meant that pictures taken off the web needed to be converted to JPG at the server or head-end before being sent to the STB.

Rendering and resizing: Liberate automatically resized content to fit on the television screen.

Correcting content: For example, horizontal scrolling is not considered a ‘TV-like’ property, so content was scaled to fit the horizontal screen dimensions. If more space was needed, vertical scrolling was enabled to allow the viewer to navigate the page. The transcoder would also automatically wrap text that extended outside a given frame’s area. In the case of tables, the transcoder would ignore widths specified in HTML if the cell or the table was too wide to fit within the screen dimensions.
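
To make those last two behaviours concrete, here is a rough JavaScript sketch of the kind of rewriting rules described above. It is purely illustrative and is not Liberate’s actual code; the screen width and minimum font size are assumed values.

// Illustrative only: the sort of rules a TV transcoder might apply.
// TV_WIDTH and MIN_FONT_SIZE are assumed values, not Liberate's real settings.
var TV_WIDTH = 640;      // assumed usable horizontal resolution of the TV
var MIN_FONT_SIZE = 3;   // assumed smallest readable HTML font size on a TV

function transcodeForTv(html) {
  // Ignore quoted numeric width attributes wider than the screen so content reflows.
  html = html.replace(/\swidth="(\d+)"/gi, function (match, w) {
    return (parseInt(w, 10) > TV_WIDTH) ? "" : match;
  });
  // Bump up tiny font sizes so text remains readable at viewing distance.
  html = html.replace(/(<font[^>]*?size=")(\d+)(")/gi, function (match, pre, size, post) {
    return pre + Math.max(parseInt(size, 10), MIN_FONT_SIZE) + post;
  });
  return html;
}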

In practice, most VoD or IPTV services only offered closed walled-garden services at the time, so most of the content was specifically developed for an operator’s VoD service.

WAP and the ‘Mobile Internet’ come along

Content adaptation and transcoding trundled along quite happily in the background as a requirement for displaying content on non-PC platforms for many years until 2007 and the belated advent of open internet access on mobile or cell phones.

In the late 1990s the world was agog with the Internet which was accessed using personal computers via LANs or dial-up modems. There was clearly an opportunity to bring the ‘Internet’ to the mobile or cell phone. I have put quotation marks around the Internet as the mobile industry has never seen the Internet in the same light as PC users.

The WAP initiative was aimed at achieving this goal and at least it can be credited with a concept that lives on to this day - Mobile Internet (WAP, GPRS, HSDPA on the move!). Data facilities on mobile phones were really quite crude at the time. Displays were monochrome with a very limited resolution. Moreover, the data rates that were achievable at the time over the air were really very low so this necessitated WAP content standards to take this into account.

WAP was in essence simplified HTML, and if a content provider wanted to create a service that could be accessed from a mobile phone then they needed to write it in WAP. Services were very simple, as shown in the picture above, and could quite easily be navigated using a thumb.

The main point was that it was quite natural for developers to specifically create a web site that could be easily used on a mobile phone. Content adaptation took place in the authoring itself and there was no need for automated transcoding of content. If you accessed a WAP site it may have been a little slow because of the reliance on GPRS, but services were quite easy and intuitive to use. WAP was extremely basic, so it was updated to XHTML, which provided improved look-and-feel features that could be displayed on the quickly improving mobile phones.

In 2007 we are beginning to see phones with full-capability browsers, backed up by broadband 3G bearers, making Internet access a reality on phones. Now you may think this is just great, but in practice phones are not PCs by a long chalk. Specifically, we are back to browsers interpreting pages differently and, more importantly, the screen sizes on mobile phones are too small to display standard web pages in a way that allows a user to navigate them with ease (things are changing quite rapidly with Apple’s iPhone technology).

Today, as in the early days of WAP, most companies who seriously offer mobile phone content will create a site specifically developed for mobile phone users. Often these sites will have URLs such as m.xxxx.com or xxxx.mobi so that a user can tell that the site is intended for use on a mobile phone.

Although there was a lot of frustration about phones’ capabilities, everything at the mobile phone party was generally OK.

Mobile phone operators have been under a lot of criticism for as long as anyone can remember about their lack of understanding of the Internet and their focus on providing closed walled-garden services, but that seems to be changing at long last. They have recognised that their phones are now capable of being a reasonable platform for accessing the WWW. They have also opened their eyes and realised that there is real revenue to be derived from allowing their users to access the web – albeit in a controlled manner.

When they opened their browsers to the WWW, they realised that this was not without its challenges. In particular, very few web sites have been developed that can be browsed on a mobile phone. Even more challenging is that the mobile phone content industry can be called embryonic at best, with few service providers that are well known. Customers naturally want to use the web services and visit the web sites that they use on their PCs. Of course, most of these look dreadful on a mobile phone and cannot be used in practice. Although many of the bigger companies are now beginning to adapt their sites to the mobile – Google and MySpace to name but two – 99.9999% (as many 9s as you wish) of sites are designed for a PC only.

This has made mobile phone operators turn to content transcoding to keep their users on their data services and hence keep their revenues growing. The transcoder is placed in the network and intercepts users’ traffic. If a web page needs to be modified so that it will display ‘correctly’ on a particular mobile phone, the transcoder will automatically change the web page’s content to a layout that it thinks will display correctly on that device. Two of the largest transcoding companies in this space are Openwave and Novarra.
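
To illustrate where such a transcoder sits, here is a minimal sketch written in present-day Node.js purely for illustration – it is emphatically not how Openwave’s or Novarra’s products actually work. The origin host, the phone-detection regex and the rewriting rule are all invented for the example: the proxy fetches the requested page and, if the User-Agent looks like a phone, rewrites the markup before relaying it.

// A minimal sketch of an in-network transcoding proxy (assumptions flagged above).
var http = require('http');

function looksLikeAPhone(userAgent) {
  // Assumed heuristic - real transcoders use full device databases, not a regex.
  return /Nokia|SonyEricsson|Symbian|MIDP|Windows CE/i.test(userAgent || '');
}

function adaptForSmallScreen(html) {
  // Crude stand-in for real transcoding: drop fixed widths so the page reflows.
  return html.replace(/\swidth="\d+"/gi, '');
}

http.createServer(function (clientReq, clientRes) {
  var options = {
    host: 'www.example.com',        // hypothetical origin site
    path: clientReq.url,
    headers: { 'user-agent': clientReq.headers['user-agent'] }
  };
  http.get(options, function (originRes) {
    var body = '';
    originRes.setEncoding('utf8');
    originRes.on('data', function (chunk) { body += chunk; });
    originRes.on('end', function () {
      if (looksLikeAPhone(clientReq.headers['user-agent'])) {
        body = adaptForSmallScreen(body);
      }
      clientRes.writeHead(originRes.statusCode, { 'Content-Type': 'text/html' });
      clientRes.end(body);
    });
  });
}).listen(8080);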

This issue came to the fore recently (September 2007) in a post by Luca Passani on learning that Vodafone had implemented content transcoding by intercepting and modifying the User Agent dialogue that takes place between mobile phone browsers and web sites. From Luca’s page, this dialogue is along the lines of:

  • I am a Nokia 6288,
  • I can run Java apps MIDP2-CDLC 1,
  • I support MP3 ringtones
  • …and so on

His concern, quite rightly, is that this is a standard dialogue that goes on across the whole of the WWW and enables a web site to adapt and provide appropriate content to the device requesting it. Without it, content providers are unable to ensure that their users will get a consistent experience no matter what phone they are using. Incidentally, Luca provides an open-source XML file called WURFL that contains the capability profile of most mobile phones. This is used by content providers, following a user agent dialogue, to ensure that the content they send to a phone will run – it contains the core information needed to enable content adaptation.
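
As a hedged sketch of how that capability information gets used, the snippet below looks a device up in a WURFL-style table keyed on fragments of the User-Agent string. The entries, field names and User-Agent string are invented for illustration; a real deployment would load the actual WURFL XML file with its far richer capability set.

// Illustrative only: the device entries and capability fields are made up.
var deviceCapabilities = {
  'Nokia6288':         { screenWidth: 240, markup: 'xhtml-mp', mp3Ringtones: true },
  'SonyEricssonK750i': { screenWidth: 176, markup: 'xhtml-mp', mp3Ringtones: true }
};

function adaptationProfileFor(userAgent) {
  for (var device in deviceCapabilities) {
    if (userAgent && userAgent.indexOf(device) !== -1) {
      var caps = deviceCapabilities[device];
      // Serve markup and image sizes the handset is known to support.
      return { markup: caps.markup, maxImageWidth: caps.screenWidth };
    }
  }
  // No match: assume a desktop browser and serve the full site.
  return { markup: 'html', maxImageWidth: 1024 };
}

console.log(adaptationProfileFor('Nokia6288/2.05 Profile/MIDP-2.0'));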

It is conjectured that, if every mobile operator in the world uses transcoders – and it looks like this is going to be the case – then this will add another layer of confusion to the already high challenge of providing content to mobile phones. Not only will content providers have to understand the capabilities of each phone, but they will also need to understand when and how each operator uses transcoding.

Personally I am against transcoding in this market, and the reason why can be seen in this excellent posting by Nigel Choi and Luca Passani. In most cases, no automatic transcoding of a standard WWW web page can be better than providing a dedicated page written specifically for a mobile phone. Yes, there is a benefit for mobile operators in that no matter what page a user selects, something will always be displayed. But will that page be usable?

Of course, transcoders should pass through untouched any web site that is tagged with an m.xxxx or xxxx.mobi URL, as such a site should be capable of working on any mobile phone, but in these early days of transcoding implementation this is not always happening, it seems.

Moreover, the mobile operators say that this situation can be avoided by 3rd party content providers applying to be on the operators’ white list of approved services. If this turns out to be a universal practice then content providers would need to gain approval and get on the lists of all the mobile operators in the world – wow! Imagine an equivalent situation on the PC where content providers needed to get approval from every ISP. Well, you can’t, can you?

This move represents another aspect of how the control culture of the mobile phone industry comes to the fore, placing its needs before those of 3rd party content providers. This can only damage the 3rd party mobile content and service industry and further hold back the coming of an effective mobile internet. A sad day indeed. Surely it would be better to play a long game and encourage web sites to create mobile versions of their services?


The Bluetooth standards maze

October 2, 2007

This posting focuses on low-power wireless technologies that enable communication between devices that are located within a few feet of each other. This applies to voice communications as well as data communication.

This whole area is becoming quite complex with a whole raft of standards being worked on – ULB, UWB, Wibree, Zigbee etc. This may seem rather strange bearing in mind the wide-scale use of the key wireless technology in this space – Bluetooth.

We are all familiar with Bluetooth as it is now as ubiquitous in use as Wi-Fi but it has had a chequered history by any standard and this has negatively affected its take-up across many market sectors.

Bluetooth first saw the light of day as an ‘invention’ by Ericsson in Sweden back in 1994 and was intended as a wireless standard for use as a low-power inter-’gadget’ communication mechanism (Ericsson actually closed its Bluetooth division in 2004). This initially meant hands-free earpieces for use with mobile phones. This is actually quite a demanding application: unlike data on an IP network there is no room for drop-outs, as these would be a cause of severe dissatisfaction for users.

Incidentally, I always remember the first Sony Ericsson hands-free earpiece that I bought in 2000, as everyone kept giving me weird looks when I wore it in the street – nothing much has changed, I think!

Standardisation of Bluetooth was taken over by the Bluetooth Special Interest Group (SIG) following its formation in 1998 by Ericsson, IBM, Intel, Toshiba, and Nokia. Like many new technologies, it was launched with great industry fanfare as the up-and-coming new thing. This was pretty much at the same time as WAP (covered in a previous post: WAP, GPRS, HSDPA on the move!) was being evangelised. Both of these initiatives initially failed to live up to consumer expectations following the extensive press and vendor coverage.

Bluetooth’s strength lies in its core feature set:

  • It operates in the ‘no licence’ industrial, scientific and medical (ISM) spectrum of 2.4 to 2.485 GHz (as does Wi-Fi of course)
  • It uses a spread spectrum, frequency hopping, full-duplex signal at a nominal rate of 1600 hops/sec
  • Power can be altered from 100mW (Class 1) down to 1mW (Class 3), thus effectively reducing the distance of transmission from 10 metres to 1 metre
  • It uses an adaptive frequency hopping (AFH) capability, with the transmission hopping between 79 frequencies at 1 MHz intervals to help reduce co-channel interference from other users of the ISM band. This is key to giving Bluetooth a high degree of interference immunity (a rough sketch of the hop arithmetic follows this list)
  • Bluetooth pairing occurs when two Bluetooth devices agree to communicate with each other and establish a connection. This works because each Bluetooth device has a unique name given to it by the user or set as the default
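
As a back-of-the-envelope illustration of the hopping numbers quoted above – and emphatically not the real Bluetooth hop-selection algorithm, which is derived from the master device’s clock and address – the arithmetic looks like this:

// Illustrative arithmetic only - not the Bluetooth hop-selection kernel.
var FIRST_CHANNEL_MHZ = 2402;  // lowest Bluetooth channel in the 2.4GHz ISM band
var CHANNELS = 79;             // 79 channels at 1 MHz spacing
var HOPS_PER_SECOND = 1600;    // nominal hop rate

var slotMicroseconds = 1e6 / HOPS_PER_SECOND;  // 625 microseconds per hop

function randomChannelMHz() {
  // Pick a pseudo-random channel; AFH would additionally mask out busy channels.
  return FIRST_CHANNEL_MHZ + Math.floor(Math.random() * CHANNELS);
}

console.log(slotMicroseconds + ' us per hop, next hop at ' + randomChannelMHz() + ' MHz');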

Several issues beset early Bluetooth deployments:

  • Widespread incompatibility meant that Bluetooth devices from different vendors often failed to work with each other. This caused quite a few problems, both in the hands-free mobile world and the personal computer peripheral world, and led to several quick updates.
  • In the PC world, user interfaces were poor forcing ordinary users to become experts in finding their way around arcane set-up menus.
  • There were also a considerable number of issues arising in the area of security. There was much discussion about Bluejacking, where an individual could send unsolicited messages to nearby phones that were ‘discoverable’. However, people who turned off discoverability then needed an extra step to receive legitimate data transfers, thus complicating legitimate use.

Early versions of the standard were fraught with problems and the 1Mbit/s v1.0 release was rapidly updated to v1.1 which overcame many of the early problems. This was followed up by v1.2 in 2003 which helped reduce co-channel interference from non-Bluetooth wireless technologies such as Wi-Fi.

In 2004, V2.0 + Enhanced Data Rate (EDR) was announced that offered higher data rates – up to 3Mbit/s – and reduced power consumption.

To bring us up to date, V2.1 + Enhanced Data Rate (EDR) was released in August 2007, offering a number of enhancements, the most significant of which seems to be an improved and easier-to-use mechanism for pairing devices.

The next version of Bluetooth is v3.0 which will be based on ultra-wideband (UWB) wireless technology. This is called high speed Bluetooth while there is another proposed variant, announced in June 2007, called Ultra Low Power Bluetooth (ULB).

Over this succession of updates, most of the early problems that plagued Bluetooth have been addressed. However, it cannot be assumed that Bluetooth’s market share is unassailable, as there are a number of alternatives on the table and Bluetooth is viewed as not meeting all the market’s needs – especially those of the automotive market.

Low-power wireless

Ultra Low-power Bluetooth (ULB)

Before talking about ULB, we need to look at one of its antecedents, Wibree.

This must be one of the shortest-lived ‘standards’ of all time! Wibree was announced in October 2006 by Nokia, though it did indicate that it would be willing to merge the activity with other standards efforts if that made sense.

“Nokia today introduced Wibree technology as an open industry initiative extending local connectivity to small devices… consuming only a fraction of the power compared to other such radio technologies, enabling smaller and less costly implementations and being easy to integrate with Bluetooth solutions.”

Nokia felt that there was no agreed open standard for ultra-low power communications, so it decided that it was going to develop one. One of the features that consumes power in Bluetooth is its frequency hopping capability, so Wibree would not use it. Wibree is also more tuned to data applications as it uses variable packet lengths, unlike the fixed packet length of Bluetooth. This looks similar to the major argument that took place when ATM (The demise of ATM) was first mooted: the voice community wanted short packets while the data community wanted long or variable packets – the industry ended up with a compromise that suited neither application.

More on Wibree can be found at wibree.com. According to this site:

“Wibree and Bluetooth technology are complementary technologies. Bluetooth technology is well-suited for streaming and data-intensive applications such as file transfer and Wibree is designed for applications where ultra low power consumption, small size and low cost are the critical requirements … such as watches and sports sensors”.

On June 12th 2007 Wibree merged with the Bluetooth SIG and the webcast of the event can be seen here. This will result in Wibree becoming part of the Bluetooth specification as an ultra low-power extension of Bluetooth known as ULB.

ULB is intended to complement the existing Bluetooth standard by incorporating Wibree’s original target of reducing the power consumption of devices using it – it aims to consume only a fraction of the power that current Bluetooth devices consume. ULB will be designed to operate in a standalone mode or in a dual mode as a bolt-on to Bluetooth. ULB will reuse existing Bluetooth antennas and needs just a small amount of additional logic when operating in dual mode with standard Bluetooth, so it should not add too much to costs.

When announced, the Bluetooth SIG said that ULB was aimed at wireless-enabling small personal devices such as sports sensors (heart rate monitors), healthcare monitors (blood pressure monitors), watches (remote control of phones or MP3 players) and automotive devices (tyre pressure monitors).

Zigbee

The Zigbee standard is managed by the Zigbee Alliance and is built on the IEEE 802.15.4 standard. It was ratified in 2004.

According to the Alliance site:

“ZigBee was created to address the market need for a cost-effective, standards-based wireless networking solution that supports low data-rates, low-power consumption, security, and reliability.

ZigBee is the only standards-based technology that addresses the unique needs of most remote monitoring and control and sensory network applications.”

This puts the Bluetooth ULB standard in competition with Zigbee, as it aims to be cheaper and simpler to implement than Bluetooth itself. In a similar vein to the ULB team’s announcements, Zigbee claims to use about 10% of the software and power required to run a Bluetooth node.

A good overview can be found here – ZigBee Alliance Tutorial – which talks about all the same applications as outlined in the joint Wibree / Bluetooth ULB announcement above. Zigbee’s characteristics are:

  • Low power compared to Bluetooth
  • High resilience, as it will operate in a much noisier environment than Bluetooth or Wi-Fi
  • Full mesh working between nodes
  • 250kbit/s data rate
  • Up to 65,536 nodes.

The alliance says this makes Zigbee ideal for both home automation and industrial applications.

It’s interesting to see that one of Zigbee’s standards competitors has posted an article entitled New Tests Cast Doubts on ZigBee. All’s fair in love and war, I guess!

So there we have it. It looks like Bluetooth ULB is being defined to compete with Zigbee.


High-speed wireless

High Speed Bluetooth 3.0

There doesn’t seem to be too much information to be found on the proposed Bluetooth version 3.0. However, on the WiMedia Alliance site I found this statement by Michael Foley, Executive Director, Bluetooth SIG. WiMedia is the organisation that lies behind Ultra-Wideband (UWB) wireless standards.

“Having considered the UWB technology options, the decision ultimately came down to what our members want, which is to leverage their current investments in both UWB and Bluetooth technologies and meet the high-speed demands of their customers. By working closely with the WiMedia Alliance to create the next version of Bluetooth technology, we will enable our members to do just that.”

According to a May 2007 presentation entitled High-Speed Bluetooth on the Wimedia site, the Bluetooth SIG will reference the WiMedia Alliance [UWB] specification and the solution will be branded with Bluetooth trademarks. The solution will be backwards compatible with the current 2.0 Bluetooth standard.

It also talks about a combined Bluetooth/UWB stack:

  • With high data rate mode devices containing two radios initially
  • Over time, the radios will become more tightly integrated sharing components

The specification will be completed in Q4 2007 and first silicon prototyping completed in Q3 2008. I have to say that this approach does not look to be either elegant or low cost to me. However, time will tell.

That completes the Bluetooth camp of wireless technologies. Let’s look at some others.


Ultra-Wideband (UWB)

As the Bluetooth SIG has adopted UWB as the base of Bluetooth 3.0, what actually is UWB? A good UWB overview presentation can be found here. Essentially, UWB is a wireless protocol that can deliver a high bandwidth over short distances.

Its characteristics are:

  • UWB uses spread spectrum techniques over a very wide bandwidth in the 3.1 to 10GHz spectrum in the US and 6.0 to 8.5GHz in Europe
  • It uses very low power so that it can ‘co-exist’ with other services that use the same spectrum
  • It aims to deliver 480Mbit/s at distances of several metres

The following diagram from the presentation describes it well:

In theory, there should never be an instance where UWB interferes with an existing licensed service. In some ways, this has similarities to BPL (The curse of BPL), though it should not be so profound in its effects. To avoid interference it uses Detect and Avoid (DAA) technology which, I guess, is fairly self-explanatory, so I won’t go into too much detail here.

One company that is making UWB chips is Artimi, based in Cambridge, UK.

Wireless USB (WUSB)

In the same way that the Bluetooth SIG has adopted UWB, the USB Implementers Forum has adopted WiMedia’s UWB specification as the basis of Wireless USB. According to Jeff Ravencraft, President and Chairman, USB-IF and Technology Strategist, Intel:

“Certified Wireless USB from the USB-IF, built on WiMedia’s UWB platform, is designed to usher in today’s more than 2 billion wired USB devices into the area of wireless connectivity while providing a robust wireless solution for future implementations. The WiMedia Radio Platform meets our objective of using industry standards to ensure coexistence with other WiMedia UWB connectivity protocols.”

A presentation on Wireless USB can be downloaded here.

Wireless USB will deliver around the same bandwidth as Bluetooth 3.0 – 480Mbit/s at 3 metres – because it is based on the same technology, and it will be built into Microsoft Vista™.

One is bound to ask what the difference is between Wireless USB and Bluetooth 3.0, as they are going to be based on the same standard. Well, one answer is that Wireless USB products are shipping today, as seen in the Belkin Wireless USB Adapter shown on the right.

A real benefit of both standards adopting UWB is that they will use the same underlying radio. Manufacturers can choose whichever standard they want and there is no need to change hardware designs. This can only help both standards’ adoption.

However, because of the wide spectrum required to run UWB – multiple GHz – different spectrum ranges in each region are being allocated. This is a very big problem as it means that radios in each country or region will need to be different to accommodate the disparate regulatory requirements.

In the same way that Bluetooth ULB will compete with Zigbee (an available technology), Bluetooth 3.0 will compete with Wireless USB (also an available technology).

Round up

So there you have it – the relationships between Bluetooth 2.0, Bluetooth 3.0, Wibree, Bluetooth ULB, Zigbee, high-speed Bluetooth, UWB and Wireless USB. Things are all clear now, right?

So what about Wi-Fi’s big brother, WiMAX? And let us not forget HSDPA (WAP, GPRS, HSDPA on the move!), the 3G answer to broadband services. At least these can be put in a category of wide-area wireless services to separate them from near-distance wireless technologies. I have to say I find all these standards very confusing, and this makes any decision that relies on a bet about which technology will win out in the long run exceedingly risky. At least Bluetooth 3.0 and Wireless USB use the same radio!

At an industry conference I attended this morning, a speaker talked about an “arms war” between telcos and technology vendors. If you add standards bodies to this mix, I really do wonder where we consumers are placed in their priorities. Can you see PC manufacturers building all these standards onto their machines?

I could also write about WiMAX, Near Field Communications, Z-Wave and RFID but I think that is better left for another day!


WAP, GPRS, HSDPA on the move!

September 4, 2007

Over the last few months I have written many posts about Internet technologies but they have been pretty much focussed on terrestrial rather than wireless networks (other than dabbling in Wi-Fi with my overview of The Cloud – The Cloud hotspotting the planet). This exercise was rather interesting as I needed to go back to the beginning and look at how the technologies evolved, starting with The demise of ATM.

Back in 1994 a colleague of mine, Gavin Thomas, wrote about Mobile Data protocols and it’s interesting to glance back to see how ‘crude’ mobile data services were at the time. Of course, you would expect that to be the case, as GSM Digital Cellular Radio was a pretty new concept at the time as well. In that 1993 post I ended with the statement that "GSM has a bright future". Maybe it should have read "the future is Orange"! No one foresaw in those days the explosive growth of GSM and mobile phone usage that was to come. Certainly no one predicted the surge in use of SMS.

Acronym hell has extended itself to mobile services over the last few years and the market has become littered with three, four and even five letter acronyms. In particular, wireless Internet started with a three letter acronym back in the late 1990s – WAP (Wireless Application Protocol) – progressing through the four letter acronyms GPRS (General Packet Radio Service) and EDGE (Enhanced Data GSM Environment), and is now moving to a five letter broadband 3G acronym – HSDPA (High-Speed Downlink Packet Access). Phew!

The history of mobile data services has been littered with undelivered hype over the years, and some of that reputation still lives on today. However, the hype eventually led to the development of services that really do work, unlike some of the early initiatives such as WAP.

Ah, WAP, now that was interesting. I would probably put this at the top of my list of over-hyped protocols of all time. At least when ATM was hyped this only took place within the telecommunications community, whereas WAP was hyped to the world’s consumers, which made the ‘egg on the face’ much more visible for mobile operators and manufacturers.

So what was WAP?

In the late 1990s the world was agog with the Internet which was accessed using personal computers via LANs or dial-up modems. There was clearly an opportunity (whether it was right or wrong) to bring the ‘Internet’ to the mobile or cell phone. I have put quotation marks around the Internet as the mobile industry has never seen the Internet in the same light as PC users – more on this later.

The WAP initiative was aimed at achieving this goal and at least it can be credited with a concept that lives on to this day - Mobile Internet. Data facilities on mobile phones were really quite crude at the time. Displays were monochrome with a very limited resolution. Moreover, the data rates that were achievable at the time over the air were really very low so this necessitated WAP content standards to take this into account.

There were several aspects that needed standardising under the WAP banner:

  • Transmission protocols: WAP defined how packets were handled on a 2G wireless network. It consisted of wireless versions of the TCP and UDP protocols seen on the Internet and also used WTP (Wireless Transaction Protocol) to control communications between the mobile phone and the base station. WTP itself contained an error correction capability to help cope with the unreliable wireless bearer.
  • Mobile HTML: It was immediately recognised that, due to the limited screen size and the low data rates achievable on a mobile phone, a very simplified version of HTML was required for use with mobile web sites. This led to the development of WML (Wireless Markup Language). This was a VERY cut-down version of HTML with very little capability, and any graphics used had to be tiny as well. Later, WAP 2.0 was defined, which improved things somewhat and was based on a cut-down version of XHTML.

WAP clearly did not live up to its promise of a mobile version of the Internet, with its crude and constrained user interface, high latency, the need to struggle with arcane menu structures (has anything changed here in ten years?) and the exceedingly slow data rates experienced on the mobile networks of the day.

However, this did not stop mobile service operators from over-hyping WAP services, with endless hoardings and TV adverts extolling Internet access from mobiles. At one time it looked as if mobile operator advertising departments never talked to their engineering departments and were living in a world of their own that bore little relation to reality.

It all had to crash and it did, along with the ‘Internet bubble’ in 2001. Many mobile operators sold their WAP service as an ‘open’ service similar to the Internet. In reality, they were walled-garden services that forced users to visit the operator’s portal as their first port of call, making it well-nigh impossible for small application developers to get their services in front of users. One could ask how much this has changed by 2007.

I should not forget to mention that the cost of using WAP services was very high, based as it was on bits transmitted. This led to shockingly high bills and low usage, and provided one of the great motivators behind the ‘unforeseen’ growth of SMS services.

I believe that much of this still lives on in the conscious and unconscious memory of consumers and has held back major usage of mobile data services for many years.

Along comes the ‘always-on’ GPRS service

After licking their WAP wounds for several years, operators clearly recognised that something better was required if data services were to take off. One of the big issues with WAP was the poor data transmission speed achieved, so GPRS (General Packet Radio Service) was born.

GPRS is an IPv4-based, packet-switched protocol where data users share the same data channel in a cell. Increased data rates in GPRS derive from knitting together multiple TDMA time slots, where each individual GSM time slot can manage between 9.6 and 21.4 kbit/s. Linking slots together can deliver greater than 40kbit/s (up to 80kbit/s) depending on the configuration implemented.
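
As a rough illustration of that arithmetic (my own figures for a common configuration, not an operator’s planning numbers):

// Illustrative GPRS arithmetic: aggregate rate = slots bundled x rate per slot.
// The per-slot figure depends on the coding scheme (roughly 9.6 to 21.4 kbit/s);
// CS-2 at about 13.4 kbit/s was a common real-world case.
function gprsRateKbps(slotsBundled, perSlotKbps) {
  return slotsBundled * perSlotKbps;
}

console.log(gprsRateKbps(4, 13.4) + ' kbit/s with four CS-2 slots');              // ~53.6
console.log(gprsRateKbps(4, 21.4) + ' kbit/s theoretical with four CS-4 slots');  // ~85.6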

GPRS users are connected all the time and have access to the maximum bandwidth available if no other users in their cell are receiving data at the same time.

The improved data rate (in the range of an old dial-up modem) and the improved reliability experienced when using GPRS have definitely led to wider use of data services on the Internet. Incidentally, a shared packet service should mean lower costs but, as users are still billed on a kilobits-transmitted basis, GPRS bills are still shockingly high if the service is used a lot.

GPRS services are so reliable that there is widespread availability of GPRS routers, as shown in the picture above (Linksys), which are often used for LAN backup.

GPRS was definitely a step in the right direction.

Gaining an EDGE

EDGE (Enhanced Data rates for GSM Evolution) is an upgrade to GPRS that has gained some popularity in the USA and Europe and is known as a 2.5G service (although it derives from 3G standards).

EDGE can be deployed by any carrier who offers GPRS services and represents an upgrade to GPRS, requiring a swap-out to an EDGE-compatible transceiver and base station subsystem.

By using an 8PSK (8 phase shift keying) modulation scheme on each time slot it’s possible to increase the data rate within a single time slot to 48kbit/s. Thus, in theory, it would be possible, by combining all 8 time slots, to deliver an aggregate 384kbit/s data service. In practice this would not be possible, as there would be no spare bandwidth available for voice services!
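
The same sum for EDGE, purely as an illustration:

// Illustrative EDGE arithmetic: 8PSK lifts a slot to roughly 48 kbit/s, and the
// theoretical ceiling uses all eight slots - leaving nothing for voice, as noted above.
var EDGE_SLOT_KBPS = 48;
var TOTAL_SLOTS = 8;
console.log((EDGE_SLOT_KBPS * TOTAL_SLOTS) + ' kbit/s theoretical EDGE ceiling');  // 384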

All in all, EDGE achieves what it set out to achieve – higher data rates without an upgrade to full 3G capability – and it has been widely deployed.

The promise of the HSDA family

Following on from WAP, GPRS and EDGE have been the dominant protocols used for mobile data access for a number of years now. Achieved data rates are still slow by ADSL standards and this has put off many users after they have played with them for a bit.

With the tens of billions of dollars spent on 3G licences at the end of the last century, one would have imagined that we would all have access to megabit data rates on our mobile or cell phones by now, but that has just not been the case. 3G has been slow to be deployed and has presented many operational issues that needed to be resolved.

The Universal Mobile Telecommunications System (UMTS), also known as 3GSM, uses W-CDMA spread spectrum technology as its air interface and delivers its data services under the standards known as HSDPA (High-Speed Downlink Packet Access) and HSUPA (High-Speed Uplink Packet Access), known collectively as HSDA (High-Speed Data Access).

Unlike the TDMA technology used in GSM, W-CDMA is a spread spectrum technology where all users transmit ‘on top’ of each other over a wide spectrum, in this case 5MHz radio channels. The equipment identifies individual users in the aggregate stream of data through the use of unique user codes that can be detected. (I explained how spread spectrum radio works in 1992 in Spread Spectrum Radio.) The adoption of this air interface makes a 3G service incompatible with GSM.
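
For anyone who has not met spread spectrum before, here is a toy JavaScript illustration of the spreading-code principle – deliberately simplified, using a made-up 8-chip code rather than the real W-CDMA channelisation codes:

// Toy illustration of CDMA spreading - not the actual W-CDMA coding scheme.
var userCode = [1, -1, 1, 1, -1, 1, -1, -1];  // made-up 8-chip spreading code

function spread(bit) {
  // Each data bit (+1 or -1) is multiplied by every chip of the user's code.
  return userCode.map(function (chip) { return bit * chip; });
}

function despread(chips) {
  // Correlate the received chips against the same code; the sign recovers the bit.
  var sum = 0;
  for (var i = 0; i < chips.length; i++) { sum += chips[i] * userCode[i]; }
  return sum > 0 ? 1 : -1;
}

var sent = spread(-1);        // transmit a -1 data bit as 8 chips
console.log(despread(sent));  // prints -1: the receiver recovers the original bit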

In theory, W-CDMA is able to support data rates up to 14Mbit/s, but in reality offered rates are in the 384kbit/s to 3.6Mbit/s range. Data is delivered using a downlink channel called the HS-DSCH (High-Speed Downlink Shared Channel), which allows higher bit rate transmission than ordinary channels; control functions are carried on sister channels. The HS-DSCH channel is shared between all users in a cell, so in practice it would not be possible to deliver the ceiling data rate to more than a single subscriber, which makes me wonder how the industry is going to support lots of mobile TV users in a single cell. More on this issue in a future post.
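
A crude illustration of why that shared channel worries me for mobile TV – real HSDPA schedulers are far more sophisticated than an even split, so these numbers are purely illustrative:

// Purely illustrative: split a cell's headline rate evenly between active users.
function perUserKbps(cellRateMbps, activeUsers) {
  return (cellRateMbps * 1000) / activeUsers;
}

console.log(perUserKbps(3.6, 1) + ' kbit/s with a single active user');     // 3600
console.log(perUserKbps(3.6, 20) + ' kbit/s each with twenty TV viewers');  // 180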

Standardisation of HSDPA is carried out by the 3rd Generation Partnership Project (3GPP).

Inevitably, because of the ultra-slow roll-out of UMTS 3G networks, HSDPA will take a long time to get to your front door, although it is happening in quite a few countries. Here in the UK, the 3 network is currently launching (August 2007) its HSDPA data service, which will be followed by an HSUPA capability at a later date. Initially it will only offer HSDPA data cards for PCs.

Interestingly, The Register reports that 3 will offer 2.8Mbit/s and that the tariff will start at £10 a month for the Broadband Lite service providing 1Gbyte of data, rising to £25 for 7Gbytes with the Broadband Max service.

You can pre-order a broadband modem now as shown on the right.

Incidentally, Vodafone’s UK HSDPA service can be found here and their 7.2Mbit/s service here.

The future is LTE

Another project within 3GPP is the Long Term Evolution (LTE) activity as a part of Release 8. The core focus of the LTE team is, as you would expect, on increasing available bandwidths but there are a number of other concerns they are working on.

  • Reduction of latency: Latency is not an issue for streamed services but is a prime concern for interactive services. There is no point post-WAP launching advanced interactive services if users have to wait around like in the early days of the Internet. Users have been there before.
  • Cost reduction: This is pretty self-evident, but the activity is focussed on reducing operators’ deployment costs, not reducing consumer charge rates!
  • QoS capability: The ubiquitous need for policy and QoS capability, which I’ve explored in depth on fixed networks.

The System Architecture Evolution (SAE) is another project that is running in parallel with, but behind, LTE. It comes as little surprise that the SAE is looking at creating a flat all-IP network core, which will (supposedly) be the key mechanism by which operators will reduce their operating costs. This is still debatable to my mind.

Details of this new architecture can be found under the auspices of the Telecoms & Internet Services & Protocols for Advanced Networks group, or TISPAN (a six letter acronym!), which is a joint activity between ETSI and 3GPP. To quote from the web site:

Building upon the work already done by 3GPP in creating the SIP-based IMS (IP Multimedia Subsystem), TISPAN and 3GPP are now working together to define a harmonized IMS-centric core for both wireless and wireline networks.

This harmonized ALL IP network has the potential to provide a completely new telecom business model for both fixed and mobile network operators. Access independent IMS will be a key enabler for fixed/mobile convergence, reducing network installation and maintenance costs, and allowing new services to be rapidly developed and deployed to satisfy new market demands.

Based as it is on IMS (which I wrote about in IP Multimedia Subsystem or bust!) this could turn out to be a project and a half. Saying that the “devil is in the detail” would seem to be a bit of an understatement when considering TISPAN.

A recent informative PowerPoint presentation about the benefits of NGN, convergence and TISPAN can be found here.

Roundup

We seem to have come a long way since the early days of WAP, with HSDA now starting to deliver the speed of fixed-line ADSL to the mobile world. Transfer rates are indeed important, but high latency can be every bit as frustrating when using interactive services, so it is important to focus on its reduction. The challenge with 3G is its limited coverage, and this could slow uptake – that, and whether flat-rate access charges become the norm rather than the per-megabit charging we have seen in the past. And boy, I bet the inter-operator roaming charges will be high!

However, bandwidth and service accessibility are not the only issues that need addressing for the mobile Internet market to skyrocket. The platform itself is still a fundamental challenge – limited screen size and arcane menus to name but two problems. The challenge of writing applications that are able to run on the majority of phones is definitely one of the other major issues (I touched on this in Mobile apps: Java just doesn’t cut the mustard?).

I reviewed a book earlier this year entitled Mobile Web 2.0! that talks extensively about the walled-garden and protectionist attitudes still exhibited by many of the mobile operators. This has to change and there are definite signs that this is beginning to happen with fully open Internet access now being offered by the more enlightened operators.

Maybe, just maybe, if it all comes together over the next decade then the prediction in the above book – “The mobile phone network is the computer. Of course, when we say ‘phone network’ we do not mean the ‘Mobile operator network’. Rather we mean an open, Web driven application…” – could just come about.


The Cloud hotspotting the planet

July 25, 2007

I first came across the embryonic idea behind The Cloud in 2001 when I met its founder, George Polk. In those days George was the ‘Entrepreneur in Residence’ at iGabriel, an early stage VC formed in the same year.

One of his first questions was “how can I make money from a Wi-Fi hotspot business?” I certainly didn’t claim that I knew at the time but sure as eggs is eggs I guess that George, his co-founder Niall Murphy and The Cloud team are world experts by now! George often talked about environmental issues but I was sorry to hear that he had stepped down from his CEO position (he’s still on the Board) to work on climate change issues.

The vision and business model behind The Cloud is based on the not unreasonable idea that we all now live in a connected world where we use multiple devices to access the Internet. We all know what these are: PCs, notebooks, mobile phones, PDAs and games consoles etc. etc. Moreover, we want to transparently use any transport bearer that is to hand to access the Internet, no matter where we are or what we are doing. This could be DSL in the home, a LAN in the office, GPRS on a mobile phone or a Wi-Fi hotspot.

The Cloud focuses on the creation and enablement of public Wi-Fi so that consumers and business people are able to connect to the Internet wherever they may be when out and about.

One of the big issues with Wi-Fi hotspots back in the early years of the decade (and it still is one, though less so these days) was that the Wi-Fi hotspot provision industry was highly fragmented, with virtually every public hotspot being managed by a different provider. When these providers wanted to monetise their activities it seemed that you needed to set up a different account at each site you visited. This cast a big shadow over users and slowed down market growth considerably.

What was needed in the marketplace was Wi-Fi aggregators, or market consolidation, that would allow a roaming user to seamlessly access the Internet from lots of different hotspots without having to hold multiple accounts.

Meeting this need for always-on connectivity is where The Cloud is focused, and their aim is to enable wide-scale availability of public Wi-Fi access through four principal methods:

  1. Direct deployment of hotspots: (a) in coffee shops, airports, public houses etc., in partnership with the owners of these assets; (b) in wide-area locations such as city centres, in partnership with local councils.
  2. Wi-Fi extensions of existing public fixed IP networks.
  3. Wi-Fi extension of existing private enterprise networks – “co-opting networks”
  4. Roaming relationships with other Wi-Fi operators and service providers, such as with iPass in 2006.

The Cloud’s vision is to stitch together all these assets and create a cohesive and ubiquitous Wi-Fi network to enable Internet access at any location using the most appropriate bearer available.

It’s The Cloud’s activities in 1(a) above that are getting much publicity at the moment, as back in April the company announced coverage of the City of London in partnership with the City of London Corporation. The map below shows the extent of the network.

Note, however, that The Cloud will not have everything all to itself in London, as a ‘free’ Thames-based Wi-Fi network has just been launched (July 2007) by Meshhopper.

On July 18th 2007 The Cloud announced coverage of Manchester city centre as per the map below:

These network roll-outs are very ambitious and are among the largest deployments of wide-area Wi-Fi technology in the world, so I was intrigued as to how this was achieved and what challenges were encountered during the roll-out.

Last week I talked with Niall Murphy, The Cloud’s Co-Founder and Chief Strategy Officer, to catch up with what they were up to and to find out what he could tell me about the architecture of these big Wi-Fi networks.

One of my first questions in respect of the city-centre networks was about in-building coverage as even high power GSM telephony has issues with this and Wi-Fi nodes are limited to a maximum power of 100mW.

I think I already knew the answer to this, but I wanted to see what The Cloud’s policy was. As I expected, Niall explained that “this is a challenge” and consideration of this need was not part of the objective of the deployments which are focused on providing coverage in “open public spaces“. This has to be right in my opinion as the limitation in power would make this an unachievable objective in practice.

Interestingly, Niall talked about The Cloud’s involvement in OFCOM’s investigation to evaluate whether there would be any additional commercial benefit in allowing transmit powers greater than 100mW. However, The Cloud’s recommendation was not to increase power, for two reasons:

  1. Higher power would create a higher level of interference over a wider area which would negate the benefits of additional power.
  2. Higher power would negatively impact battery life in devices.

In the end, if I remember correctly, the recommendation by OFCOM was to leave the power limits as they were.

I was interested in the architecture of the city-wide networks, as I really did not know how they had gone about the challenge. I am pretty familiar with the concept of mesh networks, as I tracked the path of one of the early UK pioneers of this technology, Radiant Networks. Unfortunately, Radiant went to the wall (Radiant Networks flogged) in 2004, for reasons I assume to be concerned with the use of highly complex, proprietary and expensive nodes (as shown on the left) and the use of the 26, 28 and 40GHz bands, which would severely impact the economics due to small cell sizes.

Fortunately, Wi-Fi is nothing like those early proprietary approaches to mesh networks, and the technology has come of age due to wide-scale global deployment. More importantly, this has also led to considerably lower equipment costs. The reason for this is that Wi-Fi uses the 2.4GHz ‘free band’, and most countries around the world have standardised on the use of this band, giving Wi-Fi equipment manufacturers access to a truly global market.

Anyway, getting back to The Cloud, Niall said that “the aims behind the City of London network was to provide ubiquitous coverage in public spaces to a level of 95% which we have achieved in practice”.

The network uses 127 nodes, which are located on street lights, video surveillance poles or other street furniture owned by their partner, the City of London Corporation. Are 127 nodes enough, I ask? Niall’s answer was an emphatic “yes”, although “the 150 metre cell radius and 100mW power limitation of Wi-Fi definitely provides a significant challenge”.

Interestingly, Niall observed that deploying a network in the UK was much harder than in the US due to the lower permitted power levels in the 2.4GHz band. The Cloud’s experience has shown that a cell density two or three times greater is required in a UK city – comparing London to Philadelphia, for example. This raises a lot of interesting questions about hotspot economics!
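
As a quick back-of-the-envelope sketch of why a lower power limit (and hence a shorter usable cell radius) pushes node counts up so sharply – these are idealised circles with no overlap or obstructions, so a real network needs far more nodes than this suggests, and they are not The Cloud’s planning figures:

// Idealised coverage arithmetic - purely illustrative, not The Cloud's planning model.
function cellsNeeded(areaSqMetres, cellRadiusMetres) {
  return Math.ceil(areaSqMetres / (Math.PI * cellRadiusMetres * cellRadiusMetres));
}

var squareMile = 2.59e6;  // the City of London is roughly one square mile
console.log(cellsNeeded(squareMile, 150) + ' ideal cells at a 150 metre radius');
console.log(cellsNeeded(squareMile, 100) + ' ideal cells if the usable radius drops to 100 metres');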

Much time was spent on hotspot planning, and this was achieved in partnership with a Canadian company called Belair Networks. One of the interesting aspects of this activity was that there was “serious head scratching” by Belair as, being a Canadian company, they were used to nice neat square grids of streets and not the no-straight-line topology mess of London!

Data traffic from the 127 nodes that form The Cloud’s City of London network is back-hauled to seven 100Mbit/s fibre PoPs (Points of Presence) using 5.6GHz radio. Thus each node has two transceivers. The first is the Wi-Fi transceiver, with a 2.4GHz antenna trained on the appropriate territory. The second is a 5.6GHz transceiver pointing to the next node, where the traffic daisy-chains back to the fibre PoP, effectively creating a true mesh network (incidentally, backhaul is one of the main uses of WiMax technology). I won’t talk about the strengths and weaknesses of mesh radio networks here but will write a post on this subject at a future date.

According to Niall, the tricky part of the build was finding appropriate sites for the nodes. You might think this was purely due to radio propagation issues, but there was also the issue that the physical assets they were using didn’t always turn out to be where they appeared to be on the maps! “We ended up arriving at the street lamp indicated on the map and it was not there!” This is much like many carriers, who do not know where some of their switches are located or how many customer leased lines they have in place.

Another interesting anecdote concerned the expectations of journalists at the launch of the network. “Because we were talking about ubiquitous coverage, many thought they could jump in a cab and watch Joost streaming video as they weaved their way around the city”. Oh, so it didn’t work then, I said to Niall, expecting him to say that they were disappointed. “No,” he said, “it absolutely worked!”

Niall says the network is up and running and working according to their expectations: “There is still a lot of tuning and optimisation to do but we are comfortable with the performance.”

Incidentally, The Cloud owns the network and works with the Corporation of London as the landlord.

Round up

The Cloud really does seem to have achieved a lot this year, with the roll-out of the city centre networks and the sign-up of 6 to 7 thousand users in London alone. This was backed up by the launch of UltraWiFi, a flat-rate service costing £11.99 per month.

Incidentally, The Cloud do not see themselves as being in competition with cable companies or mobile operators, concentrating as they do on providing pure Wi-Fi access to individuals on the move – although in many ways their service actually does compete.

They operate in the UK, Sweden, Denmark, Norway, Germany and The Netherlands. They’re also working with a wide array of service providers, including O2, Vodafone, Telenor, BT, iPass, Vonage and Nintendo, amongst others.

The big challenge ahead, as I’m sure they would acknowledge, is how they are going to ramp up revenues and take their business into the big time. I am confident that they are well able to accept this challenge and exceed it. All I know is that public Wi-Fi access is a crucial capability in this connected world and without it the Internet world will be a much less exciting and usable place.


IP Multimedia Subsystem or bust!

May 10, 2007

I have never felt so uncomfortable writing about a subject as I do now while contemplating IP Multimedia Subsystem (IMS). Why this should be I’m not quite sure.

Maybe it’s because one of the thoughts it triggers is the subject of Intelligent Networks (IN) that I wrote about many years ago – The Magic of Intelligent Networks. I wrote at the time:

“Looking at Intelligent Networks from an Information Technology (IT) perspective can simplify the understanding of IN concepts. Telecommunications standards bodies such as CCITT and ETSI have created a lot of acronyms which can sometimes obfuscate what in reality is straightforward.”

This was an initiative to bring computers and software to the world of voice switches, enabling carriers to develop advanced consumer services on their voice switches and SS7 signalling networks. To quote an old article:

“Because IN systems can interface seamlessly between the worlds of information technology and telecommunications equipment, they open the door to a wide range of new, value added services which can be sold as add-ons to basic voice service. Many operators are already offering a wide range of IN-based services such as non-geographic numbers (for example, freephone services) and switch-based features like call barring, call forwarding, caller ID, and complex call re-routing that redirects calls to user-defined locations.”

Now there was absolutely nothing wrong with that vision and the core technology was relatively straightforward (database-lookup number translation). The problem in my eyes was that it was presented as a grand take-over-the-world strategy and a be-all-and-end-all vision when in reality it was a relatively simple idea. I wouldn’t say IN died a death, it just fizzled out. It didn’t really disappear as such, as most of the IN-related concepts became reality over time as computing and telephony started to merge. I would say it morphed into IP telephony.

Moreover, what lay at the heart of IN was the view that intelligence should be based in the network, not in applications or customer equipment. The argument about dumb networks versus Intelligent networks goes right back to the early 1990s and is still raging today – well at least simmering.

Put bluntly, carriers laudably want intelligence to be based in the network so they are able to provide, manage and control applications and derive revenue that will compensate for plummeting Plain Old Telephone Service (POTS) revenues. Most IT and Internet people do not share this vision, believing that it holds back service innovation, which generally comes from small companies. There is a certain amount of truth in this view, as there are clear examples of it happening today in both the fixed and mobile industries.

Maybe I feel uncomfortable with the concept of IMS as it looks like the grandchild of IN. It certainly seems to suffer from the same strengths and weaknesses that affected its progenitor. Or, maybe it’s because I do not understand it well enough?

What is IP Multimedia Subsystem (IMS)?

IMS is an architectural framework or reference architecture – not a standard – that provides a common method for IP multiple media (I prefer this term to multimedia) services to be delivered over existing terrestrial or wireless networks. In the IT world – and the communications world come to that – a good part of this activity could be encompassed by the term middleware. Middleware is an interface (abstraction) layer that sits between the networks and the applications / services and provides a common Application Programming Interface (API).
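To give a flavour of what a middleware layer with a common API means in practice, here is a minimal, hypothetical sketch. The class and method names are my own invention, not anything defined by IMS, but the principle is the same: the application sees one interface regardless of the access network underneath.

```python
# Hypothetical sketch of the middleware idea: applications call one common API
# and the layer underneath hides whether the session runs over DSL or 3G.
from abc import ABC, abstractmethod

class AccessNetwork(ABC):
    @abstractmethod
    def establish_bearer(self, user: str, media: str) -> str: ...

class DslNetwork(AccessNetwork):
    def establish_bearer(self, user, media):
        return f"DSL bearer for {user} carrying {media}"

class UmtsNetwork(AccessNetwork):
    def establish_bearer(self, user, media):
        return f"3G bearer for {user} carrying {media}"

def start_session(network: AccessNetwork, user: str, media: str) -> str:
    # The application only ever makes this one call, whatever sits below.
    return network.establish_bearer(user, media)

print(start_session(DslNetwork(), "alice@example.com", "video"))
print(start_session(UmtsNetwork(), "alice@example.com", "voice"))
```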

The commercial justification of IMS is to enable the development of advanced multimedia applications whose revenue would compensate for dropping telephony revenues and reduce customer churn.

The technical vision of IMS is about delivering seamless services where customers are able to access any type of service, from any device they want to use, with single sign-on, with common contacts and with fluidity between wireline and wireless services. IMS has ambitions about delivering:

  • Common user interfaces for any service
  • Open application server architecture to enable a ‘rich’ service set
  • Separate user data from services for cross service access
  • Standardised session control
  • Inherent service mobility
  • Network independence
  • Inter-working with legacy IN applications

One of the comments I came across on the Internet from a major telecoms equipment vendor was that IMS was about the “Need to create better end-user experience than free-riding Skype, Ebay, Vonage, etc.”. This, in my opinion, is an ambition too far, as innovative services such as those mentioned generally do not come out of the carrier world.

Traditionally, each application or service offered by carriers sits alone in its own silo, calling on all the resources it needs, using proprietary signalling protocols, and running in complete isolation from every other service, each of which sits in its own silo. In many ways this reflects the same situation that provided the motivation to develop a common control plane for data services, GMPLS. Vertical service silos will be replaced with horizontal service, control and transport layers.


Removal of service silos
Source: Business Communications Review, May 2006

As with GMPLS, most large equipment vendors are committed to IMS and supply IMS compliant products. As stated in the above article:

“Many vendors and carriers now tout IMS as the single most significant technology change of the decade… IMS promises to accelerate convergence in many dimensions (technical, business-model, vendor and access network) and make “anything over IP and IP over everything” a reality.”

Maybe a more realistic view is that IMS is just an upgrade to the softswitch VoIP architecture outlined in the 1990s – albeit a trifle more complex. This is the view of Bob Bellman in an article entitled From Softswitching To IMS: Are We There Yet? Many of the core elements of a softswitch architecture are to be found in the IMS architecture, including the separation of the control and data planes.

VoIP SoftSwitch Architecture
Source: Business Communications Review, April 2006

Another associated reference architecture that is aligned with IMS, and is being widely pushed by software and equipment vendors in the enterprise world, is Service Oriented Architecture (SOA), an architecture that focuses on services as the core design principle.

IMS has been developed by an industry consortium and originated in the mobile world as an attempt to define an infrastructure that could be used to standardise the delivery of new UMTS or 3G services. The original work was driven by 3GPP and later extended by 3GPP2 and TISPAN. Nowadays, just about every standards body seems to be involved, including the Open Mobile Alliance, ANSI, ITU, IETF, the Parlay Group and the Liberty Alliance – fourteen in total.

Like all new initiatives, IMS has developed its own mega-set of T/F/FLAs (three, four and five letter acronyms), which makes getting to grips with the architectural elements hard going without a glossary. I won’t go into this much here as there are much better Internet resources available. The reference architecture focuses on a three-layer model:

#1 Applications layer:

The application layer contains Application Servers (AS) which host each individual service. Each AS communicates with the control plane using the Session Initiation Protocol (SIP). As in GSM, an AS can interrogate a database of users to check authorisation. The database is called the Home Subscriber Server (HSS), or an HSS in a 3rd-party network if the user is roaming (in GSM this is called the Home Location Register (HLR)).

(Source: Lucent Technologies)

The application layer also contains Media Servers for storing and playing announcements and other generic applications not delivered by individual ASs, such as media conversion.

Breakout Gateways provide routing information based on telephone number look-ups for services accessing a PSTN. This is similar functionality to that found in the IN systems discussed earlier.

PSTN gateways are used to interface to PSTN networks and include signalling and media gateways.
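As a rough illustration of the application-layer flow described above – an AS checking a user against the HSS before delivering a service – here is a toy sketch. The subscriber records and service names are invented, and the real AS–HSS interaction is a Diameter (Sh) query rather than a dictionary look-up.

```python
# Toy illustration of an Application Server consulting the HSS before serving
# a request. Subscriber data and service names are invented for illustration.

HSS = {
    "sip:alice@home.example": {"services": {"presence", "push-to-talk"}},
    "sip:bob@home.example":   {"services": {"presence"}},
}

def application_server(request_uri: str, service: str) -> str:
    profile = HSS.get(request_uri)          # in reality a Diameter Sh query
    if profile is None:
        return "403 Forbidden (unknown subscriber)"
    if service not in profile["services"]:
        return "403 Forbidden (service not subscribed)"
    return f"200 OK ({service} session established for {request_uri})"

print(application_server("sip:alice@home.example", "push-to-talk"))
print(application_server("sip:bob@home.example", "push-to-talk"))
```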

#2 Control layer:

The control plane hosts the HSS, which is the master database of user identities and of the individual calls or service sessions currently being used by each user. There are several roles that a SIP call / session controller (CSCF) can undertake (a minimal routing sketch follows the list):

  • P-CSCF (Proxy-CSCF): provides similar functionality to a proxy server in an intranet
  • S-CSCF (Serving-CSCF): the core SIP server, always located in the home network
  • I-CSCF (Interrogating-CSCF): a SIP server located at a network’s edge, whose address can be found in DNS by 3rd-party SIP servers.
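To show how these three roles fit together, here is the promised stripped-down sketch of a request passing from the P-CSCF in the visited network, via an I-CSCF found through DNS, to the S-CSCF in the home network. The domain names are made up, and real signalling is of course full SIP rather than Python dictionaries.

```python
# Stripped-down sketch of session setup through the three CSCF roles.
# Domain names are invented; real IMS signalling is full SIP / Diameter.

DNS = {"home.example": "i-cscf.home.example"}   # edge entry point per domain

def p_cscf(request):
    """First contact point in the visited network: forward towards home."""
    home_domain = request["to"].split("@")[1]
    return i_cscf(DNS[home_domain], request)

def i_cscf(address, request):
    """Edge SIP server of the home network: hand over to the serving S-CSCF."""
    request["via"] = [address] + request.get("via", [])
    return s_cscf(request)

def s_cscf(request):
    """Core SIP server in the home network: apply services and route on."""
    return f"INVITE {request['to']} routed via {request['via']}"

print(p_cscf({"to": "sip:alice@home.example"}))
```

In the real architecture the I-CSCF also consults the HSS to find which S-CSCF has been assigned to the subscriber; that step is omitted here for brevity.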

#3 Transport layer:

IMS encompasses any service that uses IP / MPLS as transport and pretty much all of the fixed and mobile access technologies, including ADSL, cable modem DOCSIS, Ethernet, Wi-Fi, WiMax and CDMA wireless. It has little choice in this matter: if IMS is to be used, it needs to incorporate all of the currently deployed access technologies. Interestingly, as we saw in the DOCSIS post – The tale of DOCSIS and cable operators – IMS is also focusing on the use of IPv6, with IPv4 ‘only’ being supported in the near term.

Roundup

IMS represents a tremendous amount of work spread over six years and reuses as many existing standards as possible, such as SIP and Parlay. IMS is a work in progress and much still needs to be done – security and seamless inter-working of services are but two examples.

All the major telecommunications software, middleware and integration companies are involved, and just thinking about the scale of the task needed to put common control in place for a whole raft of services makes me wonder just how practical the implementation of IMS actually is. Don’t get me wrong, I am a real supporter of these initiatives because it is hard to come up with an alternative vision that makes sense, but boy am I glad that I’m not in charge of a carrier IMS project!

The upsides of using IMS in the long term are pretty clear and focus around lowering costs, quicker time to market, integration of services and, hopefully, single log-in.

It’s some of the downsides that particularly concern me:

  • Non-migration of existing services: As we saw in the early days of 3G, there are many services that would need to come under the umbrella of an IMS infrastructure, such as instant conferencing, messaging, gaming, personal information management, presence, location-based services, IP Centrex, voice self-service, IPTV, VoIP and many more. But, in reality, how do you commercially justify migrating existing services in the short term onto a brand new infrastructure – especially when that infrastructure is based on an incomplete reference architecture?

    IMS is a long-term project that will be redefined many times as technology changes over the years. It is clearly an architecture that represents a vision for the future and can be used to guide and converge new developments, but it will be many years before carriers are running seamless IMS-based services – if they ever do.

  • Single vendor lock-in: As with all complicated software systems, most IMS implementations will be dominated by a single equipment supplier or integrator. “Because vendors won’t cut up the IMS architecture the same way, multi-vendor solutions won’t happen. Moreover, that single supplier is likely to be an incumbent vendor.” This quote comes from Keith Nissen of InStat in a BCR article.
  • No launch delays: No product manager would delay the launch of a new service on the promise of jam tomorrow. While the IMS architecture is incomplete, services will continue to be rolled out without IMS, further inflaming the non-migration issue raised above.
  • Too ambitious: Is the vision of IMS just too ambitious? Integration of nearly every aspect of service delivery will be a challenge and a half for any carrier to undertake. It could be argued that while IT staff are internally focused on getting IMS integration sorted, they should really be working on externally focused services. Without these services, customers will churn no matter how elegant a carrier’s internal architecture may be. Is IMS Intelligent Networks reborn, destined to suffer the same fate?
  • OSS integration: Any IMS system will need to integrate with a carrier’s often proprietary OSS systems. This compounds the challenge of implementing even a limited IMS trial.
  • Source of innovation: It is often said that carriers are not the breeding ground of new, innovative services. That role lies with small companies on the Internet creating Web 2.0 services that utilise technologies such as presence, VoIP and AJAX. Will any of these companies care whether a carrier has an IMS infrastructure in place?
  • Closed shops – another walled garden?: How easy will it be for external companies to come up with a good idea for a new service and be able to integrate with a particular carrier’s semi-proprietary IMS infrastructure?
  • Money sink: Large integration projects like IMS often develop a life of their own once started and can often absorb vast amounts of money that could be better spent elsewhere.

I said at the beginning of the post that I felt uncomfortable writing about IMS, and now that I have finished I am even more uncomfortable. I like the vision – how could I not? It’s just that I have to question how useful it will be at the end of the day, and whether it diverts effort, money and limited resources away from where they should be applied – creating interesting services and gaining market share. Only time will tell.

Addendum:  In a previous post, I wrote about the IETF’s Path Computation Element Working Group and it was interesting to come across a discussion about IMS’s Resource and Admission Control Function (RACF) which seems to define a ‘similar’ function. The RACF includes a Policy Decision capability and a Transport Resource Control capability. A discussion can be found here starting at slide 10. Does RACF compete with PCE or could PCE be a part of RACF?


iotum’s Talk-Now is now available!

April 4, 2007

In a previous post The magic of ‘presence’, I talked about the concept of presence in relation to telecommunications services and looked at different examples of how it had been implemented in various products.

One of the most interesting companies mentioned was iotum, a Canadian company. iotum had developed what they called a relevance engine, which enabled ‘ability to talk’ and ‘willingness to talk’ information to be provided within a telecom service by attaching it to appropriate equipment such as a Private Branch eXchange (PBX) or a call-centre Automatic Call Distribution (ACD) manager.

One of the biggest challenges for any company wanting to translate presence concepts into practical services is how to make it usable, rather than just a fancy concept used to describe a number of peripheral and often unusable features of a service. Alec Saunders, iotum’s founder, has been articulating his ideas about this in his blog Voice 2.0: A Manifesto for the Future. Like all companies that have their genesis in the IT and applications world, Alec believes that “Voice 2.0 is a user-centric view of the world… ‘it’s all about me’ — my applications, my identity, my availability.”

And rather controversially, if you come from the network or the mobile industry: “Voice 2.0 is all about developers too — the companies that exploit the platform assets of identity, presence, and call control. It’s not about the network anymore.” Oh, by the way, just to declare my partisanship, I certainly go along with this view and often find that the stove-pipe and closed attitudes sometimes seen in mobile operators are one of the biggest hindrances to the growth of data-related applications on mobile phones.

There is always a significant technical and commercial challenge in OEMing platform-based services to service providers and large mobile operators, so the launch of a stand-alone service that is under the complete control of iotum is not a bad way to go. Any business should have full control of its own destiny, and the choice of the relatively open Blackberry platform gives iotum a user base they can clearly focus on to develop their ideas.

iotum launched the beta version of Talk-Now in January. It provides a set of features aimed at helping Blackberry users make better use of the device that the world has become addicted to over the last few years. Let’s talk turkey: what does the Talk-Now service do?

According to the web site, as seen in the picture on the left, it provides a simple-in-concept bolt-on service that lets Blackberry phone users see and share their availability status with other users.

In use, the Talk-Now service interacts with a Blackberry user’s address book by adding colour coding to contact names to show each individual’s availability. On initial release only three colours were used: white, red and green.

Red and green clearly show when a contact is either Not Available or Available; I’ll talk about white in a minute. Yellow was added later, based on user feedback, to indicate an Interruptible status.

The idea behind Talk-Now is that it helps users reduce the amount of time they waste on non-productive calls and leaving voicemails. You may wonder how this availability guidance is provided by users. A contact with a white background provides the first indication of how this is achieved.

Contacts with a white background are not Talk-Now users, so their availability information is not available (!). One of the key features of the service is therefore an Invite People process to get them to use Talk-Now and see your availability information.

If you wish a non-Talk-Now contact to see your availability, you can select their name from the contact list and send them an “I want to talk with you” email. The email provides a link to an Availability Page, as shown below, explains the benefits of using the service (I assume) and asks the recipient to sign up. The page is secure and only available to that contact, and then only for a short time.

Once a contact accepts the invite and signs up to the service, you will be able to see their availability – assuming that they set up the service.

So, how do you indicate your availability? This is done with a small menu, as shown on the left, through which you can set your status information.

Busy: set your free/busy status manually from your BlackBerry device

In a meeting: iotum Talk-Now synchronizes with your BlackBerry calendar to know if you are in a meeting.

At night: define which hours you consider to be night time.

Blocked group: you can add contacts to the “blocked” group.

You can also set up VIPs (Very Important Persons), individuals who receive priority treatment. This category needs to be used with care, as granting VIP status to a group overrides the unavailability settings you have made. You can also define Workdays: some groups might be VIPs during work hours, while other groups might get VIP status outside of work. This is designed to help you better manage your personal and business communications. A toy sketch of how these rules might combine is shown below.
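Here is that sketch of the availability logic. The precedence (VIP, then blocked, then manual busy, then calendar, then night hours) is my own guess at how such rules might combine, not iotum’s actual implementation, and the Interruptible (yellow) state is left out for simplicity.

```python
# Toy availability evaluator in the spirit of Talk-Now's rules. The precedence
# is my own guess, not iotum's actual implementation.
from datetime import datetime, time

def availability(contact, user, now=None):
    now = now or datetime.now()
    if contact in user["vips"]:
        return "green"                      # VIPs override unavailability
    if contact in user["blocked"]:
        return "red"
    if user["manual_busy"]:
        return "red"                        # free/busy set manually on the device
    if any(start <= now < end for start, end in user["meetings"]):
        return "red"                        # calendar says we are in a meeting
    if now.time() >= user["night_starts"] or now.time() < user["night_ends"]:
        return "red"                        # user-defined night-time hours
    return "green"

me = {"vips": {"boss"}, "blocked": {"spammer"}, "manual_busy": False,
      "meetings": [], "night_starts": time(22, 0), "night_ends": time(7, 0)}

print(availability("boss", me))     # green, even when other rules say busy
print(availability("spammer", me))  # red
```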

There is also a feature whereby you can be alerted when a contact becomes available by a message being posted on your Blackberry as shown on the right.


Many of the above settings can also be configured via a web page, for example:

Setting your working week

Setting contact groups

However, it should be remembered that, like Plaxo and LinkedIn, this web-based functionality does require you to upload – ‘synchronise’ – your Blackberry contact list to the iotum server, and many Blackberry users might object to this. It should be noted that the calendar is also accessed, to determine when you are in meetings and deemed busy.

If you want to hear more, then take a look at the video that was posted after a visit with Alec Saunders and the team by Blackberry Cool last month:

Talk-Now looks to be an interesting and well thought out service. Following traditional Web 2.0 principles, the service is provided for free today with the hope that iotum will be able to charge for additional features at a future date.

I wish them luck in their endeavours and will be watching intently to see how they progress in the coming months.


GSM pico-cell’s moment of fame

March 21, 2007

Back in May 2006, the DECT – GSM guard bands, 1781.7-1785 MHz and 1876.7-1880 MHz, originally set up to protect cordless phones from interference by GSM mobiles, were made available to a number of licensees. As is the fashion, these allocations were offered to industry through an auction with a reserve price of £50,000 per license. In fact, it was Ofcom’s first auction and I guess they were happy with the results, although I would not have liked to have been in charge of Colt’s bidding team when it came to reporting to their Board after the auction!

For those of a technical bent, you can see Ofcom’s technical study here, in a document entitled Interference scenarios, coordination between licensees and power limits, and there is also a good overview of cellular networks on Wikipedia.

The most important restriction on the use of this spectrum was outlined by the study:

This analysis confirms that a low power system based on GSM pico cells operating at the 23dBm power (200mW) level can provide coverage in an example multi-storey office scenario. Two pico cells per floor would meet the coverage requirements in the example 50m × 120m office building. For a population of 300 people per floor, the two pico cells would also meet the traffic demand.
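As a quick sanity check, the 23dBm and 200mW figures in the quote are the same number expressed two ways, since dBm is simply milliwatts on a logarithmic scale:

```python
# Quick sanity check on the study's figures: power in mW = 10 ** (dBm / 10).
def dbm_to_mw(dbm: float) -> float:
    return 10 ** (dbm / 10)

print(dbm_to_mw(23))   # ~199.5 mW, i.e. the 200mW pico cell level quoted
print(dbm_to_mw(33))   # ~1995 mW (~2W), a classic GSM900 handset's peak, for comparison
```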

It was all done using sealed bids so there was inevitably a wide spectrum (sorry for the pun!) of responses ranging from just over £50,000 to the highest, Colt, who bid £1,513,218. The 12 companies winning licenses were:

British Telecommunications £275,112
Cable & Wireless £51,002
COLT Mobile Telecommunications £1,513,218
Cyberpress Ltd £151,999
FMS Solutions Ltd £113,000
Mapesbury Communications £76,660
O2 £209,888
Opal Telecom £155,555
PLDT £88,889
Shyam Telecom UK £101,011
Spring Mobil £50,110
Teleware £1,001,880

One company that focuses on the supply of GSM and 3G picocells is IP Access, based in Cambridge. Their nanoBTS base station can be deployed in buildings, shopping centres, transport terminals, homes, underground stations, and rural and remote locations; in fact, almost anywhere – according to their web site.

Many of the bigger suppliers of GSM and 3G equipment manufacture pico cell platforms as well, Nortel for example.

Of course, even though a pico cell base station (BTS) is lower cost than a standard base station, that is not the end of the costs. A pico cell operator still needs to install a base station controller (BSC), which can control a number of base stations, plus a home location register (HLR), which stores the current state of each mobile phone in a database. If the network needs to support roaming customers, a visitor location register (VLR) is also required. On top of this, interconnect equipment to other mobile operators is needed. All this does not come cheap!
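To make that shopping list a little more concrete, here is a very rough sketch of how those elements relate to each other; the attributes are heavily simplified and the identifiers are invented purely for illustration.

```python
# Very rough sketch of the network elements a pico-cell operator still needs.
# Attributes are simplified and identifiers invented purely for illustration.
from dataclasses import dataclass, field

@dataclass
class BTS:                      # pico base station serving one small area
    site: str

@dataclass
class BSC:                      # controls a number of base stations
    base_stations: list = field(default_factory=list)

@dataclass
class HLR:                      # master record of the operator's own subscribers
    subscribers: dict = field(default_factory=dict)   # IMSI -> current location

@dataclass
class VLR:                      # temporary record of visiting (roaming) users
    visitors: set = field(default_factory=set)

bsc = BSC(base_stations=[BTS("office floor 1"), BTS("office floor 2")])
hlr = HLR(subscribers={"234150000000001": "office floor 1"})
vlr = VLR(visitors={"208010000000099"})   # a roaming subscriber, say

print(len(bsc.base_stations), "pico BTSs under one BSC;",
      len(hlr.subscribers), "home subscriber;", len(vlr.visitors), "visitor")
```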

Pico GSM cells are low power versions of their big brothers and are usually associated with a lower-cost backhaul technology based on IP in place of traditional point to point microwave links. GSM Pico cell technology can be used in a number of application scenarios.

In-building use as the basis of ‘seamless’ fixed-mobile voice services. The use of mobile phones as a replacement to fixed telephones has always been a key ambition for mobile operators. But, as we all know, in-building coverage by cellular operators is often not too good leading to the necessity of taking calls near windows or on balconies. The installation of an in-building pico-cell is one way of providing this coverage and comes under the heading of fixed mobile integration. One challenge in this scenario is the possible need to manually swap SIM cards when entering or exiting the building if a different operator is used inside the building to that outside. Of course, nobody would be willing to do this physically so a whole industry has been born to support dual SIM cards which can be selected from a menu option.

From usability and interoperability perspectives, fixed-mobile integration still represents a major industry challenge. Not the least of the problems is that a swap from one operator to another could trigger roaming charges. This is probably an application area that only the bigger license winners will participate in.

On ships using satellite backhaul: This has always been an obvious application for pico cells, especially for cruise ships.

On aeroplanes: In-cabin use of mobile phones is much more contentious than use on ships and I could write a complete post on this particular subject! But I guess it is inevitable, no matter how irritating it would be to fellow passengers – no bias here! See, for example, OnAir and their agreement with Airbus.

Overseas network extensions: I was interested in finding out how some of the winners of the Ofcom auction were getting on now that they held some prime spectrum in the UK, so I talked with Magnus Kelly, MD at Mapesbury (MCom). I’m sure they were happy with what they paid, as they were at the ‘right end’ of the price spectrum.

Mapesbury are a relatively small service provider set up in 2002 to offer data, voice and wireless connectivity services to local communities. In 2003, MCom acquired the assets of Forecourt Television Limited, also known as FTV. FTV had a network of advertising screens on a selection of petrol station forecourts, among them Texaco. This was when I first met Magnus. Using this infrastructure, they later signed a contract with T-Mobile UK to offer a Wi-Fi service in selected Texaco service stations across the UK.

More recently, they opened their first IP.District, providing complete Wi-Fi coverage over Watford, Herts using 5.8GHz spectrum. MCom has had pilot users testing the service for the last 12 months.

Magnus was quite ebullient about how things were going on the pico-cell front although there were a few sighs when talking about organising and negotiating the allocation of the required number blocks and point codes necessitated by the trials that they have been running.

He emphasised that the technology seemed to work well and that the issues they now faced were the same as for any company in their circumstances: creating a business that makes money. They have looked at a number of applications and decided that fixed-mobile integration is probably best left to the major mobile operators.

They are enamoured of the opportunities presented by what are called overseas network extensions. In essence, this means creating what can be envisioned as ‘bubble’ extensions to non-UK mobile networks in the UK. The traffic generated in these extensions can then be backhauled using low-cost IP pipes. The core value proposition is low-cost mobile telephone calls aimed at dense local clusters of mobile phone users. For example, these could be clusters of UK immigrants who would like the ability to make low-cost calls from their mobile phones back to their home countries. Clearly, in these circumstances, these pico-cell GSM bubbles would be focused on selected city suburbs, following the same subscriber-density logic that drives Wi-Fi cell deployment.

In the large mobile operator camp, O2 announced in November 2006 that they would offer indoor, low-power GSM base stations connected via broadband as part of their fixed-mobile convergence (FMC) strategy, an approach that will let customers use standard mobile phones in the office. I’ll be writing a post about FMC in the future.

Although it is early days for GSM pico cell deployment in the UK, it looks like it could have a healthy future, although this should not be taken for granted. There are a host of technical, commercial, regulatory and political challenges to the seamless use of phones inside and outside of buildings. There are also other technology solutions – IT-based rather than wireless – for reducing mobile phone bills. An example of such an innovative approach is supplied by OnRelay.

