Content transcoding hits mobiles

October 18, 2007

Content adaptation and transcoding are high on the agenda of many small mobile content and services companies at the moment and are causing more bad language and angst than anything else I can remember in the industry in recent times. Before I delve into that issue, what is content adaptation?

Content translation, and the need for it on the Internet, is as old as the invention of the browser and is caused by standards, or I should say the interpretation of them. Although HTML, the language of the web page, transformed the nature of the Internet by enabling anyone to publish and access information through the World Wide Web, there were many areas of the specification that left a sufficient degree of fogginess for browser developers to ‘fill in’ with their own interpretation of how content should be displayed.

In the early days, most of us engaged with the WWW through the Netscape Navigator browser. Netscape epitomised all the early enthusiasm for the Internet and its IPO on August 9, 1995 set in play the fabulously exciting ‘bubble’ of the late 1990s. The Netscape browser held over a 90% market share in the years following the IPO.

This inherent market monopoly made it very easy for early web page developers to develop content as it only needed to run on one browser. However, that did not make life particularly easy because the Netscape Navigator browser had so many problems in how it arbitrarily interpreted HTML standards. In practice, a browser is only an interpreter after all and, like human interpreters, is prone to misinterpretation when there are gaps in the standards.

Browser market shares (source: Wikipedia)

Content Adaptation

Sometimes the drafted HTML displayed fine in Navigator but at other times it didn’t. This led to whole swathes of workarounds that made the task of developing interesting content a rather hit and miss affair. A good example of this is the HTML standard that says that the TABLE tag should support a CELLSPACING attribute to define the space between parts of the table. But the standard doesn’t define the default value for that attribute, so unless you explicitly define CELLSPACING when building your page, two browsers may use different amounts of white space in your table (credit: NetMechanic).
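To avoid being at the mercy of each browser’s default, you simply set the attribute yourself. The snippet below is a minimal sketch of that idea, using the DOM rather than static markup; the spacing value and table content are illustrative only:

// Build a table and give CELLSPACING an explicit value rather than
// relying on whatever default the browser happens to choose.
var table = document.createElement("table");
table.cellSpacing = "2"; // explicit spacing in pixels (illustrative value)
table.border = "1";
var row = table.insertRow(0);
var cell = row.insertCell(0);
cell.innerHTML = "Same white space in every browser";
document.body.appendChild(table);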

This type of problem was further complicated by the adoption of browser-specific extensions. The original HTML specifications were rather basic and it was quite easy to envision and implement extensions that enabled better presentation of content. Netscape did this with abandon and even invented a web page scripting language that is universal today – JavaScript (which has nothing to do with Sun’s Java language).

Early JavaScript was riddled with problems and, from my limited experience of writing in the language, most of the time was spent trying to understand why code that looked correct according to the rule book failed to work in practice!

Around this time I remember attending a Microsoft presentation in Reston where Bill Gates spent an hour talking about why Microsoft were not in favour of the Internet and why they were not going to create a browser themselves. Oh how times change: within a year Bill Gates announced that the whole company was going to focus on the Internet and that their browser would be given away free to “kill Netscape”.

In fact, I personally lauded Internet Explorer when it hit the market because, in my opinion, it actually worked very well. It was faster than Navigator but, more importantly, when you wrote the HTML or JavaScript the code worked as you expected it to. This made life so much easier. The problem was that you now had to write pages that would run on both browsers or you risked alienating a significant sector of your users. As there still are today, there were many users who flatly refused to change from Navigator to IE because of their emotional dislike of Microsoft.

From that point on it was downhill for a decade as you had to include browser detection on your web site so that appropriately coded browser-specific, and even worse version-specific, content could be sent to users. Without this, it was just not possible to guarantee that users would be able to see your content. Below is the typical code you had to use:

// Check which browser is running and greet the user accordingly
var browserName = navigator.appName;
if (browserName == "Netscape")
{
 alert("Hi Netscape User!");
}
else
{
 if (browserName == "Microsoft Internet Explorer")
 {
  alert("Hi, Explorer User!");
 }
}

If we now fast forward to 2007, the world of browsers has changed tremendously but the problem has not gone away. Although it is less common to detect browser types and send browser-specific code, considerable problems still exist in making content display in the same way on all browsers. I can say from practical experience that making an HTML page with extensive style sheets display correctly on Firefox, IE 6 and IE 7 is not a particularly easy task, and it is definitely a frustrating one!

The need to adapt content to a particular browser was the first example of what is now called content adaptation. Another technology in this space is called content transcoding.

Content transcoding

I first came across true content transcoding when I was working with the first real implementation of a Video on Demand service in Hong Kong Telecom in the mid 1990s. This was based on proprietary technology and a colleague and I were of the opinion that it should be based on IP technologies to be future proof. Although we lost that battle we did manage to get Mercury in the UK to base its VoD developments on IP. Mercury went on to sell its consumer assets to NTL so I’m pleased that the two of us managed to get IP adopted as the basis of broadband video services in the UK at the time.

Around this time, Netscape were keen to move Navigator into the consumer market, but it was too bloated to run on a set top box, so Netscape created a new division called Navio to develop a cut-down browser for the set top box consumer market. Their main aim, however, was to create a range of non-PC Internet access platforms.

This was all part of the anti-PC / Microsoft community that then existed (exists?) in Silicon Valley. Navio morphed into Network Computer Inc. owned by Oracle and went on to build another icon of the time – the network computer. NCI changed its name to Liberate when it IPOed in 1999. Sadly, Liberate went into receivership in the early 2000s but lives on today in the form of SeaChange who bought their assets.

Anyway, sorry for the sidetrack, but it was through Navio that I first came across the need to transcode content, as a normal web page just looked awful on a TV set. The TV Navigator browser also transcoded HTML seamlessly into MPEG. The main problems in presenting a web page on a TV were:

Fonts: Text that could be read easily on a PC could often not be read on a TV because the font size was too small or the font was too complex. So, fonts were increased in size and simplified.

Images: Another issue was that the small amount of memory on an STB meant that the browser needed to be cut down in size to run. One way of achieving this was to cut down the number of content types that could be supported. For example, instead of the browser being able to display all picture formats (e.g. BMP, GIF, JPG), it would only render JPG pictures. This meant that pictures taken off the web needed to be converted to JPG at the server or head-end before being sent to the STB.

Rendering and resizing: Liberate automatically resized content to fit on the television screen.

Correcting content: For example, horizontal scrolling was not considered a ‘TV-like’ property, so content was scaled to fit the horizontal screen dimensions. If more space was needed, vertical scrolling was enabled to allow the viewer to navigate the page. The transcoder would also automatically wrap text that extended outside a given frame’s area. In the case of tables, the transcoder would ignore widths specified in HTML if the cell or the table was too wide to fit within the screen dimensions.
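As a rough illustration of that last width rule, a transcoder only has to compare any declared table width with the TV’s usable picture area and clamp anything that overflows. The sketch below is my own simplified reconstruction, not Liberate’s actual code; the 560-pixel ‘safe area’ and the function name are assumptions:

// Hypothetical width-clamping rule: ignore declared table widths that
// would overflow the TV's usable ('safe') display area.
var TV_SAFE_WIDTH = 560; // assumed usable width in pixels for a TV display

function clampTableWidths(doc) {
  var tables = doc.getElementsByTagName("table");
  for (var i = 0; i < tables.length; i++) {
    var declared = parseInt(tables[i].width, 10);
    if (isNaN(declared) || declared > TV_SAFE_WIDTH) {
      // Too wide or unspecified: cap it at the safe width so the viewer
      // never has to scroll horizontally; vertical scrolling handles the rest.
      tables[i].width = String(TV_SAFE_WIDTH);
    }
  }
}

clampTableWidths(document);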

In practice, most VoD or IPTV services only offered closed walled-garden services at the time, so most of the content was specifically developed for an operator’s VoD service.

WAP and the ‘Mobile Internet’ comes along

Content adaptation and transcoding trundled along quite happily in the background as a requirement for displaying content on non-PC platforms for many years until 2007 and the belated advent of open internet access on mobile or cell phones.

In the late 1990s the world was agog with the Internet which was accessed using personal computers via LANs or dial-up modems. There was clearly an opportunity to bring the ‘Internet’ to the mobile or cell phone. I have put quotation marks around the Internet as the mobile industry has never seen the Internet in the same light as PC users.

The WAP initiative was aimed at achieving this goal and at least it can be credited with a concept that lives on to this day - Mobile Internet (WAP, GPRS, HSDPA on the move!). Data facilities on mobile phones were really quite crude at the time. Displays were monochrome with a very limited resolution. Moreover, the data rates that were achievable at the time over the air were really very low so this necessitated WAP content standards to take this into account.

WAP was in essence simplified HTML and if a content provider wanted to create a service that could be accessed from a mobile phone then they needed to write it in WAP. Services were very simple, as shown in the picture above, and could quite easily be navigated using a thumb.

The main point was that it was quite natural for developers to specifically create a web site that could be easily used on a mobile phone. Content adaptation took place in the authoring itself and there was no need for automated transcoding of content. If you accessed a WAP site it may have been a little slow because of the reliance on GPRS, but services were quite easy and intuitive to use. WAP was extremely basic so it was updated to XHTML, which provided improved look and feel features that could be displayed on the quickly improving mobile phones.

In 2007 we are beginning to see phones with full-capability browsers, backed up by broadband 3G bearers, making Internet access on phones a reality. Now you may think this is just great, but in practice phones are not PCs by a long chalk. Specifically, we are back to browsers interpreting pages differently and, more importantly, the screen sizes on mobile phones are too small to display standard web pages in a way that allows a user to navigate them with ease (things are changing quite rapidly with Apple’s iPhone technology).

Today, as in the early days of WAP, most companies who seriously offer mobile phone content will create a site specifically developed for mobile phone users. Often these sites will have URLs such as m.xxxx.com or xxxx.mobi so that a user can tell that the site is intended for use on a mobile phone.

Although there was a lot of frustration about phones’ capabilities, everything at the mobile phone party was generally OK.

Mobile phone operators have been under a lot of criticism for as long as anyone can remember about their lack of understanding of the Internet and their focus on providing closed walled-garden services, but that seems to be changing at long last. They have recognised that their phones are now capable of being a reasonable platform for accessing the WWW. They have also opened their eyes and realised that there is real revenue to be derived from allowing their users to access the web – albeit in a controlled manner.

When they opened their browsers to the WWW, they realised that this was not without its challenges. In particular, very few web sites have developed versions that can be browsed on a mobile phone. Even more challenging is that the mobile phone content industry can be called embryonic at best, with few service providers that are well known. Customers naturally want to use the web services and visit the web sites that they use on their PCs. Of course, most of these look dreadful on a mobile phone and cannot be used in practice. Although many of the bigger companies are now beginning to adapt their sites to the mobile, Google and MySpace to name but two, 99.9999% (as many 9s as you wish) of sites are designed for a PC only.

This has made mobile phone operators turn to using content transcoding to keep their users using their data services and hence keep their revenues growing. The transcoder is placed in the network and intercepts users’ traffic. If a web page needs to be modified so that it will display ‘correctly’ on a particular mobile phone, the transcoder will automatically change the web page’s content to a layout that it thinks will display correctly. Two of the largest transcoding companies in this space are Openwave and Novarra.

This issue came to the fore recently (September 2007) in a post by Luca Passani on learning that Vodafone had implemented content transcoding by intercepting and modifying the User Agent dialogue that takes place between mobile phone browsers and web sites. From Luca’s page, this dialogue is along the lines of:

  • I am a Nokia 6288,
  • I can run Java apps MIDP2-CLDC 1,
  • I support MP3 ringtones
  • …and so on

His concern, quite rightly, is that this is a standard dialogue that goes on across the whole of the WWW and that enables a web site to adapt and provide appropriate content to the device requesting it. Without it, content providers are unable to ensure that their users will get a consistent experience no matter what phone they are using. Incidentally, Luca provides an open-source XML file called WURFL that contains the capability profile of most mobile phones. This is used by content providers, following a user agent dialogue, to ensure that the content they send to a phone will run – it contains the core information needed to enable content adaptation.
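To make the point concrete, here is a minimal sketch of the kind of device check this dialogue enables. It uses the browser-side navigator.userAgent string purely for illustration; the marker strings and the m.example.com redirect target are my own assumptions, and a real content provider would do this server-side against the full WURFL capability data:

// Hypothetical sketch: inspect the User-Agent string and send obviously
// mobile devices to a purpose-built mobile site.
var ua = navigator.userAgent;
var mobileMarkers = ["Nokia", "SymbianOS", "MIDP", "Windows CE"]; // illustrative list

var isMobile = false;
for (var i = 0; i < mobileMarkers.length; i++) {
  if (ua.indexOf(mobileMarkers[i]) != -1) {
    isMobile = true;
    break;
  }
}

if (isMobile) {
  // A real deployment would consult WURFL for screen size, markup support
  // and so on before deciding exactly what to send.
  window.location.href = "http://m.example.com/";
}

If an operator’s transcoder rewrites or strips the User-Agent header, this kind of check (and its far more common server-side equivalent) silently stops working, which is precisely Luca’s complaint.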

It is conjectured that, if every mobile operator in the world uses transcoders – and it looks like this is going to be the case – then this will add another layer of confusion to the already high challenge of providing content to mobile phones. Not only will content providers have to understand the capabilities of each phone, but they will also need to understand when and how each operator uses transcoding.

Personally I am against transcoding in this market and the reason why can be seen in this excellent posting by Nigel Choi and Luca Passani. In most cases, no automatic transcoding of a standard WWW web page can do better than a dedicated page written specifically for a mobile phone. Yes, there is a benefit for mobile operators in that no matter what page a user selects, something will always be displayed. But will that page be usable?

Of course, transcoders should pass through untouched any web site that is tagged with an m.xxxx or xxxx.mobi URL, as such a site should be capable of working on any mobile phone, but in these early days of transcoding implementation this is not always happening, it seems.

Moreover, the mobile operators say that this situation can be avoided by 3rd party content providers applying to be on the operators’ white list of approved services. If this turns out to be a universal practice then content providers would need to gain approval and get onto the lists of every mobile operator in the world – wow! Imagine an equivalent situation on the PC in which content providers needed to get approval from every ISP. Well, you can’t, can you?

This move represents another aspect of how the control culture of the mobile phone industry comes to the fore in placing their needs before those of 3rd party content providers. This can only damage the 3rd party mobile content and service industry and further hold back the coming of an effective mobile internet. A sad day indeed. Surely, it would be better to play a long game and encourage web sites to create mobile versions of their services?


The Bluetooth standards maze

October 2, 2007

This posting focuses on low-power wireless technologies that enable communication between devices located within a few feet of each other. This applies to voice as well as data communication.

This whole area is becoming quite complex with a whole raft of standards being worked on – ULB, UWB, Wibree, Zigbee etc. This may seem rather strange bearing in mind the wide-scale use of the key wireless technology in this space – Bluetooth.

We are all familiar with Bluetooth as it is now as ubiquitous in use as Wi-Fi but it has had a chequered history by any standard and this has negatively affected its take-up across many market sectors.

Bluetooth first saw the light of day as an ‘invention’ by Ericsson in Sweden back in 1994 and was intended as a low-power wireless standard for inter-’gadget’ communication (Ericsson actually closed its Bluetooth division in 2004). This initially meant hands-free earpieces for use with mobile phones. This is actually quite a demanding application as there is no room for the drop-outs tolerated on an IP network, as these would cause severe dissatisfaction among users.

Incidentally, I always remember buying my first Sony Ericsson hands-free earpiece in 2000, as everyone kept giving me weird looks when I wore it in the street – nothing much has changed, I think!

Standardisation of Bluetooth was taken over by the Bluetooth Special Interest Group (SIG) following its formation in 1998 by Ericsson, IBM, Intel, Toshiba and Nokia. Like many new technologies, it was launched with great industry fanfare as the up-and-coming new thing. This was pretty much at the same time as WAP (covered in a previous post: WAP, GPRS, HSDPA on the move!) was being evangelised. Both of these initiatives initially failed to live up to consumer expectations following the extensive press and vendor coverage.

Bluetooth’s strength lies in its core feature set:

  • It operates in the ‘no licence’ industrial, scientific and medical (ISM) spectrum of 2.4 to 2.485 GHz (as does Wi-Fi of course)
  • It uses a spread spectrum, frequency hopping, full-duplex signal at a nominal rate of 1600 hops/sec
  • Power can be altered from 100mW (Class 1) down to 1mW (Class 3), thus effectively reducing the transmission range from around 100 metres down to around 1 metre
  • It uses an adaptive frequency hopping (AFH) capability, with the transmission hopping between 79 frequencies at 1 MHz intervals, to help reduce co-channel interference from other users of the ISM band. This is key to giving Bluetooth a high degree of interference immunity
  • Bluetooth pairing occurs when two Bluetooth devices agree to communicate with each other and establish a connection. This works because each Bluetooth device has a unique name, given to it by the user or set as the default

Several issues beset early Bluetooth deployments:

  • A significant lack of compatibility between devices meant that Bluetooth devices from different vendors often failed to work with each other. This caused quite a few problems, both in the hands-free mobile world and the personal computer peripheral world, and led to several quick updates.
  • In the PC world, user interfaces were poor, forcing ordinary users to become experts in finding their way around arcane set-up menus.
  • There were also a considerable number of issues in the area of security. There was much discussion about Bluejacking, where an individual could send unsolicited messages to nearby phones that were ‘discoverable’. However, people who turned off discoverability needed an extra step to receive legitimate data transfers, thus complicating ‘legitimate’ use.

Early versions of the standard were fraught with problems and the 1Mbit/s v1.0 release was rapidly updated to v1.1 which overcame many of the early problems. This was followed up by v1.2 in 2003 which helped reduce co-channel interference from non-Bluetooth wireless technologies such as Wi-Fi.

In 2004, V2.0 + Enhanced Data Rate (EDR) was announced that offered higher data rates – up to 3Mbit/s – and reduced power consumption.

To bring us up to date, v2.1 + Enhanced Data Rate (EDR) was released in August 2007, offering a number of enhancements, the most significant of which seems to be an improved and easier-to-use mechanism for pairing devices.

The next version of Bluetooth is v3.0 which will be based on ultra-wideband (UWB) wireless technology. This is called high speed Bluetooth while there is another proposed variant, announced in June 2007, called Ultra Low Power Bluetooth (ULB).

Over this series of updates most of the early problems that plagued Bluetooth have been addressed, but it cannot be assumed that Bluetooth’s market share is unassailable: a number of alternatives are on the table because Bluetooth is viewed as not meeting all of the market’s needs – especially those of the automotive market.

Low-power wireless

Ultra Low-power Bluetooth (ULB)

Before talking about ULB, we need to look at one of its antecedents, Wibree.

This must be one of the shortest-lived ‘standards’ of all time! Wibree was announced in October 2006 by Nokia, though they did indicate that they would be willing to merge its activities with other standards activities if that made sense.

“Nokia today introduced Wibree technology as an open industry initiative extending local connectivity to small devices… consuming only a fraction of the power compared to other such radio technologies, enabling smaller and less costly implementations and being easy to integrate with Bluetooth solutions.”

Nokia felt that there was no agreed open standard for ultra-low power communications so it decided that it was going to develop one. One of the features that consumes power in Bluetooth is its frequency hopping capability, so Wibree would not use it. Wibree is also more tuned to data applications as it uses variable packet lengths, unlike the fixed packet length of Bluetooth. This looks similar to the major argument that took place when ATM (The demise of ATM) was first mooted. The voice community wanted short packets while the data community wanted long or variable packets – the industry ended up with a compromise that suited neither application.

More on Wibree can be found at wibree.com. According to this site:

“Wibree and Bluetooth technology are complementary technologies. Bluetooth technology is well-suited for streaming and data-intensive applications such as file transfer and Wibree is designed for applications where ultra low power consumption, small size and low cost are the critical requirements … such as watches and sports sensors”.

On June 12th 2007 Wibree merged with the Bluetooth SIG and the webcast of the event can be seen here. This will result in Wibree becoming part of the Bluetooth specification as an ultra low-power extension of Bluetooth known as ULB.

ULB is intended to complement the existing Bluetooth standard by incorporating Wibree’s original target of reducing the power consumption of devices using it – it aims to consume only a fraction of the power that current Bluetooth devices consume. ULB will be designed to operate in a standalone mode or in a dual-mode as a bolt-on to Bluetooth. ULB will reuse existing Bluetooth antennas and needs just a small amount of additional logic when operating in dual-mode with standard Bluetooth, so it should not add too much to costs.

When announced, the Bluetooth SIG said that ULB was aimed at wireless-enabling small personal devices such as sports sensors (heart rate monitors), healthcare monitors (blood pressure monitors), watches (remote control of phones or MP3 players) and automotive devices (tyre pressure monitors).

Zigbee

The Zigbee standard is managed by the Zigbee Alliance and is built on the IEEE 802.15.4 standard. It was ratified in 2004.

According to the Alliance site:

“ZigBee was created to address the market need for a cost-effective, standards-based wireless networking solution that supports low data-rates, low-power consumption, security, and reliability.

ZigBee is the only standards-based technology that addresses the unique needs of most remote monitoring and control and sensory network applications.”

This puts the Bluetooth ULB standard in competition with Zigbee, as it aims to be cheaper and simpler to implement than Bluetooth itself. In a similar vein to the ULB announcements, Zigbee is said to use about 10% of the software and power required to run a Bluetooth node.

A good overview can be found here – ZigBee Alliance Tutorial – which talks about all the same applications as outlined in the joint Wibree / Bluetooth ULB announcement above. Zigbee’s characteristics are:

  • Low power compared to Bluetooth
  • High resilience as it will operate in a much noisier environment than Bluetooth or Wi-Fi
  • Full mesh working between nodes
  • 250kbit/s data rate
  • Up to 65,536 nodes.

The alliance says this makes Zigbee ideal for both home automation and industrial applications.

It’s interesting to see that one of Zigbee’s standards competitors has posted an article entitled New Tests Cast Doubts on ZigBee. All’s fair in love and war I guess!

So there we have it. It looks like Bluetooth ULB is being defined to compete with Zigbee.


High-speed wireless

High Speed Bluetooth 3.0

There doesn’t seem to be too much information available on the proposed Bluetooth version 3.0. However, on the WiMedia Alliance site I found this statement by Michael Foley, Executive Director, Bluetooth SIG. WiMedia is the organisation behind the Ultra-Wideband (UWB) wireless standards.

“Having considered the UWB technology options, the decision ultimately came down to what our members want, which is to leverage their current investments in both UWB and Bluetooth technologies and meet the high-speed demands of their customers. By working closely with the WiMedia Alliance to create the next version of Bluetooth technology, we will enable our members to do just that.”

According to a May 2007 presentation entitled High-Speed Bluetooth on the WiMedia site, the Bluetooth SIG will reference the WiMedia Alliance [UWB] specification and the solution will be branded with Bluetooth trademarks. The solution will be backwards compatible with the current Bluetooth 2.0 standard.

It also talks about a combined Bluetooth/UWB stack:

  • With high data rate mode devices containing two radios initially
  • Over time, the radios will become more tightly integrated sharing components

The specification will be completed in Q4 2007, with first silicon prototyping complete in Q3 2008. I have to say that this approach does not look to be either elegant or low cost to me. However, time will tell.

That completes the Bluetooth camp of wireless technologies. Let’s look at some others.


Ultra-Wideband (UWB)

As the Bluetooth SIG has adopted UWB as the base of Bluetooth 3.0, what actually is UWB? A good UWB overview presentation can be found here. Essentially, UWB is a wireless protocol that can deliver high bandwidth over short distances.

Its characteristics are:

  • UWB uses spread spectrum techniques over a very wide bandwidth in the 3.1 to 10GHz spectrum in the US and 6.0 to 8.5GHz in Europe
  • It uses very low power so that it ‘co-exists’ with other services that use the same spectrum
  • It aims to deliver 480Mbit/s at distances of several metres

The following diagram from the presentation describes it well:

In theory, there should never be an instance where UWB interferes with an existing licensed service. In some ways, this has similarities to BPL (The curse of BPL), though it should not be so profound in its effects. To avoid interference it uses Detect and Avoid (DAA) technology which, I guess, is self-defining in its description without going into too much detail here.

One company that is making UWB chips is Artimi, based in Cambridge, UK.

Wireless USB (WUSB)

In the same way that the Bluetooth SIG has adopted UWB, the USB Implementers Forum has adopted WiMedia’s UWB specification as the basis of Wireless USB. According to Jeff Ravencraft, President and Chairman, USB-IF and Technology Strategist, Intel:

“Certified Wireless USB from the USB-IF, built on WiMedia’s UWB platform, is designed to usher in today’s more than 2 billion wired USB devices into the area of wireless connectivity while providing a robust wireless solution for future implementations. The WiMedia Radio Platform meets our objective of using industry standards to ensure coexistence with other WiMedia UWB connectivity protocols.”

A presentation on Wireless USB can be downloaded here.

Wireless USB will deliver around the same bandwidth as Bluetooth 3.0 – 480Mbit/s at 3 metres – because it is based on the same technology, and it will be built into Microsoft Vista™.

One is bound to ask what the difference is between Wireless USB and Bluetooth 3.0, as they are going to be based on the same standard. Well, one answer is that Wireless USB products are being shipped today, such as the Belkin Wireless USB Adapter shown on the right.

A real benefit of both standards adopting UWB will be that both will use the same underlying radio. Manufacturers can choose whichever standard they want and there is no need to change hardware designs. This can only help both standards’ adoption.

However, because of the wide spectrum required to run UWB – multiple GHz – different spectrum ranges in each region are being allocated. This is a very big problem as it means that radios in each country or region will need to be different to accommodate the disparate regulatory requirements.

In the same way that Bluetooth ULB will compete with Zigbee (an available technology), Bluetooth 3.0 will compete with Wireless USB (also an available technology).

Round up

So there you have it – the relationships between Bluetooth 2.0, Bluetooth 3.0, Wibree, Bluetooth ULB, Zigbee, high speed Bluetooth, UWB and Wireless USB. So things are clear now, right?

So what about Wi-Fi’s big brother, WiMAX? And don’t let us forget HSDPA (WAP, GPRS, HSDPA on the move!), the 3G answer to broadband services. At least these can be put in a category of wide area wireless services to separate them from near-distance wireless technologies. I have to say I find all these standards very confusing, and this makes any decision that relies on a bet about which technology will win out in the long run exceedingly risky. At least Bluetooth 3.0 and Wireless USB use the same radio!

At an industry conference I attended this morning, a speaker talked about an “arms war” between telcos and technology vendors. If you add standards bodies to this mix, I really do wonder where we consumers are placed in their priorities. Can you see PC manufacturers building all these standards onto their machines?

I could also write about WiMAX, Near Field Communications, Z-Wave and RFID but I think that is better left for another day!


WAP, GPRS, HSDPA on the move!

September 4, 2007

Over the last few months I have written many posts about Internet technologies but they have been pretty much focussed on terrestrial rather than wireless networks (other than dabbling in Wi-Fi with my overview of The Cloud – The Cloud hotspotting the planet). This exercise was rather interesting as I needed to go back to the beginning and look at how the technologies evolved, starting with The demise of ATM.

Back in 1994 a colleague of mine, Gavin Thomas, wrote about Mobile Data protocols and it’s interesting to glance back to see how ‘crude’ mobile data services were at the time. Of course, you would expect that to be the case as GSM Digital Cellular Radio was a pretty new concept at the time as well. That post ended with the statement that “GSM has a bright future”. Maybe it should have read “the future is Orange”! No one foresaw in those days the up-and-coming explosive growth of GSM and mobile phone usage. Certainly no one predicted the surge in use of SMS.

Acronym hell has extended itself to mobile services over the last few years and the market has become littered with three, four and even five letter acronyms. In particular, wireless Internet started with a three letter acronym back in the late 1990s – WAP (Wireless Application Protocol) – progressing through four letter acronyms, GPRS (General Packet Radio Service) and EDGE (Enhanced Data GSM Environment), and is now moving to a five letter broadband 3G acronym – HSDPA (High-Speed Downlink Packet Access). Phew!

The history of mobile data services has been littered with undelivered hype over the years that still lives on today. However, that hype led to the development of services that really do work unlike some of the early initiatives like WAP.

Ah, WAP, now that was interesting. I would probably put this at the top of my list of over-hyped protocols of all time. At least when ATM was hyped this only took place within the telecommunications community whereas WAP was hyped to the world’s consumers which created much more visibility of ‘egg on the face’ for mobile operators and manufacturers.

So what was WAP?

In the late 1990s the world was agog with the Internet which was accessed using personal computers via LANs or dial-up modems. There was clearly an opportunity (whether it was right or wrong) to bring the ‘Internet’ to the mobile or cell phone. I have put quotation marks around the Internet as the mobile industry has never seen the Internet in the same light as PC users – more on this later.

The WAP initiative was aimed at achieving this goal and at least it can be credited with a concept that lives on to this day - Mobile Internet. Data facilities on mobile phones were really quite crude at the time. Displays were monochrome with a very limited resolution. Moreover, the data rates that were achievable at the time over the air were really very low so this necessitated WAP content standards to take this into account.

There were several aspects that needed standardising under the WAP banner:

  • Transmission protocols. WAP defined how packets were handled on a 2G wireless network and consisted of wireless versions of TCP and UDP as seen on the Internet; it also used WTP (Wireless Transaction Protocol) to control communications between the mobile phone and the base station. WTP itself contained an error correction capability to help cope with the unreliable wireless bearer.
  • Mobile HTML: It was immediately recognised that, due to the limited screen size and the low data rates achievable on a mobile phone, a very simplified version of HTML was required for mobile web sites. This led to the development of WML (Wireless Markup Language). This was a VERY cut-down version of HTML with very little capability, and any graphics used had to be tiny as well. Later, WAP 2.0 was defined, which improved things somewhat and was based on a cut-down version of XHTML.

WAP clearly did not live up to its promise of a mobile version of the Internet, with its crude and constrained user interface, high latency, the need to struggle with arcane menu structures (has anything changed here in ten years?) and the exceedingly slow data rates experienced on the mobile networks of the day.

However, this did not stop mobile service operators from over-hyping WAP services, with endless hoardings and TV adverts extolling Internet access from mobiles. At one time it looked as if mobile operator advertising departments never talked to their engineering departments and were living in a world of their own that bore little relation to reality.

It all had to crash and it did, along with the ‘Internet bubble’, in 2001. Many mobile operators sold their WAP service as an ‘open’ service similar to the Internet. In reality, these were walled-garden services that forced users to visit the operator’s portal as their first port of call, making it well nigh impossible for small application developers to get their services in front of users. One could ask how much this has changed by 2007?

I should not forget to also mention that the cost of using WAP services was very high based as it was on bits transmitted. This led to shockingly high bills and low usage and provided one of the great motivators behind the ‘unforeseen’ growth of SMS services.

I believe that much of this still lives on in the conscious and unconscious memory of consumers and held back major usage of mobile data services for many years.

Along comes the ‘always-on’ GPRS service

After licking the WAP wounds for several years, it was clearly recognised that something better was required if data services were to take off for mobile operators. One of the big issues with WAP was the poor data transmission speed achieved, so GPRS (General Packet Radio Service) was born.

GPRS is an IPv4-based packet-switched protocol in which data users share the same data channel in a cell. The increased data rates in GPRS derive from knitting together multiple TDMA time slots, where each individual GSM time slot can manage between 9.6 and 21.4 kbit/s. Linking slots together can deliver more than 40kbit/s (up to 80kbit/s) depending on the configuration implemented.
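As a rough worked example of where those aggregate figures come from (the four-slot bundle is my assumption; real handsets varied by multislot class), multiplying the per-slot rates quoted above by the number of bundled downlink slots gives:

// Illustrative arithmetic only, using the per-slot figures quoted above.
var lowPerSlotKbps = 9.6;   // lower bound per time slot
var highPerSlotKbps = 21.4; // upper bound per time slot
var bundledSlots = 4;       // assumed number of downlink slots bundled together

// 4 x 9.6  = 38.4 kbit/s  (roughly the "greater than 40kbit/s" figure)
// 4 x 21.4 = 85.6 kbit/s  (roughly the "up to 80kbit/s" figure)
alert("Low:  ~" + (lowPerSlotKbps * bundledSlots) + " kbit/s\n" +
      "High: ~" + (highPerSlotKbps * bundledSlots) + " kbit/s");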

GPRS users are connected all the time and have access to the maximum bandwidth available if no other users in their cell are receiving data at the same time.

The improved data rate (which is in the range of an old dial-up modem) and the improved reliability experienced when using GPRS have definitely led to a wider use of mobile data services. Incidentally, a shared packet service should mean lower cost, but as users are still billed on a kilobits-transmitted basis, GPRS bills are still shockingly high if the service is used a lot.

GPRS services are so reliable that there is widespread availability of GPRS routers, as shown in the picture above (Linksys), which are often used for LAN back-up.

GPRS was definitely a step in the right direction.

Gaining an EDGE

EDGE (Enhanced Data rates for GSM Evolution) is an upgrade to GPRS that has gained some popularity in the USA and Europe and is known as a 2.5G service (although it derives from 3G standards).

EDGE can be deployed by any carrier who offers GPRS services and represents an upgrade to GPRS by requiring a swap-out to an EDGE compatible transceiver and base station subsystem.

By using an 8PSK (8 phase shift keying) modulation scheme on each time slot it’s possible to increase the data rate within a single time slot to 48kbit/s. Thus, in theory, it would be possible, by combining all 8 time slots, to deliver an aggregate 384kbit/s data service. In practice this would not be possible, as there would be no spare bandwidth available for voice services!

All in all, EDGE achieves what it set out to achieve – higher data rates without an upgrade to full 3G capability – and has been widely deployed.

The promise of the HSDA family

Following on from WAP, GPRS and EDGE have been the dominant protocols used for mobile data access for a number of years now. Achieved data rates are still slow by ADSL standards and this has put off many users after they have played with the services for a bit.

With the tens of billions of dollars spent on 3G licences at the end of the last century one would have imagined that we would all have access to megabit data rates on our mobile or cell phones by now, but that has just not been the case. 3G has been slow to be deployed and presented many operational issues that needed to be resolved.

The Universal Mobile Telecommunications System (UMTS) known as 3GSM uses W-CDMA spread spectrum technology as its air interface and delivers its data services under the standards known as HSDPA (High-Speed Downlink Packet Access) and HSUPA (High-Speed Uplink Packet Access) known collectively as HSDA (High-Speed Data Access).

Unlike the TDMA technology used in GSM, W-CDMA is a spread spectrum technology where all users transmit ‘on top’ of each other over a wide spectrum, in this case 5MHz radio channels. The equipment identifies individual users in the aggregate stream of data through the use of unique user codes that can be detected. (I explained how spread spectrum radio works in 1992 in Spread Spectrum Radio). The air interface adopted makes a 3G service incompatible with GSM.

In theory, W-CDMA is able to support data rates up to 14Mbit/s, but in reality offered rates are in the 384kbit/s to 3.6Mbit/s range. The service is delivered using a downlink channel called the HS-DSCH (High-Speed Downlink Shared Channel), which allows higher bit rate transmission than ordinary channels; control functions are carried on sister channels. The HS-DSCH channel is shared between all users in a cell, so in practice it would not be possible to deliver the ceiling data rate to more than a single subscriber, which makes me wonder how the industry is going to support lots of mobile TV users on a single cell. More on this issue in a future post.

Standardisation of HSDPA is carried out by the 3rd Generation Partnership Project (3GPP).

Inevitably, because of the ultra slow roll-out of UMTS 3G networks, HSDPA will take a long time to get to your front door, although this is happening in quite a few countries. Here in the UK, the 3 network is currently launching (August 2007) its HSDPA data service, which will be followed by an HSUPA capability at a later date. Initially it will only offer HSDPA data cards for PCs.

Interestingly, The Register reports that 3 will offer 2.8Mbit/s and that the tariff will start at £10 a month for the Broadband Lite service providing 1Gbyte of data, rising to £25 for 7Gbytes with the Broadband Max service.

You can pre-order a broadband modem now as shown on the right.

Incidentally, Vodafone’s UK HSDPA service can be found here and their 7.2Mbit/s service here.

The future is LTE

Another project within 3GPP is the Long Term Evolution (LTE) activity, part of Release 8. The core focus of the LTE team is, as you would expect, on increasing available bandwidth, but there are a number of other concerns they are working on.

  • Reduction of latency: Latency is not an issue for streamed services but is a prime concern for interactive services. There is no point post-WAP launching advanced interactive services if users have to wait around like in the early days of the Internet. Users have been there before.
  • Cost reduction: This is pretty self evident, but the activity is focussed on reducing operators’ deployment costs, not reducing consumer charge rates!
  • QoS capability: The ubiquitous need for policy and QoS capability, which I’ve explored in depth for fixed networks.

The System Architecture Evolution (SAE) is another project that is running in parallel with, but behind, the LTE work. It comes as little surprise that the SAE is looking at creating a flat all-IP network core which will (supposedly) be the key mechanism by which operators will reduce their operating costs. This is still debatable to my mind.

Details of this new architecture can be found under the auspices of the Telecoms & Internet Services & Protocols for Advanced Networks group, or TISPAN (a six letter acronym!), which is a joint activity between ETSI and 3GPP. To quote from the web site:

Building upon the work already done by 3GPP in creating the SIP-based IMS (IP Multimedia Subsystem), TISPAN and 3GPP are now working together to define a harmonized IMS-centric core for both wireless and wireline networks.

This harmonized ALL IP network has the potential to provide a completely new telecom business model for both fixed and mobile network operators. Access independent IMS will be a key enabler for fixed/mobile convergence, reducing network installation and maintenance costs, and allowing new services to be rapidly developed and deployed to satisfy new market demands.

Based as it is on IMS (which I wrote about in IP Multimedia Subsystem or bust!) this could turn out to be a project and a half. Saying that the “devil is in the detail” would seem to be a bit of an understatement when considering TISPAN.

A recent informative PowerPoint presentation about the benefits of NGN, convergence and TISPAN can be found here.

Roundup

We seem to have come a long way since the early days of WAP, with HSDA now starting to deliver the speed of fixed-line ADSL to the mobile world. Transfer rates are indeed important, but high latency can be every bit as frustrating when using interactive services, so it is important to focus on its reduction. The challenge with 3G is its limited coverage, and this could cause slowness of uptake – as could pricing, unless flat-rate access charges become the norm rather than the per-megabit charging we have seen in the past. And boy, I bet the inter-operator roaming charges will be high!

However, bandwidth and service accessibility are not the only issues that need addressing for the mobile Internet market to sky rocket. The platform itself is still a fundamental challenge, limited screen size and arcane menus to name but two issues. The challenge of writing applications that are able to run on the majority of phones is definitely one of the other major issues (I touched on this in Mobile apps: Java just doesn’t cut the mustard?).

I reviewed a book earlier this year entitled Mobile Web 2.0! that talks extensively about the walled-garden and protectionist attitudes still exhibited by many of the mobile operators. This has to change and there are definite signs that this is beginning to happen with fully open Internet access now being offered by the more enlightened operators.

Maybe, just maybe, if it all comes together over the next decade then the prediction in the above book “The mobile phone network is the computer. Of course, when we say ‘phone network’ we do not mean the ‘Mobile operator network. Rather we mean an open, Web driven application…” could just come about.

