Content transcoding hits mobiles

October 18, 2007

Content adaptation and transcoding are high on the agenda of many small mobile content and services companies at the moment, and are causing more bad language and angst than anything else I can remember in the industry in recent times. But before I delve into that issue, what exactly is content adaptation?

Content translation, and the need for it on the Internet, is as old as the browser itself and is caused by standards – or rather, the interpretation of them. Although HTML, the language of the web page, transformed the nature of the Internet by enabling anyone to publish and access information through the World Wide Web, many areas of the specification left a sufficient degree of fogginess for browser developers to ‘fill in’ with their own interpretation of how content should be displayed.

In the early days, most of us engaged with the WWW through the Netscape Navigator browser. Indeed, Netscape epitomised all the early enthusiasm for the Internet, and its IPO on August 9, 1995 set in play the fabulously exciting ‘bubble’ of the late 1990s. The Netscape browser held over 90% market share in the years following the IPO.

This near monopoly made it very easy for early web page developers, as content only needed to run on one browser. However, that did not make life particularly easy, because Netscape Navigator had so many problems in how it arbitrarily interpreted the HTML standards. In practice, a browser is only an interpreter after all and, like human interpreters, is prone to misinterpretation when there are gaps in the standards.

(Browser market shares. Source: Wikipedia)

Content Adaptation

Sometimes the drafted HTML displayed fine in Navigator, but at other times it didn’t. This led to whole swathes of workarounds that made the task of developing interesting content a rather hit-and-miss affair. A good example is the HTML standard that says the TABLE tag should support a CELLSPACING attribute to define the space between parts of the table. But the standards don’t define a default value for that attribute, so unless you explicitly define CELLSPACING when building your page, two browsers may use different amounts of white space in your table.
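The defence was simply to state the value yourself. A minimal sketch (the values here are arbitrary):

```html
<!-- Setting CELLSPACING (and CELLPADDING) explicitly means both browsers
     render the same 2-pixel gap instead of applying their own defaults. -->
<table cellspacing="2" cellpadding="2" border="1">
  <tr><td>Cell A</td><td>Cell B</td></tr>
</table>
```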

(Credit: NetMechanic)

This type of problem was further complicated by the adoption of browser-specific extensions. The original HTML specifications were rather basic, and it was quite easy to envision and implement extensions that enabled better presentation of content. Netscape did this with abandon and even invented a web page scripting language that is universal today – JavaScript (which has nothing to do with Sun’s Java language).

Early JavaScript was riddled with problems and, from my limited experience of writing in the language, most of the time was spent trying to understand why code that looked correct according to the rule book failed to work in practice!

Around this time I remember attending a Microsoft presentation in Reston where Bill Gates spent an hour talking about why Microsoft was not in favour of the Internet and why it was not going to create a browser itself. Oh how times change: within a year BG announced that the whole company was going to focus on the Internet and that their browser would be given away free to “kill Netscape”.

In fact, I personally lauded Internet Explorer when it hit the market because, in my opinion, it actually worked very well. It was faster than Navigator but, more importantly, when you wrote HTML or JavaScript the code worked as you expected it to. This made life so much easier. The problem was that you now had to write pages that would run on both browsers, or you risked alienating a significant sector of your users. As there still are today, there were many users who flatly refused to change from Navigator to IE because of their emotional dislike of Microsoft.

From that point on it was downhill for a decade, as you had to include browser detection on your web site so that appropriately coded browser-specific – and, even worse, version-specific – content could be sent to users. Without this, it was just not possible to guarantee that users would be able to see your content. Below is the typical code you had to use:

var browserName = navigator.appName;
if (browserName == "Netscape")
{
  alert("Hi Netscape User!");
}
else
{
  if (browserName == "Microsoft Internet Explorer")
  {
    alert("Hi, Explorer User!");
  }
}

If we now fast-forward to 2007, the world of browsers has changed tremendously but the problem has not gone away. Although it is less common to detect browser types and send browser-specific code, considerable problems still exist in making content display in the same way on all browsers. I can say from practical experience that making an HTML page with extensive style sheets display correctly on Firefox, IE 6 and IE 7 is not a particularly easy task, and it is a definitely frustrating one!

The need to adapt content to a particular browser was the first example of what is now called content adaptation. Another technology in this space is called content transcoding.

Content transcoding

I first came across true content transcoding when I was working with the first real implementation of a Video on Demand service at Hong Kong Telecom in the mid 1990s. This was based on proprietary technology, and a colleague and I were of the opinion that it should be based on IP technologies to be future proof. Although we lost that battle, we did manage to get Mercury in the UK to base its VoD developments on IP. Mercury went on to sell its consumer assets to NTL, so I’m pleased that the two of us managed to get IP adopted as the basis of broadband video services in the UK at the time.

Around this time, Netscape was keen to move Navigator into the consumer market, but it was too bloated to run on a set-top box, so Netscape created a new division called Navio which built a cut-down browser for the set-top box consumer market. Their main aim, however, was to create a range of non-PC Internet access platforms.

This was all part of the anti-PC / Microsoft community that then existed (exists?) in Silicon Valley. Navio morphed into Network Computer Inc. owned by Oracle and went on to build another icon of the time – the network computer. NCI changed its name to Liberate when it IPOed in 1999. Sadly, Liberate went into receivership in the early 2000s but lives on today in the form of SeaChange who bought their assets.

Anyway, sorry for the sidetrack, but it was through Navio that I first came across the need to transcode content, as a normal web page just looked awful on a TV set. TV Navigator also transcoded HTML seamlessly into MPEG. The main problems in presenting a web page on a TV were:

Fonts: Text that could be read easily on a PC could often not be read on a TV because the font size was too small or the font was too complex. So, fonts were increased in size and simplified.

Images: The small amount of memory on an STB meant that the browser needed to be cut down in size to run. One way of achieving this was to cut down the number of content types that could be supported. For example, instead of the browser being able to display all picture formats – BMP, GIF, JPG etc. – it would only render JPG pictures. This meant that pictures taken off the web needed to be converted to JPG at the server or head-end before being sent to the STB.

Rendering and resizing: Liberate automatically resized content to fit on the television screen.

Correcting content: For example, horizontal scrolling is not considered a ‘TV-like’ property, so content was scaled to fit the horizontal screen dimensions. If more space was needed, vertical scrolling was enabled to allow the viewer to navigate the page. The transcoder would also automatically wrap text that extended outside a given frame’s area. In the case of tables, the transcoder would ignore widths specified in HTML if the cell or the table was too wide to fit within the screen dimensions.
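Two of these rules are easy to sketch in JavaScript. The function names below are my own illustrations, not Liberate’s actual code:

```javascript
// Rule: ignore an authored table width if it is too wide for the screen.
function clampWidth(authoredWidth, screenWidth) {
  return authoredWidth > screenWidth ? screenWidth : authoredWidth;
}

// Rule: wrap text that would extend outside a frame of `cols` characters,
// breaking on word boundaries.
function wrapText(text, cols) {
  var words = text.split(" ");
  var lines = [];
  var line = "";
  for (var i = 0; i < words.length; i++) {
    var candidate = line === "" ? words[i] : line + " " + words[i];
    if (candidate.length > cols && line !== "") {
      lines.push(line);      // current line is full, start a new one
      line = words[i];
    } else {
      line = candidate;
    }
  }
  if (line !== "") lines.push(line);
  return lines;
}
```

So a table authored at 800 pixels would be clamped to a 640-pixel TV raster, and any over-long text would be folded onto extra lines rather than forcing horizontal scrolling.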

In practice, most VoD or IPTV services only offered closed walled-garden services at the time, so most of the content was specifically developed for an operator’s VoD service.

WAP and the ‘Mobile Internet’ comes along

Content adaptation and transcoding trundled along quite happily in the background as a requirement for displaying content on non-PC platforms for many years until 2007 and the belated advent of open internet access on mobile or cell phones.

In the late 1990s the world was agog with the Internet which was accessed using personal computers via LANs or dial-up modems. There was clearly an opportunity to bring the ‘Internet’ to the mobile or cell phone. I have put quotation marks around the Internet as the mobile industry has never seen the Internet in the same light as PC users.

The WAP initiative was aimed at achieving this goal and at least it can be credited with a concept that lives on to this day - Mobile Internet (WAP, GPRS, HSDPA on the move!). Data facilities on mobile phones were really quite crude at the time. Displays were monochrome with a very limited resolution. Moreover, the data rates that were achievable at the time over the air were really very low so this necessitated WAP content standards to take this into account.

WAP was in essence simplified HTML, and if a content provider wanted to create a service that could be accessed from a mobile phone they needed to write it in WAP. Services were very simple, as shown in the picture above, and could quite easily be navigated using a thumb.

The main point was that it was quite natural for developers to specifically create a web site that could be easily used on a mobile phone. Content adaptation took place in the authoring itself, and there was no need for automated transcoding of content. If you accessed a WAP site it may have been a little slow, because of the reliance on GPRS, but services were quite easy and intuitive to use. WAP was extremely basic, so it was updated to XHTML, which provided improved look-and-feel features that could be displayed on the quickly improving mobile phones.

In 2007 we are beginning to see phones with full-capability browsers, backed up by broadband 3G bearers, making Internet access a reality on phones today. Now you may think this is just great but, in practice, phones are not PCs by a long chalk. Specifically, we are back to browsers interpreting pages differently and, more importantly, the screen sizes on mobile phones are too small to display standard web pages in a way that allows a user to navigate them with ease (things are changing quite rapidly with Apple’s iPhone technology).

Today, as in the early days of WAP, most companies who seriously offer mobile phone content will create a site specifically developed for mobile phone users. Often these sites will have URLs such as m.xxxx.com or xxxx.mobi so that a user can tell that the site is intended for use on a mobile phone.

Although there was a lot of frustration about phones’ capabilities everything at the mobile phone party was generally OK.

Mobile phone operators have been criticised for as long as anyone can remember for their lack of understanding of the Internet and for focusing on providing closed walled-garden services, but that seems to be changing at long last. They have recognised that their phones are now capable of being a reasonable platform for accessing the WWW. They have also opened their eyes and realised that there is real revenue to be derived from allowing their users to access the web – albeit in a controlled manner.

When they opened their browsers to the WWW, they realised that this was not without its challenges. In particular, very few web sites have been developed that can be browsed on a mobile phone. Even more challenging, the mobile phone content industry can be called embryonic at best, with few well-known service providers. Customers naturally want to use the web services and visit the web sites that they use on their PCs. Of course, most of these look dreadful on a mobile phone and cannot be used in practice. Although many of the bigger companies – Google and MySpace to name but two – are now beginning to adapt their sites to the mobile, 99.9999% (add as many 9s as you wish) of sites are designed for a PC only.

This has made mobile phone operators turn to using content transcoding to keep their users using their data services and hence keep their revenues growing. The transcoder is placed in the network and intercepts users’ traffic. If a web page needs to be modified so that it will display ‘correctly’ on a particular mobile phone, the transcoder will automatically change the web page’s content to a layout that it thinks will display correctly. Two of the largest transcoding companies in this space are Openwave and Novarra.

This issue came to the fore recently (September 2007) in a post by Luca Passani on learning that Vodafone had implemented content transcoding by intercepting and modifying the User Agent dialogue that takes place between mobile phone browsers and web sites. From Luca’s page, this dialogue is along the lines of:

  • I am a Nokia 6288,
  • I can run Java apps MIDP2-CDLC 1,
  • I support MP3 ringtones
  • …and so on

His concern, quite rightly, is that this is a standard dialogue that goes on across the whole of the WWW and enables a web site to adapt and provide appropriate content to the device requesting it. Without it, sites are unable to ensure that their users will get a consistent experience no matter what phone they are using. Incidentally, Luca provides an open-source XML file called WURFL that contains the capability profiles of most mobile phones. This is used by content providers, following a user-agent dialogue, to ensure that the content they send to a phone will run – it contains the core information needed to enable content adaptation.
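The idea can be sketched in a few lines of JavaScript. The capability table and helper below are invented for illustration – the real WURFL file is a large XML document queried through its own APIs:

```javascript
// Toy capability table in the spirit of WURFL (illustrative entries only;
// the real repository is far richer and keyed off the User-Agent string).
var capabilities = {
  "Nokia6288": { screenWidth: 240, mp3Ringtones: true,  midp2: true },
  "Nokia3310": { screenWidth: 84,  mp3Ringtones: false, midp2: false }
};

// After the User-Agent dialogue identifies the handset, pick a content
// variant the device can actually render.
function chooseVariant(deviceId) {
  var caps = capabilities[deviceId];
  if (!caps) return "basic-wap";                 // unknown device: safest option
  return caps.screenWidth >= 176 ? "xhtml-rich" : "xhtml-basic";
}
```

An operator transcoder that rewrites or strips the User-Agent header breaks exactly this lookup, which is the root of Luca’s complaint.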

It is conjectured that if every mobile operator in the world uses transcoders – and it looks like this is going to be the case – then this will add another layer of confusion to the already considerable challenge of providing content to mobile phones. Not only will content providers have to understand the capabilities of each phone, but they will also need to understand when and how each operator uses transcoding.

Personally, I am against transcoding in this market, and the reason why can be seen in this excellent posting by Nigel Choi and Luca Passani. In most cases, no automatic transcoding of a standard WWW web page can be better than a dedicated page written specifically for a mobile phone. Yes, there is a benefit for mobile operators in that, no matter what page a user selects, something will always be displayed. But will that page be usable?

Of course, transcoders should pass through untouched any web site that is tagged with an m.xxxx or xxxx.mobi URL, as such a site should be capable of working on any mobile phone, but in these early days of transcoding implementation this is not always happening, it seems.

Moreover, the mobile operators say that this situation can be avoided by 3rd-party content providers applying to be on the operators’ white list of approved services. If this turns out to be a universal practice, then content providers would need to gain approval and get onto the lists of every mobile operator in the world – wow! Imagine an equivalent situation on the PC, where content providers needed to get approval from every ISP. Well, you can’t, can you?

This move represents another aspect of how the control culture of the mobile phone industry comes to the fore, placing operators’ needs before those of 3rd-party content providers. This can only damage the 3rd-party mobile content and service industry and further hold back the coming of an effective mobile internet. A sad day indeed. Surely it would be better to play a long game and encourage web sites to create mobile versions of their services?


The Bluetooth standards maze

October 2, 2007

This posting focuses on low-power wireless technologies that enable communication between devices that are located within a few feet of each other. This can apply to both voice communications as well as data communication.

This whole area is becoming quite complex with a whole raft of standards being worked on – ULB, UWB, Wibree, Zigbee etc. This may seem rather strange bearing in mind the wide-scale use of the key wireless technology in this space – Bluetooth.

We are all familiar with Bluetooth as it is now as ubiquitous in use as Wi-Fi but it has had a chequered history by any standard and this has negatively affected its take-up across many market sectors.

Bluetooth first saw the light of day as an ‘invention’ by Ericsson in Sweden back in 1994, intended as a low-power wireless standard for inter-’gadget’ communication (Ericsson actually closed its Bluetooth division in 2004). Initially this meant hands-free earpieces for use with mobile phones. This is actually quite a demanding application, as there is no room for the drop-outs you might tolerate in an IP network – these would cause severe dissatisfaction among users.

Incidentally, I always remember buying my first Sony Ericsson hands-free earpiece in 2000, as everyone kept giving me weird looks when I wore it in the street – nothing much has changed, I think!

Standardisation of Bluetooth was taken over by the Bluetooth Special Interest Group (SIG) following its formation in 1998 by Ericsson, IBM, Intel, Toshiba and Nokia. Like many new technologies, it was launched with great industry fanfare as the up-and-coming new thing. This was pretty much at the same time as WAP (covered in a previous post: WAP, GPRS, HSDPA on the move!) was being evangelised. Both of these initiatives initially failed to live up to consumer expectations following the extensive press and vendor coverage.

Bluetooth’s strength lies in its core feature set:

  • It operates in the ‘no licence’ industrial, scientific and medical (ISM) spectrum of 2.4 to 2.485 GHz (as does Wi-Fi of course)
  • It uses a spread spectrum, frequency hopping, full-duplex signal at a nominal rate of 1600 hops/sec
  • Power can be altered from 100mW (Class 1) down to 1mW (Class 3), thus effectively reducing the distance of transmission from 10 metres to 1 metre
  • It uses adaptive frequency hopping (AFH), with the transmission hopping between 79 frequencies at 1 MHz intervals to help reduce co-channel interference from other users of the ISM band. This is key to giving Bluetooth a high degree of interference immunity
  • Bluetooth pairing occurs when two Bluetooth devices agree to communicate with each other and establish a connection. This works because each Bluetooth device has a unique name, given to it by the user or set as the default
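The channel arithmetic behind the hopping is simple: the 79 channels sit at 1 MHz spacing starting at 2402 MHz. A sketch in JavaScript (real AFH uses a defined selection kernel; the avoidance step here is deliberately simplified to show the idea):

```javascript
// Bluetooth BR/EDR channel k (k = 0..78) sits at 2402 + k MHz.
function channelMHz(k) {
  return 2402 + k;
}

// Simplified flavour of adaptive frequency hopping: remap a hop index
// onto the set of channels not currently marked as noisy.
function nextChannel(hopIndex, badChannels) {
  var good = [];
  for (var i = 0; i < 79; i++) {
    if (badChannels.indexOf(i) === -1) good.push(i);
  }
  return good[hopIndex % good.length];
}
```

At 1600 hops/sec the radio visits a fresh channel every 625 microseconds, which is why a narrowband interferer (a microwave oven, say) only corrupts a small fraction of slots.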

Several issues beset early Bluetooth deployments:

  • A widespread lack of compatibility meant that Bluetooth devices from different vendors often failed to work with each other. This caused quite a few problems both in the hands-free mobile world and the personal computer peripheral world, and led to several quick updates.
  • In the PC world, user interfaces were poor, forcing ordinary users to become experts in finding their way around arcane set-up menus.
  • There were also a considerable number of issues in the area of security. There was much discussion of Bluejacking, where an individual could send unsolicited messages to nearby phones that were ‘discoverable’. However, people who turned off discoverability needed an extra step to receive legitimate data transfers, which complicated ‘legitimate’ use.

Early versions of the standard were fraught with problems, and the 1Mbit/s v1.0 release was rapidly updated to v1.1, which overcame many of the early problems. This was followed by v1.2 in 2003, which helped reduce co-channel interference from non-Bluetooth wireless technologies such as Wi-Fi.

In 2004, V2.0 + Enhanced Data Rate (EDR) was announced that offered higher data rates – up to 3Mbit/s – and reduced power consumption.

To bring us up to date, v2.1 + Enhanced Data Rate (EDR) was released in August 2007, offering a number of enhancements, the most significant of which seems to be an improved and easier-to-use mechanism for pairing devices.

The next version of Bluetooth is v3.0 which will be based on ultra-wideband (UWB) wireless technology. This is called high speed Bluetooth while there is another proposed variant, announced in June 2007, called Ultra Low Power Bluetooth (ULB).

During this spread of updates, most of the early problems that plagued Bluetooth have been addressed, but it cannot be assumed that Bluetooth’s market share is unassailable: there are a number of alternatives on the table, as Bluetooth is viewed as not meeting all the market’s needs – especially in the automotive market.

Low-power wireless

Ultra Low-power Bluetooth (ULB)

Before talking about ULB, we need to look at one of its antecedents, Wibree.

This must be one of the shortest-lived ‘standards’ of all time! Wibree was announced in October 2006 by Nokia, though Nokia did indicate that it would be willing to merge the activity with other standards efforts if that made sense.

“Nokia today introduced Wibree technology as an open industry initiative extending local connectivity to small devices… consuming only a fraction of the power compared to other such radio technologies, enabling smaller and less costly implementations and being easy to integrate with Bluetooth solutions.”

Nokia felt that there was no agreed open standard for ultra-low-power communications, so it decided to develop one. One of the features that consumes power in Bluetooth is its frequency hopping capability, so Wibree would not use it. Wibree is also more tuned to data applications, as it uses variable packet lengths, unlike the fixed packet length of Bluetooth. This looks similar to the major argument that took place when ATM (The demise of ATM) was first mooted: the voice community wanted short packets while the data community wanted long or variable packets – the industry ended up with a compromise that suited neither application.

More on Wibree can be found at wibree.com. According to this site:

“Wibree and Bluetooth technology are complementary technologies. Bluetooth technology is well-suited for streaming and data-intensive applications such as file transfer and Wibree is designed for applications where ultra low power consumption, small size and low cost are the critical requirements … such as watches and sports sensors”.

On June 12th 2007, Wibree was merged into the Bluetooth SIG, and the webcast of the event can be seen here. This will result in Wibree becoming part of the Bluetooth specification as an ultra-low-power extension of Bluetooth known as ULB.

ULB is intended to complement the existing Bluetooth standard by incorporating Wibree’s original target of reducing power consumption – it aims to consume only a fraction of the power that current Bluetooth devices consume. ULB will be designed to operate in a standalone mode or in a dual mode as a bolt-on to Bluetooth. ULB will reuse existing Bluetooth antennas and needs just a small amount of additional logic when operating in dual mode with standard Bluetooth, so it should not add too much to costs.

When announced, the Bluetooth SIG said that ULB was aimed at wireless-enabling small personal devices such as sports sensors (heart rate monitors), healthcare monitors (blood pressure monitors), watches (remote control of phones or MP3 players) and automotive devices (tyre pressure monitors).

Zigbee

The Zigbee standard is managed by the Zigbee Alliance and was developed by the IEEE as standard 802.15.4, which was ratified in 2004.

According to the Alliance site:

“ZigBee was created to address the market need for a cost-effective, standards-based wireless networking solution that supports low data-rates, low-power consumption, security, and reliability.

ZigBee is the only standards-based technology that addresses the unique needs of most remote monitoring and control and sensory network applications.”

This puts the Bluetooth ULB standard in competition with Zigbee, as ULB aims to be cheaper and simpler to implement than Bluetooth itself. In a similar vein to the ULB announcements, Zigbee is said to use about 10% of the software and power required to run a Bluetooth node.

A good overview can be found here – ZigBee Alliance Tutorial – which talks about the same applications as outlined in the joint Wibree / Bluetooth ULB announcement above. Zigbee’s characteristics are:

  • Low power compared to Bluetooth
  • High resilience, as it will operate in a much noisier environment than Bluetooth or Wi-Fi
  • Full mesh working between nodes
  • 250kbit/s data rate
  • Up to 65,536 nodes.

The alliance says this makes Zigbee ideal for both home automation and industrial applications.

It’s interesting to see that one of Zigbee’s standards competitors has posted an article entitled New Tests Cast Doubts on ZigBee. All’s fair in love and war, I guess!

So there we have it. It looks like Bluetooth ULB is being defined to compete with Zigbee.


High-speed wireless

High Speed Bluetooth 3.0

There doesn’t seem to be much information available on the proposed Bluetooth version 3.0. However, on the WiMedia Alliance site I found this statement by Michael Foley, Executive Director of the Bluetooth SIG. WiMedia is the organisation behind the Ultra Wide-band (UWB) wireless standards.

“Having considered the UWB technology options, the decision ultimately came down to what our members want, which is to leverage their current investments in both UWB and Bluetooth technologies and meet the high-speed demands of their customers. By working closely with the WiMedia Alliance to create the next version of Bluetooth technology, we will enable our members to do just that.”

According to a May 2007 presentation entitled High-Speed Bluetooth on the WiMedia site, the Bluetooth SIG will reference the WiMedia Alliance [UWB] specification, and the solution will be branded with Bluetooth trademarks. The solution will be backwards compatible with the current 2.0 Bluetooth standard.

It also talks about a combined Bluetooth/UWB stack:

  • With high data rate mode devices containing two radios initially
  • Over time, the radios will become more tightly integrated sharing components

The specification will be completed in Q4 2007, with first silicon prototyping complete in Q3 2008. I have to say that this approach does not look either elegant or low cost to me. However, time will tell.

That completes the Bluetooth camp of wireless technologies. Let’s look at some others.


Ultra-wide Bandwidth (UWB)

As the Bluetooth SIG has adopted UWB as the base of Bluetooth 3.0, what actually is UWB? A good UWB overview presentation can be found here. Essentially, UWB is a wireless protocol that can deliver high bandwidth over short distances.

Its characteristics are:

  • UWB uses spread spectrum techniques over a very wide bandwidth in the 3.1 to 10GHz spectrum in the US and 6.0 to 8.5GHz in Europe
  • It uses very low power so that it can ‘co-exist’ with other services that use the same spectrum
  • It aims to deliver 480Mbit/s at distances of several metres

The following diagram from the presentation describes it well:

In theory, there should never be an instance where UWB interferes with an existing licensed service. In some ways, this has similarities to BPL (The curse of BPL), though it should not be so profound in its effects. To avoid interference it uses Detect and Avoid (DAA) technology, which I guess is self-describing without going into too much detail here.

One company that is making UWB chips is Artimi, based in Cambridge, UK.

Wireless USB (WUSB)

In the same way that the Bluetooth SIG has adopted UWB, the USB Implementers Forum has adopted WiMedia’s UWB specification as the basis of Wireless USB. According to Jeff Ravencraft, President and Chairman, USB-IF and Technology Strategist, Intel:

“Certified Wireless USB from the USB-IF, built on WiMedia’s UWB platform, is designed to usher in today’s more than 2 billion wired USB devices into the area of wireless connectivity while providing a robust wireless solution for future implementations. The WiMedia Radio Platform meets our objective of using industry standards to ensure coexistence with other WiMedia UWB connectivity protocols.”

A presentation on Wireless USB can be downloaded here.

Wireless USB will deliver around the same bandwidth as Bluetooth 3.0 – 480Mbit/s at 3 metres – because it is based on the same technology, and it will be built into Microsoft Vista™.

One is bound to ask what the difference is between Wireless USB and Bluetooth 3.0, as they are going to be based on the same standard. Well, one answer is that Wireless USB products are shipping today, as seen in the Belkin Wireless USB Adapter shown on the right.

A real benefit of both standards adopting UWB is that both will use the same underlying radio. Manufacturers can choose whichever standard they want, and there is no need to change hardware designs. This can only help both standards’ adoption.

However, because of the wide spectrum required to run UWB – multiple GHz – different spectrum ranges in each region are being allocated. This is a very big problem as it means that radios in each country or region will need to be different to accommodate the disparate regulatory requirements.

In the same way that Bluetooth ULB will compete with Zigbee (an available technology), Bluetooth 3.0 will compete with Wireless USB (also an available technology).

Round up

So there you have it – the relationships between Bluetooth 2.0, Bluetooth 3.0, Wibree, Bluetooth ULB, Zigbee, high-speed Bluetooth, UWB and Wireless USB. So things are clear now, right?

So what about Wi-Fi’s big brother, WiMAX? And don’t let us forget HSDPA (WAP, GPRS, HSDPA on the move!), the 3G answer to broadband services. At least these can be put in a category of wide-area wireless services to separate them from near-distance wireless technologies. I have to say I find all these standards very confusing, and this makes any decision that relies on a bet about which technology will win out in the long run exceedingly risky. At least Bluetooth 3.0 and Wireless USB use the same radio!

At an industry conference I attended this morning, a speaker talked about an “arms war” between telcos and technology vendors. If you add standards bodies to this mix, I really do wonder where we consumers are placed in their priorities. Can you see PC manufacturers building all these standards onto their machines?

I could also write about WiMAX, Near Field Communications, Z-Wave and RFID, but I think that is better left for another day!


EBay paid too much for Skype

October 2, 2007

I don’t normally post news, but I couldn’t resist posting this as it is so close to my heart. Ever since the deal was done, everyone has been asking whether Skype was worth what eBay paid.

The article appeared in the London Evening Standard today.

ONLINE auctioneer eBay today admitted it had paid too much for internet telephone service Skype in 2005.

EBay, which forked out $2.6 billion (£1.3 billion), will now take a $1.4 billion charge on the company as it fails to convert users into revenue.

Skype’s chief executive Niklas Zennström, one of Skype’s founders, will step down, but the company denies he is walking the plank.

EBay will pay some investors $530 million to settle future obligations under the disastrous Skype deal.

In a desperate bid to get the deal over the line in 2005, eBay promised an extra $1.7 billion to Skype investors if the unit met certain targets, including number of users.

Now it is offering those shareholders $530 million as “an early, one-time payout”. The parent company will write down $900 million in the value of Skype.

Since eBay took over, Skype’s membership accounts have risen past 220 million, but it earned just $90 million during the second quarter of 2007, far below projections.

I wonder if this will cool some of the outrageous values being put on some of the social network services?

