The new network dogma: Has the wheel turned full circle?

August 26, 2008

“An authoritative principle, belief, or statement of ideas or opinion, especially one considered to be absolutely true”

When innovators proposed Internet Protocol (IP) as the universal protocol for carriers in the mid 90s, they met with furious resistance from the traditional telecommunications community. This post asks whether the wheel has now turned full circle with new innovative approaches often receiving the same reception.

Like many others, I have found the telecommunications industry so very interesting and stimulating over the last decade. There have been so many profound changes that it is hard to identify with the industry that existed prior to the new religion of IP that took hold in the late 90s. In those balmy days the industry was commercially and technically controlled by the robust world standards of the Public Switched Telephone Network (PSTN).

In some ways it was a gentleman’s industry where incumbent monopoly carriers ruled their own lands and had detailed inter-working agreements with other telcos to share the end-to-end revenue generated by each and every telephone call. To enable these agreements to work, the International Telecommunication Union (ITU) in Geneva spent decades defining the technical and commercial standards that greased the wheels. Life was relatively simple as there was only one standards body and one set of rules to abide by. The ITU is far from dead of course and remains very active in standards work to this day, while its European counterparts (CEPT and later ETSI) went on to develop the highly successful GSM standard for mobile telephony.

In those pre-IP days, the industry was believed to be at its zenith, with high revenues and similarly high profits, and every company had its place in the universe. Technology had not significantly moved on for decades (though this does an injustice to the development of ATM and SDH/SONET) and there was quite a degree of complacency driven by a monopolistic mentality. Moreover, it was very much a closed industry in that individuals chose to spend their entire careers in telecommunications from a young age with few outsiders migrating into it. Certainly few individuals with an information technology background joined telcos as there was a significant mismatch in technology, skills and needs. It was not until the mid 90s, when the industry started to use computers by adopting Advanced Intelligent Networks (AIN) and Operations Support Systems (OSS), that computer-literate IT engineers and programmers saw new job opportunities and jumped aboard.

In many ways the industry was quite insular and had its own strong world view of where it was going. As someone once said, “the industry drank its own bathwater” and often chose to blinker out opposing views and changing reality. It is relatively easy to see how this came about with hindsight. How could an industry that was so insular embrace disruptive technology innovation with open arms? The management dogma was all about “We understand our business, our standards and our relationships. We are in complete control and things won’t change.”

Strong dogma dominated and was never more on show than in the debate about the adoption of Asynchronous Transfer Mode (ATM) standards that were needed to upgrade the industry’s switching networks. If ATM had been developed a decade earlier there would never have been an issue, but unfortunately the timing could not have been worse as it coincided with the major uptake of IP in enterprises. When I first wrote about ATM back in 1993, IP was pretty much an unknown protocol in Europe (The demise of ATM). ATM and the telco industry lost that battle and IP has never looked back.

In reality it was not so much a battle as all-out war. It was the telecommunications industry eyeball-to-eyeball with the IT industry. The old “we know best” dogma did not triumph and the abrupt change in industry direction led to severe trauma in all sections of the industry. Many old-style telecommunications equipment vendors, who had focused on ATM with gusto, failed to adapt, with many either writing off billions of dollars or being sold at knock-down valuations. Of course, many companies made a killing. Inside telcos, commercial and engineering managers who had spent decades at the top of their profession found themselves floundering, and over a fifteen-year period a significant proportion of that generation of management ended up leaving the industry.

The IP bandwagon had started rolling and its unstoppable momentum has relentlessly driven the industry through to the current time. Interestingly, as I have covered in previous posts such as MPLS and the limitations of the Internet, not all the pre-IP technologies were dumped. This was particularly so with fundamental transmission-related network technologies such as SDH / SONET (SDH, the great survivor). These technologies were 100% defined within the telecommunications world and provided capabilities that were wholly lacking in IP. IP may have been perfect for enterprises, but many capabilities were missing that were required if it was to be used as the bedrock protocol in the telecommunications industry. Such things as:

  • Unlike telecommunications protocols, IP was non-deterministic – and the IP community was proud of it. Packets would always find their way to the required destination even if the desired path failed. In the IP world this was seen as a positive feature. Undoubtedly it was, but it also meant that it was not possible to predict the time it would take for a packet to transit a network. Even worse, successive packets in a stream could arrive at the destination via different paths and hence out of order. This was acceptable for e-mail traffic but a killer for real-time services like voice.
  • Telecommunications networks required high reliability and resilience so that in the event of any failure, automatic switchover to an alternate route would occur within tens of milliseconds (50 ms was the usual protection-switching target) so that even live telephone calls were not interrupted. In this situation IP would lackadaisically find another path to take and packets would eventually find their way to their destination (well, maybe that is a bit of an overstatement, but it does provide a good image of how IP worked!).
  • Real-time services require a very high Quality of Service (QoS) in that latency, jitter and packet loss need to be kept to an absolute minimum. This was, and is, a mandatory requirement for delivery of demanding voice services. IP in those days did not have the control signalling mechanisms to ensure this.
  • If PSTN voice networks had one dominant characteristic, it was reliability. Telephone networks just could not go down. They were well engineered and extensively monitored, so if any fault occurred, comprehensive network management systems flagged it very quickly to enable operational staff to correct it or provide a workaround. IP networks just didn’t have this level of operational management capability.

These gaps in capabilities in the new IP-for-everything vision needed to be corrected pretty quickly, so a plethora of standards development was initiated through the IETF that remains in full flow to this day. I can still remember my amazement in the mid 1990s when I came across a company that had come up with the truly innovative idea of combining the deterministic ability of ATM with an IP router, bringing together the best of the old with the new, still under-powered, IP protocol (The phenomenon of Ipsilon). This was followed by Cisco’s and the IETF’s development of MPLS and all its progeny protocols (The rise and maturity of MPLS and GMPLS and common control).
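
For readers without an MPLS background, the core idea is simple enough to show in a few lines of toy code: conventional IP forwarding does a longest-prefix match at every hop, whereas label switching does an exact-match label swap along a path set up in advance. The sketch below is purely illustrative – the prefixes, labels and interface names are invented and it glosses over everything else a real router does.

```python
# Toy contrast (not any vendor's implementation) between hop-by-hop IP
# forwarding (longest-prefix match) and label switching (exact-match swap
# along a pre-established path). All prefixes, labels and interfaces invented.
import ipaddress

# Conventional IP forwarding: longest-prefix match against a routing table.
ip_fib = {
    ipaddress.ip_network("10.0.0.0/8"): "if0",
    ipaddress.ip_network("10.1.0.0/16"): "if1",   # more specific route wins
}

def ip_lookup(dst):
    addr = ipaddress.ip_address(dst)
    matches = [net for net in ip_fib if addr in net]
    return ip_fib[max(matches, key=lambda net: net.prefixlen)]

# Label switching: exact match on (incoming interface, label), swap and forward.
lfib = {
    ("if0", 17): ("if2", 42),   # (in-if, in-label) -> (out-if, out-label)
    ("if1", 18): ("if2", 43),
}

def label_switch(in_if, in_label):
    return lfib[(in_if, in_label)]

print(ip_lookup("10.1.2.3"))    # longest match wins -> if1
print(label_switch("if0", 17))  # simple label swap  -> ('if2', 42)
```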

Let’s be clear, without these enhancements to basic IP, all the benefits the telecommunications world gained from focusing on IP would not have been realised. The industry should be breathing a huge sigh of relief, as many of the required enhancements were not developed until after the wholesale industry adoption of IP. If IP itself had not been sufficiently adaptable, it could be conjectured that the industry would have driven into one of the biggest dead ends imaginable and all the ‘Bellheads’ would have been yelling “I told you so!”.

Is this the end of the story?

So, that’s it then, it’s all done. Every carrier of every description, incumbent, alternate, global, regional, mobile, and virtual has adopted IP / MPLS and everything is hunky-dory. We have the perfect set of network standards and everything works fine. The industry has a clear strategy to transport all services over IP and the Next Generation Network (NGN) architecture will last for several decades.

This may very well turn out to be the case and certainly IP / MPLS will be the mainstream technology set for a long time to come, and I still believe that this was one of the best decisions the industry took in recent times. However, I cannot help asking myself whether we have not gone back to many of the same closed industry attitudes that prevailed prior to the all-pervasive adoption of IP.

It seems to me that it is now not the ‘done thing’ to propose alternative network approaches or enhancements that do not exactly coincide with the now-established IP way of doing things, for fear of being ‘flamed’. For me, the key issue driving network architectures should be simplicity, and nobody could use the term ‘simple’ when describing today’s IP carrier networks. Simplicity means less opportunity for service failure and simplicity means lower-cost operating regimes. In these days of ruthless management cost-cutting, any innovation that promises to simplify a network and thus reduce cost must have merit and should justify extensive evaluation – even if your favourite vendor disagrees. To put it simply, simplicity cannot come from deploying more and more complex protocols that micro-manage a network’s traffic.

Interestingly, in spite of the complete domination of public network cores by MPLS, there is still one major area where the use of MPLS is being actively questioned – edge and/or metro networks. There is currently quite a vibrant discussion taking place concerning the over-complexity of MPLS for use in metro networks and the possible benefits of using IP over Ethernet (Ethernet goes carrier grade with PBT / PBB-TE?). More on this later.

We should also not forget that telcos have never dropped other aspects of the pre-IP world. For example, the vast majority of telcos who own physical infrastructure still use that leading denizen of the pre-IP world, Synchronous Digital Hierarchy (SDH or SONET) (SDH, the great survivor). This friendly dinosaur of a technology still holds sway at the layer-1 network level even though most signalling and connectivity technologies that sit upon it have been brushed aside by the IP family of standards. SDH’s partner in crime, ATM, was absorbed by IP through the creation of standards that replicated its capabilities in MPLS (deterministic routing) and MPLS-TE (fast rerouting). The absorption of SDH into IP was not such a great success as many of the capabilities of SDH could not effectively be replaced by layer-3 capabilities (though not for the want of trying!).

SDH is based on time division multiplexing (TDM), the pre-packet method of sharing a defined amount of bandwidth between a number of services running over an individual wavelength on a fibre optic cable. The real benefit of this multiplexing methodology is that it has proved to be ultra-reliable and offers the very highest level of Quality of Service available. SDH also has the in-built ability par excellence to provide restoration of an inter-city optical cable in the case of major failure. One of SDH’s limitations, however, is that it only deals in large, fixed increments of bandwidth, so smaller streams of traffic more appropriate to the needs of individuals and enterprises cannot be managed through SDH alone. This capability was provided by ATM and is now provided by MPLS.
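
To give a feel for the granularity point, SDH line rates come in fixed multiples of the 155.52 Mbit/s STM-1 rate, so the steps between levels are large. A minimal sketch of the arithmetic:

```python
# SDH line rates: each STM-N level runs at N x 155.52 Mbit/s. The coarse jumps
# between levels illustrate why SDH alone was a poor fit for managing the
# smaller traffic streams that ATM, and later MPLS, took care of.
STM1_MBPS = 155.52

for n in (1, 4, 16, 64):
    print(f"STM-{n:<2}: {n * STM1_MBPS:9.2f} Mbit/s")
```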

Would a moment of reflection be beneficial?

The heresy that keeps popping up in my head when I think about IP and all of its progeny protocols is that the telecommunications industry has spent fifteen years developing a highly complex and inter-dependent set of technical standards that were only needed to replace what was a ‘simple’ standard doing its job effectively at a lower layer in the network. Indeed, pre-MPLS, many of the global ISPs used ATM to provide deterministic management of their global IP networks.

Has the industry now created a highly over-engineered and over-complex reference architecture? Has a whole new generation of staff been so marinaded for a decade in deep IP knowledge, training and experience that it is hard for an individual to question technical strategy? Has the wheel turned full circle?

In my post Traffic Engineering, capacity planning and MPLS-TE, I wrote about some of the challenges facing the industry and the carriers’ need to undertake fine-grain traffic engineering to ensure that individual service streams are provided with appropriate QoS. As consumers start to use the Internet more and more for real-time isochronous services such as VoIP and video streaming, there is a major architectural concern about how this should be implemented. Do carriers really want to continue to deploy an ever increasing number of protocols that add to the complexity of live networks and hence increase risk?

It is surprising just how many carriers use only very light traffic engineering and simply rely on over-provisioning of bandwidth at a wavelength level. This may be considered expensive (but is it, if they own the infrastructure?) and architects may worry about how long they will be able to continue with this straightforward approach, but there does seem to be a real reticence to introduce fine-grained traffic management. I have been told several times that this is because they do not trust some of the new protocols and it would be too risky to implement them. It is common industry knowledge that a router’s operating system contains many features that are never enabled, and this is as true today as it was in the 90s.
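
As an aside, the appeal of over-provisioning is easy to see with some back-of-the-envelope arithmetic. The sketch below uses entirely invented numbers for link capacity, peak traffic, growth and the utilisation ceiling a carrier might set itself; the point is only that the sums are trivial compared with running fine-grained traffic engineering.

```python
# Hypothetical capacity-planning arithmetic behind "just over-provision":
# keep busy-hour utilisation below a policy ceiling and work out when the
# next wavelength must be lit. All figures are invented for illustration.
import math

link_capacity_gbps = 10.0     # one lit wavelength (assumed)
peak_traffic_gbps = 4.2       # measured busy-hour peak (assumed)
annual_growth = 0.45          # 45% traffic growth per year (assumed)
utilisation_ceiling = 0.60    # headroom policy: never exceed 60% at peak

headroom_gbps = link_capacity_gbps * utilisation_ceiling
years_to_upgrade = math.log(headroom_gbps / peak_traffic_gbps) / math.log(1 + annual_growth)

print(f"Current peak utilisation: {peak_traffic_gbps / link_capacity_gbps:.0%}")
print(f"Next wavelength needed in roughly {years_to_upgrade:.1f} years")
```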

It is clear that management of fine-grain traffic QoS is one of the top issues to be faced in coming years. However, I believe that many carriers have not even adopted the simplest of traffic engineering standards in the form of MPLS-TE that starts to address the issue. Is this because many see that adopting these standards could create a significant risk to their business or is it simply fear, uncertainty and doubt (FUD)?

Are these some of the questions carriers should be asking themselves?

Have management goals moved on since the creation of the early MPLS standards?

When first created, MPLS was clearly focused on providing predictable, deterministic forwarding at layer-3 so that the use of ATM switching could be dropped to reduce costs. This was clearly a very successful strategy, as MPLS now dominates the core of public networks. This idea was very much in line with David Isenberg’s ideas articulated in The Rise of the Stupid Network in 1997, which we were all so familiar with at the time. However, ambitions have moved on, as they do, and the IP vision was considerably expanded. The new ambition was to create a universal network infrastructure that could provide any service using any protocol that any customer was likely to need or buy. This was called an NGN.

However, is that still a good ambition to have? The focus these days is on aggressive cost reduction and it makes sense to ask whether an NGN approach could ever actually reduce costs compared to what it would replace. For example, there are many carriers today who wish to exclusively focus on delivering layer-2 services. For these carriers, does it make sense to deliver these services across a layer 3 based network? Maybe not.

Are networks so ‘on the edge’ that they have to be managed every second of the day?

PSTN networks that pre-date IP were fundamentally designed to be reliable and resilient and pretty much ran without intervention once up and running. They could be trusted and were predictable in performance unless a major outside event occurred such as a spade cutting a cable.

IP networks, whether they be enterprise or carrier, have always had a well-earned image of instability and of going awry if left alone for a few hours. This has much to do with the nature of IP and the challenge of managing unpredicted traffic bursts. Even today, there are numerous times when a global IP network goes down due to an unpredicted event creating knock-on consequences. A workable analogy would be that operating an IP network is similar to a parent having to control an errant child suffering from Attention Deficit Disorder.

Much of this has probably been brought about by the unpredictable nature of routing protocols selecting forwarding paths. These protocols have been enhanced over the years with so many bells and whistles that a carrier’s perception of the best choice of data path across the network will probably not be the same as the one selected by the router itself.

Do operational and planning engineers often just want to “leave things as they are” because it’s working? Better the devil you know?

When a large IP network is running, there is a strong tendency to want to leave things well alone. Is this because there are so many inter-dependent functions in operation at any one time that it’s beyond an individual to understand them? Is it because when things go wrong it takes such an effort to restore service, and it’s often impossible to isolate the root cause if it is not down to simple hardware failure?

Is risk minimisation actually the biggest deciding factor when deciding what technologies to adopt?

Most operational engineers running a live network want to keep things as simple as possible. They have to, because their job and sleep are on the line every day. Achieving this often means resisting the use of untried protocols (such as MPLS-TE) and replacing fine-grained traffic engineering with the much simpler strategy of over-provisioning the network (telcos see this as a no-brainer because they already own the fibre in the ground and it is relatively easy to light an additional dark wavelength).

At the end of the day, minimising commercial risk is right at the top of everyone’s agenda, though it usually sits just below operational cost reduction.

Compared to the old TDM networks they replace, are IP-based public networks getting too complex to manage when considering the ever-increasing need for fine-grain service management at the edge of the network?

The spider’s web of protocols that need to perform flawlessly in unison to provide a good user experience is undoubtedly getting more and more complex as time goes by. There is little effort to simplify things and there is a view that it is all becoming over-engineered. Even if a new standard has been ratified and is recommended for use, this does not mean it will be implemented in live networks on a wide scale. The protocol that heads the list of under-exploited protocols is IPv6 (IPv6 to the rescue – eh?).

There is significant on-going standards development activity in the space of path provisioning automation (Path Computation Element (PCE): IETF’s hidden jewel) and of true multilayer network management. This would include seamless control of layer-3 (IP), layer-2.5 (MPLS) and layer-1 (SDH) networks (GMPLS and common control). The big question is (at the risk of being called a Luddite): would a carrier in the near future risk deploying such complexity, which could bring down all layers of a network at once? Would the benefits outweigh the risk?

Are IP-based public networks more costly to run than legacy data networks such as Frame Relay?

This is a question I would really like to get an objective answer to as my current views are mostly based on empirical and anecdotal data. If anyone has access to definitive research, please contact me! I suspect, and I am comfortable with the opinion until proved wrong, that this is the case and could be due to the following factors:

  • There need to be more operations and support staff permanently on duty than with the old TDM voice systems, leading to higher operational costs.
  • Operational staff require a higher level of technical skill and training because of the complex nature of IP. CCIEs are expensive!
  • Equipment is expensive as the market is dominated by only a few suppliers and there are often proprietary aspects of new protocols that will only run on a particular vendor’s equipment thus creating effective supplier lock-in. The router clone market is alive and healthy!

It should be remembered that the most important reason given to justify the convergence on IP was the cost saving resulting from collapsing layers. This has not really taken place, except for the absorption of ATM into MPLS. Today, each layer is still planned, managed and monitored by separate systems. The principal goal of a Next Generation Network (NGN) architecture is still to achieve this magic result of reduced costs. Most carriers are still sitting on the fence waiting for evidence of it.

Is there a degradation in QoS using IP networks?

This has always been a thorny question to answer and a ‘Google’ to find the answer does not seem to work. Of course, any answer lies in the eyes of the beholder as there is no clear definition of what the term QoS encompasses. In general, the term can be used at two different levels in relation to a network’s performance: micro-QoS and macro-QoS.

Micro-QoS is concerned with individual packet issues such as the order in which packets are received, the number of missing packets, latency and jitter. An excessive amount of any of these will severely degrade a real-time service such as VoIP or video streaming. Macro-QoS is more concerned with network-wide issues such as network reliability and resilience and other areas that could affect the overall performance and operational efficiency of a network.
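
For the curious, these micro-QoS measures are easy to make concrete. The sketch below computes packet loss, reordering and a smoothed jitter estimate (of the kind RTP’s RFC 3550 uses) over a small, invented list of (sequence number, arrival time) samples, assuming a nominal 20 ms sending interval.

```python
# Sketch of the micro-QoS measures mentioned above, computed over a stream of
# (sequence_number, arrival_time_ms) samples. The sample data is invented; the
# jitter estimator follows the smoothed form used by RTP (RFC 3550), fed here
# with transit times derived from an assumed fixed 20 ms sending interval.
samples = [(1, 0.0), (2, 21.0), (4, 39.5), (3, 45.0), (6, 80.0)]  # seq 5 lost; 3 and 4 swapped

send_interval_ms = 20.0
expected = max(seq for seq, _ in samples) - min(seq for seq, _ in samples) + 1
loss = expected - len(samples)
reordered = sum(1 for (a, _), (b, _) in zip(samples, samples[1:]) if b < a)

jitter = 0.0
prev_transit = None
for seq, arrival in samples:
    transit = arrival - seq * send_interval_ms   # arrival minus nominal send time
    if prev_transit is not None:
        jitter += (abs(transit - prev_transit) - jitter) / 16.0
    prev_transit = transit

print(f"lost packets: {loss}, reordered pairs: {reordered}, jitter ~ {jitter:.2f} ms")
```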

My perspective is that on a correctly managed IP / MPLS network (with all the hierarchy and management that requires), micro-QoS degradation is minimal and acceptable and certainly no worse than IP over SDH. Indeed, many carriers deliver traditional private wire services such as E1 or T1 connectivity over an MPLS network using pseudowire emulation protocols (with Virtual Private LAN Service (VPLS) playing the equivalent role for Ethernet LAN services). However, this does significantly raise the bar in respect of the level of IP network design and network management quality required.
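
The bar-raising is partly down to simple packetisation arithmetic: an E1 is a continuous 2.048 Mbit/s stream delivering a 32-byte frame every 125 microseconds, and however those frames are bundled into packets there is a trade-off between bandwidth overhead, packet rate and packetisation delay. The per-packet header overhead below is a nominal assumption for illustration, not a figure taken from any particular pseudowire standard.

```python
# Rough arithmetic for emulating an E1 (2.048 Mbit/s, 8000 x 32-byte frames/s)
# over a packet network. The per-packet header overhead is a nominal assumption
# (Ethernet + FCS + two MPLS labels + a control word), not a measured figure.
E1_FRAME_BYTES = 32                  # 32 timeslots x 8 bits per 125 us frame
FRAMES_PER_SEC = 8000
HEADER_BYTES = 14 + 4 + 4 + 4 + 4    # assumed encapsulation overhead

for frames_per_packet in (1, 4, 8):
    payload = frames_per_packet * E1_FRAME_BYTES
    pps = FRAMES_PER_SEC / frames_per_packet
    wire_kbps = pps * (payload + HEADER_BYTES) * 8 / 1000
    delay_ms = frames_per_packet * 0.125
    print(f"{frames_per_packet} frame(s)/packet: {pps:6.0f} pps, "
          f"{wire_kbps:7.1f} kbit/s on the wire, {delay_ms:.3f} ms packetisation delay")
```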

The important issue is the possible degradation at the macro-QoS level, where I am comfortable with the view that with an IP / MPLS network there will always be a statistically higher risk of faults or problems, due to its complexity, than with a simpler IP-over-SDH system. There is a certain irony in that the macro-QoS performance of a network could be further degraded when additional protocols are deployed to improve micro-QoS performance.

Is there still opportunity for simplification?

In an MPLS dominated world, there is still significant opportunity for simplification and cost reduction.

Carrier Ethernet deployment

I have written several posts (Ethernet goes carrier grade with PBT / PBB-TE?) about carrier Ethernet standards and the benefits their adoption might bring to public networks – in particular, the promise of simplification. To a great extent this interesting technology is a prime example of where a new (well, newish) approach that actually does make quite a lot of sense comes up against the new MPLS-for-everything-and-everywhere dogma. It is not just a question of convincing service providers of the benefit but also of overcoming the almost overwhelming pressure brought on carrier management from MPLS vendors who have clear vested interests in what technologies their customers choose to use. This often one-sided debate definitely harks back to the early 90s no-way-IP culture. Religion is back with a vengeance.

Metro networks

Let me quote Light Reading from September 2007: “What once looked like a walkover in the metro network sector has turned into a pitched battle – to the surprise, but not the delight, of those who saw Multiprotocol Label Switching (MPLS) as the clear and obvious choice for metro transport.” MPLS has encountered several bumps in the road on its way to domination and it should always be legitimate to question whether any particular technology adoption is appropriate.

To quote the column further: “The carrier Ethernet camp contends that MPLS is too complex, too expensive, and too clunky for the metro environment.” Whether MPLS – or a ‘thin MPLS’ in the form of T-MPLS – will hold off the innovative PBB-TE / PBT intruder remains to be seen. At the end of the day, the technology that provides simplicity and reduced operational costs will win the day.

Think the unthinkable

As discussed above, the original ambition of MPLS has ballooned over the years. Originally solving the challenge of how to provide a deterministic and flexible forwarding methodology for layer-3 IP packets and replace ATM, it has achieved this objective exceptionally well. These days, however, it always seems to be assumed that some combination of Ethernet (PBB-TE) and/or MPLS-TE and maybe even GMPLS is the definitive, but highly complex, answer to creating that optimum, highly integrated NGN architecture that can be used to provide any service any customer might require.

Maybe it is worth considering a complementary approach that is highly focused on removing complexity. There is an interesting new field of innovation proposing that path forwarding ‘intelligence’ and path bandwidth management be moved from layer-3, layer-2.5 and layer-2 back into layer-1, where it rightly belongs. By adding additional capability to SDH, it is possible to reduce complexity in the layers above. In particular deployment scenarios this could have a number of major benefits, most of which result in significantly lower costs.

This raises an interesting point to ponder. While revenues still derive largely from traditional telecom-oriented voice services, the services and applications that are really beginning to dominate and consume most bandwidth are real-time interactive and streaming services such as IPTV, TV replays, video shorts, video conferencing, tele-presence, live event broadcasting, tele-medicine, remote monitoring and so on. It could be argued that all these point-to-point and broadcast services could be delivered with less cost and complexity using advanced SDH capabilities linked with Ethernet or IP / MPLS access. Is it worth thinking about bringing SDH back to the NGN strategic forefront, where it could deliver commercial and technical benefits?

To quote a colleague: “The datacom protocol stack of IP-over-Ethernet was designed for asynchronous file transfer, and Ethernet as a local area network packet-switching protocol, and these traditional datacom protocols do a fine job for those applications (i.e. for services that can tolerate uncertain delays, jitter and throughput, and/or limited-scope campus/LAN environments). IP-over-Ethernet was then assumed to become the basis protocol stack for NGNs in the early 2000s, due to the popularity of that basic datacom protocol stack for delivering the at-that-time prevailing services carried over Internet, which were mainly still file-transfer based non-real-time applications.”

SDH has really moved on since the days when it was only seen as a dumb transport layer. At least one company, Optimum Communications Services, offers an innovative vision whereby, instead of inter-node paths being static as is the case with the other NGN technologies discussed in this post, the network is able to dynamically determine the required inter-node bandwidth based on a fast real-time assessment of traffic demands between nodes.


So has the wheel turned full circle?

As most carriers’ architectural and commercial strategies are wholly focused on IP with the Yellow Brick Road ending with the sun rising over a fully converged NGN, how much real willingness is there to listen to possible alternate or complementary innovative ideas?

In many ways the telecommunications industry could be considered to have returned to the closed-shutter mentality that dominated before IP took over in the late 1990s – I hope that this is not the case. There is no doubt that choosing to deploy IP / MPLS was a good decision, but a decision to deploy some of the derivative QoS and TE protocols is far from clear-cut.

We need to keep our eyes and minds open, as innovation is alive and well and most often arises in small companies who are free to think the unthinkable. They might not always be right, but they may not be wrong either. Just cast your mind back to the high level of resistance encountered by IP in the 90s and let’s not repeat that mistake again. There is still much scope for innovation within the IP-based carrier network world and I would suspect this has everything to do with simplifying networks and not complicating them further.

Addendum #1: Optimum Communications Services – finally a way out of the zero-sum game?


The curse of BPL

August 16, 2007

I am hesitant to put pen to paper to write about Broadband over Power Lines (BPL) and Power Line Communications (PLC) – maybe this should be ‘Broadband over mains’ in the UK! – as I have no doubt that I am biased in my views and have been for a long time. This does not derive from in-depth experience of the technology but from the fact that I have been a radio amateur or ‘ham’ since my teenage years.

In the amateur radio world, BPL is seen as an ogre that could have a major impact on amateurs’ ability to continue their hobby due to interference from BPL trials or deployments. More on this later.

Today, the principal technology used to deliver broadband Internet access into homes is Asymmetric Digital Subscriber Line (ADSL) technology, delivered by local telephone companies or by ISPs co-locating equipment in their switching centres. As ADSL is delivered over the ubiquitous copper cables previously used to deliver only traditional telephony services, its rollout has experienced tremendous growth over the last decade throughout the world.

However, ADSL does have some inherent commercial and technical limitations. For example, the further away you are from your local telephone exchange or central office, the lower the bandwidth that can be delivered. This means that ADSL works best in high-population areas such as towns and their suburbs. Even in the UK, there are still country areas where ADSL is not available because BT believes it is uneconomic or technically challenging to provide the service. For many years BT ran trials using wireless (what we would probably call WiMAX these days) to test the economics of providing Internet service to remote locations or caravan parks.

As ADSL can only be offered by telecommunications companies, whether they be old telephony providers or newer ISPs, other utility providers wanted to get in on the act. Water companies installed fibre optic cables when they dug trenches, and canal and railway operating companies allowed telecommunications companies to run cables along their facilities.

We should also not forget our very own Energis (now Cable and Wireless) who started by providing wholesale backbone services by running cables along pylons. At one time nearly every electricity company had a telecommunications division.

This brings us neatly back to Broadband over Power Line technology. The logic that drove the development of BPL is quite straightforward to understand. Every home is connected to an electricity distribution network, so why should that not be used to deliver a broadband Internet service? This would mean that electricity companies could participate in the Internet revolution and create additional revenues to fill their coffers! Moreover, maybe BPL could be used to deliver broadband access to remote locations where ADSL cannot reach.

There is one thing about BPL that is clearly different from all the other technologies I have written about, and this may seem a little strange: there is no IETF or IEEE technical standard for BPL, although there are standards activities afoot. This makes deploying a BPL service a rather hit-or-miss affair.

Deployment is also challenging due to the fact that there is tremendous variation in electricity distribution networks throughout the world, making standardisation a tad difficult. For example, in the UK hundreds if not thousands of homes are connected to a local substation where the high transmission voltages are converted to the normal 240 volt house supply. Hence it should be possible to ‘inject’ the broadband service on the low-voltage side of the substation transformer and deliver service to many houses at the same time, which helps improve service economics.

In the USA the situation is quite different because of the distances involved. It is always more efficient to carry electricity at the highest voltage possible over long distances to reduce losses, so in the USA it is common practice to perform the transformation to 110 volts at the last possible opportunity by placing an individual transformer on a pole outside each home or small group of homes. This can wreck BPL service economics, as illustrated by the rough sums below. However, it has not stopped many service trials taking place.
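
A toy calculation, with every figure invented, shows why the number of homes behind each injection point matters so much:

```python
# Toy economics (all figures invented) showing why homes served per BPL
# injection point drives the business case: the head-end cost is shared
# across every subscriber behind the same transformer.
headend_cost = 5000.0    # assumed cost of one injection point / head-end
take_up_rate = 0.20      # assumed fraction of homes that subscribe

for region, homes_per_transformer in (("UK-style substation", 150),
                                      ("US-style pole transformer", 6)):
    subscribers = max(1, round(homes_per_transformer * take_up_rate))
    print(f"{region:<26}: head-end cost per subscriber ~ {headend_cost / subscribers:,.0f}")
```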

BPL technology

A BPL service can offer similar bandwidth capabilities to ADSL in that it supports a 256 kbit/s upstream and up to 2.7 Mbit/s downstream. It achieves this by encoding data using the medium-wave and shortwave spectrum from 1.6 to 30 MHz or higher. In-house modems connect back to the head-end located at the substation, where fibre or radio backhaul (as used in wide-area Wi-Fi services – see The Cloud hotspotting the planet) connects back to a central office. The modulated radio frequency carrier is injected into the local electricity distribution network using an isolation capacitor, and the transmitter can have a power of hundreds of watts.

BPL modems use several methods of modulation depending on the service bandwidth required:

  • GMSK (Gaussian minimum-shift keying) for bandwidths less than 1 Mbit/s
  • CDMA (Code division multiple access), as used in 3G mobile services, for greater than 1 Mbit/s, and
  • OFDM (Orthogonal frequency-division multiplexing) for bandwidths up to 45 Mbit/s

Most modern BPL deployments use OFDM, as higher bandwidths are required if the service operators are to compete with their local telephone companies’ ADSL services.

There are several organisations involved in standardisation efforts:

Consumer Electronics Powerline Communication Alliance (CEPCA): A PowerPoint introduction to the activities of the CEPCA can be found here.

Their mission and purpose is the:

  • Development of specifications enabling the coexistence
    • Between in-home PLC Systems
    • Between Access PLC Systems and in-home PLC Systems
  • Promotion of high speed PLC technologies in order to achieve world-wide adoption thereof.

Power Line Communications Forum (plcforum): A similar body to CEPCA with many equipment suppliers as members.

HomePlug Powerline Alliance (HPPA): This group focuses on home networking using home electricity wiring as the distribution network – as they say, “power outlets are almost everywhere someone might want to use a networked device at home.”

IEEE P1901: According to their scope description the P1901 project will “develop a standard for high speed (>100 Mbps at the physical layer) communication devices via alternating current electric power lines, so called Broadband over Power Line (BPL) devices. The standard will use transmission frequencies below 100 MHz.”

Powernet: The main project objective of Powernet is to develop and validate ‘plug and play’ Cognitive Broadband over Power Lines (CBPL) communications equipment. Powernet is a European Commission project.

Side effects

With other postings about communications technologies I guess I would go on to say that, although there is much work to be done, BPL is a complementary technology to ADSL and it has its place in the Internet marketplace. My commercial reservations are quite strong, however, in that it is difficult to see how BPL can effectively compete with the now ubiquitous ADSL utilised by every local telephone company on the planet. Maybe there are niche markets where BPL could work, and these would be geographical areas where ADSL cannot reach – yet.

However, as I indicated in my opening paragraph, there are other concerns about BPL that are not encountered with any of the other ways of providing Internet service to homes, whether they be delivered over wires such as ADSL or wirelessly such as Wi-Fi or WiMAX.

BPL has a dark side which I believe to be unacceptable: it could prevent other legitimate users of the shortwave radio frequency spectrum from pursuing their interests and hobbies without interference.

Interference is the issue which can be better understood by looking at the following video of a BPL service trial currently taking place in Australia.

BPL interference is causing problems in other countries as well, even the USA, where the American Radio Relay League (ARRL), the body that represents US radio amateurs, was forced into legal action in May 2007: ARRL Files Federal Appeals Court Brief in Petition for Review of BPL Rules

Also in May, the US Federal Communications Commission (FCC) called for a BPL manufacturer to show that it complies with its experimental licence following interference complaints – FCC Demands Ambient Demonstrate Compliance with BPL License Conditions

To quote the ARRL: “The Commission’s obsessive compulsion to avoid any bad news about BPL has clearly driven its multi-year inaction,” the League continued. “Had this been any other experimental authorization dealing with any technology other than BPL, the experimental authorization would have been terminated long ago.”

Many amateurs see BPL as the biggest threat to their hobby that they have ever seen.

So why should there be this level of interference from BPL?

It might be good to start answering this question by looking at ADSL, as this does not have any major interference issues despite its deployment in many millions of homes. ADSL is delivered into people’s homes via the copper telephone line. This cable is not a single copper wire, as it might have been in the very early days of telephony, but rather a twisted pair.

A twisted pair cable acts like a rather crude coaxial cable. It is balanced, in that the signal flows forward through one wire and returns through the other, so the equal and opposite signals cancel each other out and the cable does not radiate the signal it is carrying to the outside world. Twisted pair cables are not as low-loss as coaxial cables, but the loss is quite small over the length of cable usually used to connect a home to a telephone pole.

In general, ADSL has been immune from creating interference because of the use of twisted pair cables. Imagine the consumer furore that would occur if there were interference from ADSL to FM or TV services – there isn’t, because the cancellation works.

It’s interesting to remember that cable companies also use broadband RF encoding, but as services are delivered using high-quality coaxial cables or fibre there is generally no interference (The tale of DOCSIS and cable operators).

On the other hand, the electricity power lines that bring electrical power into houses are neither shielded nor twisted pairs. They are the standard three- or four-core cables that we are all familiar with from connecting our kettles to plugs, although of a heavier gauge.

BPL transmissions are spread over the shortwave spectrum with a head-end power of possibly hundreds of watts, and the lossy distribution cables effectively act as an antenna or aerial. The wideband BPL signal therefore radiates quite effectively over a wide area, causing the not inconsiderable interference seen in the video above.
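
A quick wavelength calculation shows why. Across the 1.6 to 30 MHz band the free-space wavelength runs from roughly 190 m down to 10 m, so a typical low-voltage distribution span is a sizeable fraction of a wavelength, or several wavelengths, long – exactly the condition under which an unbalanced, unshielded wire radiates well. The 100 m span length below is simply an assumed figure for illustration.

```python
# Wavelengths across the band BPL occupies. An unshielded, unbalanced
# distribution cable whose length is a sizeable fraction of the wavelength
# makes a reasonably effective antenna; the 100 m span is an assumed figure.
C = 299_792_458   # speed of light, m/s
span_m = 100      # assumed low-voltage distribution span length

for f_mhz in (1.6, 3.5, 7.0, 14.0, 28.0):
    wavelength_m = C / (f_mhz * 1e6)
    print(f"{f_mhz:5.1f} MHz: wavelength {wavelength_m:6.1f} m, "
          f"span = {span_m / wavelength_m:4.2f} wavelengths")
```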

Surely the regulatory bodies such as Ofcom or the FCC would not allow a service that significantly interfered with other spectrum users to go ahead – would they? That is not so easy to answer today as it would have been a decade ago, when anti-interference regulations were very strong. Nowadays, in this commercial world we live in, far more flexibility is given if there is a potential commercial benefit. For example, even in the UK the old guard bands (allocated unused spectrum between services to provide isolation) have been sold off for use in picocell GSM services, as discussed in GSM pico-cell’s moment of fame.

The level of interference from a service such as BPL would not – could not – have been tolerated a few years ago when everyone used the shortwave bands for entertainment. But in this modern ‘digital age’ shortwave seems an anachronism, and who really cares if it is not usable…

At least two groups of individuals do: radio amateurs and shortwave listeners. BPL vendors and service providers have attempted to suppress their criticisms of BPL with what can only be described as a sticking-plaster solution: putting filters on the BPL transmitter so that notches are inserted in the broadband spectrum to coincide with the amateur bands.

However, the general consensus among amateurs who have been involved in notching trials is that the notches do indeed reduce interference, but not by a sufficient amount for workable co-existence.

Another concern is that BPL is not just used for the provision of Internet access services; it is also possible to buy modems that provide in-house LAN capabilities in competition with Wi-Fi. This could be another worrying source of interference to shortwave services. Bearing in mind that there is no filtering in a mains or power socket, the use of a BPL modem in one house will radiate into all homes connected to the same substation.

Roundup

I really am unable to see any real benefit in this technology when compared to cable operator DOCSIS or telephone ADSL delivered Internet services, whose access infrastructure is designed for purpose. Just slapping a broadband transmitter on a local electricity distribution network is crude and is definitely NOT fit for purpose – even if filter notches are applied.

If the electricity industry redesigned their supply cables to be coaxial or twisted pair, which in practice is not really technically or commercially achievable, then the concept may work.

I doubt that BPL is viable in the long term and my view is that its use will fade with time. In the meantime, if I am asked for a financial contribution to fight BPL, I reckon I would dig deep into my pockets.

One example of an up-and-coming trial is TasTel in Hobart, Australia, a partnership between Aurora Energy and AAPT, who say they have a unique service. To quote their web site:

“Because BPL is brought to you by TasTel and eAurora, we can give you something nobody else can offer: fast Internet access and cheap broadband phone calls through a single service, on one bill which is sent to you electronically.”

Where have I heard this before – time to move away from Hobart?


Business plans are overrated

July 29, 2007

There is more than an element of obvious insight in Paul Kedrosky‘s recent post:

“Business plans are overrated. … Why?

… Because VCs are professional nit-pickers. Give them something to find fault with, and they’ll do it with abandon. I generally tell people to come to pitch meetings with less information rather than more. Sure, you’ll get pressed for more, but finesse it.

Presenting a full and detailed plan is, nine times out of ten, a path to a ‘No’ — or at least more time-consuming than having said less.”

Paul Kedrosky, in the wake of the VC financing of Twitter, which has no business plan, no business model and no profits.


Islands of communication or isolation?

March 23, 2007

One of the fundamental tenets of the communication industry is that you need 100% compatibility between devices and services if you want to communicate. This was clearly understood when the Public Switched Telephone Network (PSTN) was dominated by local monopolies in the form of incumbent telcos. Together with the ITU, they put considerable effort into standardising all the commercial and technical aspects of running a national voice telco.

For example, the commercial settlement standards enabled telcos to share the revenue from each and every call that made use of their fixed or wireless infrastructure no matter whether the call originated, terminated or transited their geography. Technical standards included everything from compression through to transmission standards such as Synchronous Digital Hierarchy (SDH) and the basis of European mobile telephony, GSM. The IETF’s standardisation of the Internet has brought a vast portion of the world’s population on line and transformed our personal and business lives.

However, standardisation in this new century is now often driven as much by commercial businesses and consortia, which often leads to competing solutions and standards slugging it out in the market place (e.g. PBB-TE and T-MPLS). I guess this is as it should be if you believe in free trade and enterprise. But for us mere individuals in this world of giants, these issues can cause real pain.

In particular, the current plethora of what I term islands of isolation means that we are often unable to communicate in the ways that we wish to. In the ideal world, as exemplified by the PSTN, you are able to talk to every person in the world who owns a phone as long as you know their number. In contrast, many, if not most, of the new media communications services we choose to use to interact with friends and colleagues are in effect closed communities that are unable to interconnect.

What are the causes of these so-called islands of isolation? Here are a few examples.

Communities: There are many Internet communities including free PC-to-PC VoIP services, instant messaging services, social or business networking services or even virtual worlds. Most of these focus on building up their own 100% isolated communities. Of course, if one achieves global domination, then that becomes the de facto standard by default. But, of course, that is the objective of every Internet social network start-up!

Enterprise software: Most purveyors of proprietary enterprise software thrive on developing products that are incompatible. Lotus Notes and Outlook email systems were but one example. This is often still the case today when vendors bolt advanced features onto the basic product that are not available to anyone not using that software – presence springs to mind. This creates vendor communities of users.

Private networks: Most enterprises are rightly concerned about security and build strong protective firewalls around their employees to protect themselves from malicious activities. This means that employees of that company have full access to their own services, but these are not available to anyone outside the firewall for use on an inter-company basis. Combine this with the deployment of the vendor-specific enterprise software described above and you create lots of isolated enterprise communities!

Fixed network operators: It’s a very competitive world out there and telcos just love offering value-added features and services that are only offered to their customer base. Free proprietary PC-PC calls come to mind and more recently, video telephones.

Mobile operators: A classic example with wireless operators was the unwillingness to provide open Internet access and only provide what was euphemistically called ‘walled garden’ services – which are effectively closed communities.

Service incompatibilities: A perfect example of this was MMS, the supposed upgrade to SMS. Although there was a multitude of issues behind the failure of MMS, the inability to send an MMS to a friend who used another mobile network was one of the principal ones. Although this was belatedly corrected, it came too late to help.

Closed garden mentality: This idea is alive and well amongst mobile operators striving to survive. They believe that only offering approved services to their users is in their best interests. Well, no it isn’t!

Equipment vendors: Whenever a standards body defines a basic standard, equipment vendors nearly always enhance the standard feature set with ‘rich’ extensions. Of course, anyone using an extension could not work with someone who was not! The word ‘rich’ covers a multiplicity of sins.

Competitive standards: Users groups who adopt different standards become isolated from each other – the consumer and music worlds are riven by such issues.

Privacy: This is seen as such an important issue these days that many companies will not provide phone numbers or even email addresses to a caller. If you don’t know who you want, they won’t tell you! A perfect definition of a closed community!

Proprietary development: In the absence of standards, companies will develop pre-standard technologies and slug it out in the market. Other companies couldn’t care less about standards and follow a proprietary path just because they can and have the monopolistic muscle to do so. I bet you can name one or two of those!

One takeaway from all this is that in the real world you cannot avoid islands of isolation: all of us have to use multiple services and technologies to interact with colleagues, and these islands will probably remain with us for the indefinite future in the competitive world we live in.

Your friends, family and work colleagues, by their own choice, geography and lifestyle, probably use a completely different set of services to yourself. You may use MSN, while colleagues use AOL or Yahoo Messenger. You may choose Skype but another colleague may use BT Softphone.

There are partial attempts at solving these issues within a subset of islands, but overall this remains a major conundrum that limits our ability to communicate at any time, any place and anywhere. The cynic in me says that if you hear about any product or initiative that relies on these islands of isolation disappearing in order to succeed, I would run a mile – no, ten miles! On the other hand, it could be seen as the land of opportunity?


webex + Cisco thoughts

March 19, 2007

I first read about the Cisco acquisition of Webex on Friday when a colleague sent me a post from SiliconValley.com – It’s more than we wanted to spend, but look how well it fits. It’s synchronicity in operation again, of course, because I mentioned webex in a posting about a new application sharing company: Would u like to collaborate with YuuGuu? There are many other postings about this deal with a variety of views – some more relevant than others – Techcrunch for example: Cisco Buys WebEx for $3.2 Billion

Although I am pretty familiar with the acquisition history of Cisco, I must admit that I was surprised at this opening of the chequebook, for several reasons.

Reason #1: I used webex quite a lot last year and really found it quite a challenge to use. My biggest area of concern was usability.

(a) When using webex there are several windows open on your desktop, making its use quite confusing. At least once I closed the wrong window, thus accidentally closing the conference. As I was just concluding a pitch I was more than unhappy, as it closed both the video and the audio components of the conference! I had broken my golden rule of not using separate audio bridging and application sharing services.

(b) When using webex’s conventional audio bridge, you have to open the conference beforehand using a webex web site page. If you fail to do so, the bridge cannot be opened, with everyone receiving an error message when they dial in. To correct this takes about five minutes. Even worse, you cannot use the audio bridge on a standalone basis without having access to a PC! Not good when travelling.

(c) The UI is over-complicated and challenging for users under the pressure of giving a presentation. Even the invite email that webex sends out is confusing – the one below is typical. Although the example is the one sent to the organiser, the ones sent to participants are little better.

Hello Chris Gare,
You have successfully scheduled the following meeting:
TOPIC: zzzz call
DATE: Wednesday, May 17, 2006
TIME: 10:15 am, Greenwich Standard Time (GMT -00:00, Casablanca ) .
MEETING NUMBER: 705 xxx xxx
PASSWORD: xxxx
HOST KEY: yyyy
TELECONFERENCE: Call-in toll-free number (US/Canada): 866-xxx-xxxx
Call-in number (US/Canada): 650-429-3300
Global call-in numbers: https://webex.com/xxx/globalcallin.php?serviceType=MC&ED=xxxx
1. Please click the following link to view, edit, or start your meeting.

https://xxx.webex.com/xxx/j.php?ED=87894897

Here’s what to do:
1. At the meeting’s starting time, either click the following link or copy and paste it into your Web browser:

https://xxx.webex.com/xxx/j.php?ED=xxxxx

2. Enter your name, your email address, and the meeting password (if required), and then click Join.
3. If the meeting includes a teleconference, follow the instructions that automatically appear on your screen.
That’s it! You’re in the web meeting!
WebEx will automatically setup Meeting Manager for Windows the first time you join a meeting. To save time, you can setup prior to the meeting by clicking this link:

https://xxx.webex.com/xxx/meetingcenter/mcsetup.php

For Help or Support:
Go to https://xxx.webex.com/xxx/mc, click Assistance, then Click Help or click Support.
………………..end copy here………………..
For Help or Support:
Go to https://xxx.webex.com/xxx/mc, click Assistance, then Click Help or click Support.
To add this meeting to your calendar program (for example Microsoft Outlook), click this link:

https://xxx.webex.com/xxx/j.php?ED=87894897&UID=480831657&ICS=MS

To check for compatibility of rich media players for Universal Communications Format (UCF), click the following link:

https://xxx.webex.com/xxx/systemdiagnosis.php

http://www.webex.com

We’ve got to start meeting like this(TM)

Giving presentations on-line is a stressful process at the best of times and the application sharing tool needs to be so simple to use that you can just concentrate on the presentation, not the medium. webex, in my opinion, fails on this criterion. There are so many new and easier-to-use conferencing services around that I was surprised that webex provided such a poor usability experience.

Reason #2: In another posting – Why in the world would Cisco buy WebEx? – Steve Borsch talks about the inherent value of webex’s proprietary MediaTone network. This could be called a Content Distribution Network (CDN), such as those operated by Akamai, Mirror Image or Digital Island (bought by Cable and Wireless a few years ago). You can see a flash overview of MediaTone on their web site.

The flash talks about this as an “Internet overlay network” that provides better performance than the unpredictable Internet, but as an individual user of webex I was still forced to access webex services via the Internet as this was unavoidable. I assume that MediaTone is a backbone network interconnecting webex’s data centres. It seems strange to me that an applications company like webex felt the need to spend several $bn on building its own network when perfectly adequate networks could be bought in from the likes of Level3 quite easily and at low cost. In the flash presentation, webex says that it started to build the network a decade ago, and it could have been seen as a value-added differentiator at that time. More likely, it was actually needed for the company’s applications to work adequately, as the Internet was so poor from a performance perspective in those days.

I have no profound insights into Cisco’s M&A strategy, but this particular acquisition brings Cisco into potential competition with two of its customer sectors at a stroke – on-line application vendors and the carrier community. This does strike me as a little perverse.


#1 My 1993 predictions for 2003 – hah!

February 27, 2007

#1 Traditional Telephony: Advanced Services

Way back in 1993 I wrote a paper entitled Vision 2003 that dared to try to predict what the telecommunications industry would look like ten years in the future. I looked at ten core issues; telephony services was the first. I thought it might be fun to take a look at how much I got right and how much I got wrong! This is a cut-down version of the original and I’ll mark the things I got right in green and the things I got wrong in red.

Caveats: Although it is stating the obvious, it should be remembered that nobody knows the future and, even though we have used as many external attributable market forecasts from reputable market research companies as possible to size the opportunities, they, in effect, know no more than ourselves. These forecasts should not be considered as being quantitative in the strictest sense of the word, but rather as qualitative in nature. Also, there is very little by way of available forecasts out to the year 2003 and certainly even fewer that talk about network revenues. You only need look back to 1983 and ask whether the phenomenal changes that happened in the computer industry were forecast to see that forecasting is a dangerous game.

Well I had to protect myself didn’t I?

As far as users are concerned, the differentiation between fixed and cellular networks will have completely disappeared and both will appear as a seamless whole. Although there will still be a small percentage of the market using the classical two-part telephone, most customers will be using a portable phone for most of their calls. Data and video services, as well as POTS, will be key business and residential services. Voice and computer capability will be integrated together and delivered in a variety of forms such as interactive TVs, smart phones, PCs, and PDAs. The use of fixed networks is cheap, so portable phones will automatically switch across to cordless terminals in the home or PABXs in the office to access the broadband services that cannot be delivered by wireless.

A good call on the dominance of mobile phones (it’s quaint that I called them “portable phones”; I guess I was thinking of the house-brick-sized phones of that era). The convergence of mobile and fixed phones still eludes us even in 2007 – now that really is amazing!

Network operators have deployed intelligent networks on a network-wide basis and utilise SDH together with coherent network-wide software management to maximise quality of service and minimise cost. As all operators have employed this technology, prices are still dropping and price is still the principal differentiator on core telephony services. Most individuals have access to advanced services such as CENTREX and network based electronic secretaries that were only previously available to large organisations in the early 1990s. Because of severe competition, most services are now designed for, and delivered to, the individual rather than being targeted at the large company. All operators are in charge of their own destiny and develop service application software in-house, rather than buying it from 3rd party switch vendors.

A real mixed bag here, I think. I was certainly right about the dominance of the mobile phone but way out about operators all developing their own service application software. I rabbited on for several pages about Intelligent Networks (IN) bringing the power of software to the network. This certainly happened, but it didn’t lead to the plethora of new services that were all the rage at the time – electronic secretary services and the like. What we really saw was a phenomenal rise in annoying services based on Automatic Call Distribution (ACD) – “Press 1 for…” then “Press n for…” – so loved by CFOs.

Customers, whether in the office or at home, will be sending significant amounts of data around the public networks. Simple voice data streams will have disappeared to be replaced with integrated data, signalling, and voice. Video telephony is taken for granted and a significant number of individuals use this by choice. There is no cost differential between voice and video.

All public network operators are involved in many joint ventures delivering services, products, software, information and entertainment services that were unimagined in 1993.

Tongue in cheek, I’m going to claim that I got a good hit predicting Next Generation Networks that integrate services on a single network. Wouldn’t it have been great if I had predicted that it would all be based on IP? It was a bit too early for that at the time. Wow, did I get video telephony wrong! That product sector has not even started, let alone taken off.

What I really did not see at all, because it was way too far into the future, was free VoIP telephony, as discussed in Are voice (profits) history?

Next: #2 Integration of Information and the Importance of Data and Multimedia


Are voice (profits) history?

February 22, 2007

‘Free’ can sometimes be a pernicious word and is often the antithesis of real business.

It lies at the heart of the web 2.0 philosophy. Here it is often assumed that revenue will not be generated from the core service that a company is providing, but rather derived from ancillary or attached activities such as advertising. It seems that a service needs to get millions of customers to get sufficient revenue from low click-through rates. To achieve these high subscriber numbers the service needs to be free. There is an Alice in Wonderland type logic in play here sometimes.

It feels very peculiar to see that the kingpin of multimedia (which to me means multiple media: voice, video and information), voice, is currently experiencing a complete destruction of financial value. We all know that IP services are generally not profitable today due to low revenues and high costs, but I do wonder why everyone is jumping on this bandwagon and assuming that zero-revenue voice is what the future is all about. This is not just a fixed wire-line issue; it will roll over into the mobile world as well.

Even though VoIP is just a technology, it has carried the tag of being particularly disruptive from the very beginning and was strongly resisted by many within the telecommunications industry for years. We are now seeing some rather radical voice initiatives from the former telecommunications monopolies.

I read with interest the other day about AT&T’s Unity plan and wonder what the future holds. According to their press release:

AT&T Inc. today announced an unprecedented new offer, which gives subscribers the nation’s largest unlimited free calling community, including wireless and wireline phone numbers.

The AT&T UnitySM plan, which is available beginning Sunday, Jan. 21, brings together home, business and wireless calling, creating a calling community of more than 100 million AT&T wireless and wireline phone numbers.

AT&T Unity customers can call or receive calls for free from any AT&T wireless and wireline phone numbers nationwide without incurring additional wireline usage fees or using their wireless Anytime minutes. In addition to free domestic calling to and from AT&T numbers, the AT&T Unity plan includes wireless service with unlimited night and weekend minutes, as well as a package of Anytime Minutes.

Wow! If any carrier had announced such a package a few years ago they would have been thought of as rather crazy. In the 90s everyone complained about the destruction of core value from a number of carriers who undercut wholesale network charges (I guess this has somewhat stabilised in recent years) but this seems much worse to me.

This is a complex issue which has its foundations, I would assume, in AT&T losing significant numbers of customers and revenue to free peer-to-peer voice services such as Skype, many more to wireless operators, and still more to cable companies. Most industry people believe in the mantra that the future lies in triple and even quadruple plays, as promoted by our own Virgin Media. But if you have no customers to deliver these to, then the future is going to be rather bleak!

Is this the reason why AT&T is discarding its future voice revenue? I find it hard to believe that, on the one hand, every carrier is pursuing a converged IP-based Next Generation Network strategy while the biggest multimedia service of all, voice, will not contribute to the revenue flow. Is this a case of 2 + 2 making not 5 but 2?

I spent so much time in the 90s promoting VoIP technology and services but I never expected it to be this disruptive! Let’s look on the bright side. To me this narrowly focused strategic thinking creates opportunities for start-ups who are able to go in a different direction to the industry gestalt that is driving the majority of carriers.

Let’s also hope that the drive towards converged NGN networks based on reduced costs does not shrink revenues even more.


Amazing, from 2″ to 17″ wafer sizes in 35 years!

February 9, 2007

In January my son, Steve, popped into the Intel museum in Santa Clara, California to look at the 4004 display mentioned in a previous post.

He came back with various photos, some of which showed how semiconductor wafer sizes have increased over the last 30 years. This is the most interesting one.

1969 2-inch wafer containing the 1101 static random access memory (SRAM) which stored 256 bits of data.

1972 3-inch wafer containing 2102 SRAMs which stored 1024 bits of data.

1976 4-inch wafer containing 82586 LAN co-processors.

1983 6-inch wafer containing 1.2 million transistor 486 microprocessors

1993 8-inch wafer containing 32Mbit flash memory.

2001 12-inch wafer containing Pentium 4 microprocessors moving to 90 nanometer line widths.

Bringing the subject of wafer sizes up to date, here is a photo of an Intel engineer holding a recent 18-inch (450mm) wafer! The photo was taken by a colleague of mine at the Materials Integrity Management Symposium, sponsored by Entegris, in Stanford, CA, in June 2006.
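
To put that progression in perspective, usable wafer area grows with the square of the diameter. Here is a quick back-of-the-envelope sketch of my own (using nominal diameters, not Intel’s exact specifications):

import math

def wafer_area_cm2(diameter_mm):
    """Area of a circular wafer in square centimetres."""
    radius_cm = (diameter_mm / 10) / 2
    return math.pi * radius_cm ** 2

# Nominal diameters: 2 inch ~ 51 mm, 12 inch ~ 300 mm, 18 inch = 450 mm
baseline = wafer_area_cm2(51)
for label, d in [("2-inch (1969)", 51), ("12-inch (2001)", 300), ("18-inch (2006)", 450)]:
    area = wafer_area_cm2(d)
    print(f"{label}: ~{area:.0f} cm^2, ~{area / baseline:.0f}x the 2-inch area")

# Roughly: 2-inch ~20 cm^2 (1x), 12-inch ~710 cm^2 (~35x), 18-inch ~1590 cm^2 (~78x)

So the 450mm wafer in the photo has getting on for eighty times the area of that 1969 2-inch wafer.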

The wafer photo on the left is a 5″ wafer from a company that I worked for in the 1980s called European Silicon Structures (ES2). They offered low-cost ASIC (Application Specific Integrated Circuit) prototypes using electron-beam lithography rather than the more ubiquitous optical lithography of the time. The technique never really caught on as it was uneconomic; however, I did come across such machines still in use at Chalmers University of Technology in Göteborg, Sweden, if I remember rightly.

If you want to catch up with all the machinations in the semiconductor world take a look at ChipGeek.


An update on the art of pitching…

January 23, 2007

An excellent overview on the art of pitching to potential investors can be found on Guy Kawasaki’s blog written by Bill Reichert of Garage Technology Venture.

“Endless articles, books, and blogs have been written on the topic of business plan presentations and pitching to investors. In spite of this wealth of advice, almost every entrepreneur gets it wrong. Why? Because most guides to pitching your company miss the central point: The purpose of your pitch is to sell, not to teach. Your job is to excite, not to educate. “

Take a read; it contains a lot of sense.

Here are some of the points that I believe to be important:

  • What is the industry pain you are solving? This is often not articulated early enough, or at all, leaving question marks in the recipient’s head about which problem you are solving. The natural follow-on is to show how your company provides the answer! The bigger the pain and the more unaddressed it is, the better, of course!
  • Are you just another clone? The world is already full of successful and wannabe companies in many sectors – VoIP, web conferencing and social networks to name but three. Why are you different and why will you succeed? IMHO, if you are just another clone, you have a significant challenge ahead. Uniqueness can be a problem as well, but rather that than being #75 in a particular segment.
  • Be clear right up front WHAT YOU DO. You’d be surprised by how many presentations leave the audience still confused about this half an hour into the flow.
  • Pick up on what the listener wants to hear. Are they looking for that YouTube-style world-beater and nothing else? Are they cynical about your channel to market or the difficulty you might have selling to carriers?
  • Demonstrate that you know the competitive landscape. Talk about competitors knowledgeably and about the challenges you face in getting your product sold. Show your experience and that you are not naïve.
  • Make sure the presenter shows PASSION and ENTHUSIASM. Start-ups are principally about people and if the presentation is dull, boring and goes on and on it will not go down well.
  • Don’t spend half an hour on the first slide. So easy to do, especially if questions are asked. Make sure you keep to your time slot and avoid using it all up on the first few slides. This happens time and time again. Of course, if they want to spend hours and hours with you, that’s great!
  • Don’t use unexplained acronyms. In fact, don’t use them at all if you can avoid it. You are not presenting to industry experts and they will probably not know what they mean. This is so often the case in telecoms presentations. If you do use them, always provide an explanation. I’ve seen slides stuffed with non-obvious acronyms.
  • Look for audience body language. Watch your audience to gauge feedback and guide what to say next. I was at a presentation last year where one of the audience left the room and the presenter didn’t even notice!
  • Never say “I’ll answer that question in a minute”. Answer it there and then but succinctly as nine times out of ten you never get round to it.
  • Get the balance right between technology and business. If you spend too much time on the early slides, the business and finance material often gets forgotten.
  • Practice. It really does help.
  • Don’t rely on them having a live Internet connection for a demo. Getting this set up will be a distraction at a crucial time. Have it as a demo on a PC as getting the projector to work will be difficult enough!
  • Have a backup. For both the notebook and the principal presenter.

And I thought it was all so easy. Come to think of it, maybe the above are all mistakes I’ve made over the years!


Default editor in Outlook 2007 is Word?

January 18, 2007

I’ve just been reading an interesting newsletter from Kevin Yank at sitepoint called Microsoft Breaks HTML Email Rendering in Outlook 2007.

And, according to Kevin:

While the IE team was soothing the tortured souls of web developers everywhere with the new, more compliant Internet Explorer 7, the Office team pulled a fast one, ripping out the IE-based rendering engine that Outlook has always used for email, and replacing it with … drum roll please … Microsoft Word.

That’s right. Instead of taking advantage of Internet Explorer 7, Outlook 2007 uses the very limited support for HTML and CSS that is built into Word 2007 to display HTML email messages.

Now, everyone who does any HTML editing knows that the programs in the Microsoft Office suite produce the most awful HTML code imaginable, and nothing has been done with Outlook’s HTML rendering for years.

The newsletter talks about many areas of incompatibility, but I thought it would be interesting to take a look at just how much bloat these MS apps add to simple HTML code. So I created a simple Hello world! text in a single-cell table to compare. The results are shown below (the figure in brackets is the size in characters):

Simple html (92)

<table border="0" cellpadding="0" cellspacing="0">
<tr>
<td>Hello world!</td>
</tr>
</table>

Frontpage (187 – 103% bloat)

<table border="0" cellpadding="0" cellspacing="0" style="border-collapse: collapse" bordercolor="#111111">
<tr>
<td width="100%"><span lang="en-gb">Hello world!</span></td>
</tr>
</table>

Word (517 – 461% bloat)

<style>
<!--
table.MsoTableGrid
{border:1.0pt solid windowtext;
font-size:10.0pt;
font-family:"Times New Roman";
}
-->
</style>
<p><span lang="en-gb">Word</span></p>
<table class="MsoTableGrid" border="1" cellspacing="0" cellpadding="0" style="border-collapse: collapse; border: medium none">
<tr>
<td width="113" valign="top" style="width: 3.0cm; border: 0.0pt solid windowtext; padding-left: 5.4pt; padding-right: 5.4pt; padding-top: 0cm; padding-bottom: 0cm">
<p class="MsoNormal">Hello world!</td>
</tr>
</table>

Excel (572 – 521% bloat)

<table x:str border="0" cellpadding="0" cellspacing="0" width="64" style="border-collapse:
collapse;width:48pt">
<colgroup>
<col width="64" style="width:48pt">
</colgroup>
<tr height="17" style="height:12.75pt">
<td height="17" width="64" style="height: 12.75pt; width: 48pt; color: windowtext; font-size: 10.0pt; font-weight: 400; font-style: normal; text-decoration: none; font-family: Arial; text-align: general; vertical-align: bottom; white-space: nowrap; border: medium none; padding-left: 1px; padding-right: 1px; padding-top: 1px">
Hello world!</td>
</tr>
</table>

Even I’m surprised by the 500%+ bloat from Excel! Personally, I would never use Word as an HTML editor and I would certainly never use it as the default editor for Outlook – it’s the five-minute load time as much as the awful HTML editing that bugs me!
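
For anyone who wants to check the arithmetic, the bloat figure is simply the extra characters relative to the hand-written HTML, expressed as a percentage. A minimal sketch using the sizes quoted above (the exact percentages will vary by a point or so depending on which characters and whitespace you count):

# Bloat = extra characters relative to the hand-written baseline, as a percentage.
BASELINE = 92  # characters in the hand-written "Hello world!" table

def bloat_percent(size, baseline=BASELINE):
    return round(100 * (size - baseline) / baseline)

for editor, size in [("FrontPage", 187), ("Word", 517), ("Excel", 572)]:
    print(f"{editor}: {size} characters, ~{bloat_percent(size)}% bloat")

# FrontPage ~103%, Word ~462%, Excel ~522% (within a point of the figures quoted above)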

If anyone has comments or solutions about this issue, please let me know and I’ll pass them along.

