Posted in Industry News on January 28, 2010 by Administrator
It is perhaps a little over a year since it became obvious to everyone that Intel and ARM were starting to stamp on each other's toes. For years it was assumed that only x86 could do 'real' computers and only ARM could do battery devices. For some reason it was netbooks, a tiny market with limited volume (or even MIDs, which didn't yet exist), where all the fuss began.
As the lines between smartphone and PC blur, "ARM is coming up from the portable space, and Intel is coming down from the PC space," says Joseph Byrne, a senior analyst at the Linley Group who specializes in semiconductors and processor IP. "Looking forward, these guys are going to collide."
Talk of convergence between mobile and computer has been going on for a decade, but after all that time it is not much closer. A laptop is still a PC, despite the inclusion of a battery, and a Nokia E90 is still a phone, despite the keyboard. Many assumed that if Intel wanted to win in the mobile space then it would. It just didn't see the point of fighting over a market where the CPU costs $10-20. Those of us of a more technical bent, while not doubting Intel's capability and engineering prowess, wondered how long it would take. Some thought that Intel could play catch-up for 5 years or more, and still not win the battle.
Thin Client

The problem for Intel, as this article describes, is not just ARM but the 'thin client' model. If I can run a 'thin' email client on my ARM-based device, and it can web browse, show photos and videos, open the occasional PDF, etc., what more do I need? MS Office? Well maybe, if I'm a business user, but otherwise Google Apps might do the trick.

On the other hand, the problem for the thin client model is that it has been talked about for even longer than convergence. It pre-dates the tech crash, when Sun told anyone who would listen that Windows was doomed and 'the network is the computer'. So people are naturally sceptical. The definition of thin client has broadened quite a bit, now encompassing anything with a decent GUI, not just a very small desktop PC. Still, the name 'netbook' has hung on, now meaning a small laptop, and I am very happy with my (x86-based) Asus Eee PC 700, thanks very much.

Ken Olson, founder of DEC (ironically, later an ARM licensee), supposedly said in the late 1970s: "There is no reason for any individual to have a computer in their home." Now it is clear that the question is not whether, but how many - and what in fact is a computer? By processing power it would be obvious to Olson that the Nintendo DS, Apple iPod Touch, Nokia E71, Archos 700 Internet Tablet and Nintendo Wii are all computers in some sense. By that definition our household has several dozen computers. Every one of these devices has at least one ARM chip. None of them has an x86 processor, let alone an Intel one. All are network connected and can be considered thin clients, albeit very thin in some cases. But each of these devices provides an adequate experience for the user, which is all that really matters.

So, whereas Intel is fighting the thin client model, this model can only benefit ARM, since ARM already rules the thin client space. Intel hopes that thin clients will either not succeed, or will run out of legs and have to move to faster x86 processors.
That might be a forlorn hope. In summary, the thin client model suggests that ARM has a bright future as consumers buy more and more of these thin client devices for particular tasks.
Processing Power

The other side of the argument is processing power. Ten years ago you arguably needed an x86 CPU to do just about anything. [I say arguably because there was a time (around 1996) when you could buy a 200MHz ARM-based computer while the Pentiums stopped at 90MHz. But it was the Pentiums which ran Windows.] Perhaps seven years ago email would have been beyond an ARM chip. Five years ago basic word processing and spreadsheets were problematic. But today the processing power, and the associated heat, size and expense, that comes with a high-end x86 CPU is really only needed for video editing, Flash-enabled web browsing and perhaps photo editing. The list of things you can't do with an ARM CPU is getting smaller all the time, and the performance requirements of Windows and the common applications have not kept up. Those, such as Intel, who argue that you can only get a decent web browser on an x86 platform should take a look at the lowly Nokia N900. Engadget's review says this:
Almost without fail, sites were rendered faithfully (just as you'd expect them to look in Firefox on your desktop) with fully-functional, usable Flash embeds -- and it's fast. Not only is the initial rendering fast, but scrolling around complex pages (Engadget's always a good example) was effortless; you see the typical grid pattern when you first scroll into a new area, of course, but it fills in with the correct content rapidly. To say we were blown away by the N900's raw browsing power would be an understatement -- in fact, we could realistically see carrying it in addition to another phone for browsing alone, because even in areas where it gives a little ground to the iPhone or Pre in usability, it smacks everyone down in raw power and compatibility.
Steve Jobs, launching the ARM-based iPad yesterday, said the iPad offers the...
best [Web] browsing experience you’ve ever had. A whole Web page right in front of you that you can manipulate with your fingers. Way better than a laptop
That's quite a claim, and if it is even halfway true it suggests that ARM is catching up fast. Some would argue that massive processing power is needed just to run Windows 7 - indeed, some have suggested as much. But in a similar way to Intel, Microsoft's problem is selling people an operating system which they might not need. In any case, telling consumers that they need 1GHz of CPU power to run the operating system and virus checker has to be a risky strategy.

Part of this change is due to software becoming smarter, but most of it is simply that ARM chips are getting faster. It has all happened rather suddenly. Not much more than a year ago TI brought out a 600MHz Cortex-A8 chip, with 1200 DMIPS of performance (as used now in the Nokia N900). Atom offered 1.6GHz and perhaps 3900 DMIPS. No contest, although it could be pointed out that a better comparison in terms of power consumption was 1950 DMIPS for the 600mW Atom (excluding the 2-3W used by the companion chip). Then in September, ARM announced their 2GHz Osprey development. Aimed at around 2W, this claimed to offer around 10,000 DMIPS, twice that of the fast Atom but at less power, since no companion chip is required. Still, it was only a design, not real chips, so Intel carried on with its original iPhone ARM11 comparisons. After all, Intel has great plans for the future also.

But in the past few months we have seen a dual-core 1GHz Cortex-A9 from nVidia, Qualcomm's 1.5GHz dual-core Snapdragon (ARM compatible) and of course TI's next step along the path, the OMAP4440. So it's not just designs: we now have chips. Still, chips don't equal final products, and the iPhone 3GS and Nokia N900 are using only a now-lowly 600MHz Cortex-A8. So Intel can be safe for another few years, surely? Initial ARM-based netbooks were cheap but not necessarily stellar on the performance side. But now Acer has announced that it is looking at tablets, and hinted that this might involve ARM.
Freescale is talking ARM tablets, and the Apple iPad includes an ARM core (perhaps a single-core Cortex-A9 with Mali graphics). Gartner thinks that Android on ARM is snappier than Windows 7 on Atom. Suddenly, ABI Research is predicting that ARM PCs will outsell Intel in 2013. The performance argument for x86 might be wearing thin. Perhaps all that junk silicon really isn't for anyone's benefit. Consumers appear to be moving fast to more portable devices - laptop sales are increasing, whereas desktop PC sales are actually in decline year on year. This could explain the meaning of the iPad - an attempt to capture 80% of the 'computer time' of consumers with a new device which does most of what they need for less money and less hassle. In summary, ARM appears to be overtaking Intel's Atom on speed (this is unlikely to last!), while Intel is really struggling with battery life. I know where I'd rather be in this race.
And the Winner Is?

Things are going to be very interesting over the next few years. Both ARM and Intel really have their work cut out to grow their market share from their respective home bases. Of course it is too early to declare that ARM is going to take over the market for low-end computers. Perhaps it doesn't matter anyway, given that in many cases the Bluetooth, WiFi, flash media and graphics components already include an ARM. Some would argue that the takeover happened long ago. But in the headline CPU, where Intel is Inside, my suggestion is: sell Intel, buy ARM.
Posted in Industry News on November 13, 2008 by Administrator
The Atom was the reason why Intel had to sell the XScale division. Unfortunately the XScale CPU wasn't all it should have been, lacking the debug capability and the performance leap promised by its StrongARM heritage. While Intel sold a few chips to people for WinCE PDAs, and even a few Motorola cellphones, the market was small compared to that available to TI and the like. Free from its ownership of a competing architecture - one which has wiped the floor with Intel - its execs obviously feel comfortable letting rip at ARM. Intel is no doubt hugely frustrated at its inability to compete in the fast-growing cellphone market, and the Apple iPhone is just another sign of ARM's dominance in this sector. So here is the article I'm referring to:
Kedia didn't just stop at the iPhone, claiming ARM was a malaise afflicting smartphones in general. "The smartphone of today is not very smart," he said. "The problem they have today is they use ARM." Wall believed the situation was unlikely to change anytime soon, saying Intel was two years ahead of the rival company. He didn't believe fast, full internet would receive a début with ARM-based devices in the near future. "Even if they do have full capability, the performance will be so poor," he said.
Of course this guy is just venting, during a trip to Taiwan. Perhaps he met with a number of potential customers there who all told him they were using ARM and very happy with it.
But also, it simply isn't true. Tom's Hardware shows Atom's power consumption (for the CPU alone) of about 2.5W, with 5W including the required companion chip: "We should point out, though, that the two chipsets to be used with the Atom N200s are power users: the Atom 230s use an i945GC that consumes 22W (4W for the CPU) and the Atom N270s ship with an i945GSE that burns 5.5W (2.4W for the CPU)."
This is for a 1.6GHz CPU. By comparison the OMAP3530, a 600MHz Cortex-A8 with integrated video DSP, 3D graphics, NEON SIMD engine and DDR interface (i.e. Atom + support chip), consumes under 2W total - and that's the maximum from the datasheet, with power management OFF! It is a mystery why Intel chips consume so much power. Some say it is the Byzantine x86 instruction set. Others say that Intel aims for speed rather than power. Who knows... So in terms of power consumption, Intel isn't even able to play the game yet. It is perhaps 3-5 years behind ARM on this one.

The claim that the Internet isn't usable on an ARM CPU is also bogus. From what little I have seen of the iPhone it seems usable enough. My Nokia E90 certainly runs ok on the web, although I agree it could do with more speed (it is an ARM11 design). I think Intel will be shocked at the capability of the Cortex-A8 devices when they come out in the new year.

Of course Intel needs to attack ARM - ARM owns the low-power market space and it is the only way that Intel can make inroads into it. But Intel needs to get its products in order first. Perhaps Intel should swallow its pride, take an ARM architectural licence and put its A team on the project. The C team didn't do a great job, but everyone knows Intel has great chip engineers - just look at the Pentium range. Take away the x86 baggage and who knows what might be achieved?
Posted in Uncategorized on September 29, 2008 by Administrator
Surely most engineers are familiar with the basics of the ARM7TDMI? To recap, it has a 32-bit register set with 16 registers, a RISC load/store architecture, various processor modes with a role in exception processing, private registers in each mode (particularly FIQ, with a very useful r8-r14), a simple 32-bit instruction set with only 21 instruction types, and an additional 16-bit instruction set with zero-penalty decode logic. Since it was originally released in 1994, several of ARM's engineers have wanted to improve on perfection. Perhaps the power consumption could be made even lower, perhaps the instruction set could be made even more dense, perhaps more microcontroller features should become part of the standard design? The shrinking area of silicon required for a microcontroller, and the resulting reduction in cost, presented another opportunity. It should be possible to make an ARM chip as small as an 8-bit micro, and to consume less power. Cortex-M3 is the result of these factors. It is not radically different from the ARM7TDMI, and retains most of the architectural features. But it drops several, expands others, and adds a few more. Key features of Cortex-M3 are:
- You program it entirely in C. There is no longer any need for assembler, although it is still available. This doesn't mean that it is less efficient.
- It won't run Linux. It is aimed at small microcontrollers so does not include data aborts for virtual memory, etc.
- Built-in microcontroller functions, such as interrupt controller, timers, memory protection
- Drops exception modes and the ARM instruction set (only supports Thumb / Thumb 2)
Some lesser features are:
- The Thumb-2 instruction set, a clever extension which gives the code size benefits of Thumb with the performance benefits of 32-bit ARM code
- Single Wire Debug (SWD) option, which reduces the number of pins required for JTAG/debug functions from 5 to 2
- Integrated trace functionality, traditionally only available on higher end ARM9 devices. Cunningly, it also operates over the SWD port
- Half the interrupt latency (or better), by building the concept of state saving directly into the micro. In particular, consecutive interrupts can be processed without the traditional save/restore step in between
- Half the power consumption when running, plus an integrated sleep mode
- 30% faster operation at the same clock speed
- ARMv7-M Harvard architecture with a number of new instructions
ARM's removal of the exception processing part of the ARM7TDMI was not just a clever way of getting rid of stuff that low-end microcontroller engineers don't need. It also provides a clear differentiation between the higher-end micros and the new microcontroller range. This makes it possible, at least in theory, for companies like Atmel to sell an ARM9 range at $6, an ARM7 range at $3-4 and a Cortex-M3 range at $1. In my view this is one of the cleverest features of Cortex-M3 - it doesn't cannibalise the existing market to the extent that you might think.

Bluewater Systems has completed a number of Cortex-M3 designs and we have built quite complex embedded software systems using it. When combined with the RealView tool chain, it is a formidable player in the embedded market. For the price, it is surely unbeatable. Cortex-M3 micros were first shipped by Luminary Micro, who have 134 models to choose from. STMicroelectronics have a respectable 38 so far. Atmel is on board now also, although I can't see any devices yet. For more details on Cortex-M3, and if you have an hour spare, take a read of the original white paper on the topic from ARM's web site. You might find this useful too.
Posted in Uncategorized on September 16, 2008 by Administrator
The secret to maximising your productivity is ensuring you have the correct tools for the job. When it comes to bringing up a new kernel for a processor that you have not supported before, an embedded ICE is a must-have. Until now, Windows CE has lacked a set of high-quality tools for kernel bring-up. Normally the first step in CE development is the creation of the Kernel Independent Transport Layer (KITL) so that you can debug your kernel drivers over Ethernet. The problem with this is that you need a large portion of the kernel and the Ethernet driver to be working before you can do any debugging, and as soon as any problem occurs in the kernel, the entire debugging session locks up.

We have been lucky, as an ARM tools distributor, to be able to trial a pre-beta version of the Windows CE 6.0 eXDI2 drivers for the RealView ICE. The driver plugs directly into Visual Studio and provides source-level kernel debugging of an embedded platform as if it were a desktop application. It has allowed us to single-step through the kernel of our AT91SAM9260 platform before the kernel was up and running. No longer do we need to toggle LEDs in a seemingly random sequence of pulses to indicate boot progress!

The Windows CE 5.0 eXDI2 plugin for Platform Builder is available for general release by registering on the ARM website: http://www.arm.com/products/DevTools/eXDI2RVI.html The Windows CE 6.0 eXDI2 plugin will be available for general beta testing in the very near future.
Posted in Uncategorized on September 15, 2008 by Administrator
Two of us are attending an ARM meeting in Hong Kong this week. There is quite a large turn-out of companies from across Asia, including Singapore, Taiwan, China, Korea and, of course, New Zealand. What struck me, as I looked at the group about to go out for dinner last night, was that there were more people there among ARM's tools distributors in just a single region than worked for ARM in total when I joined, 14 years ago. ARM shipments passed the 10 billion mark last year and the current run rate is over 3 billion per annum. Of particular interest is the massive growth in low-end ARM micros. The Cortex-M3 is 'only' shipping about a quarter of a billion units per annum, but volume is more than doubling each year. When the $1 micro was announced I wrote a short paper about the possible impact (available here). It seems that ARM is well on the way to making good on some of the benefits identified, particularly in terms of quality of tools and the productivity of engineers developing with low-end microcontroller technology.