I don’t know if you’ve seen the PDFs yet, but the whitepapers published by Nvidia last week are worth spending an hour going through if you’re interested in ultra-mobile and low-power computing.
The two PDFs focus on the benefits of high-performance graphics and multiple cores in mobile computing. While I’m yet to be convinced that I need 1080p decoding and gaming-class graphics on my mobile computer, I do see the benefit in improved user interfaces and in accelerating parts of web page and web application rendering. After reading the reports I’ve also come away with positive thoughts about multicore computing as a way to save battery life. The theory is simple: higher clock rates need higher voltages, and dynamic power climbs steeply with both (roughly with voltage squared times frequency), so running two cores at a lower clock to complete the same task can result in power savings.
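To make that intuition concrete, here’s a back-of-the-envelope sketch in Java. It assumes the classic simplified dynamic power model, P ≈ C·V²·f, plus a made-up linear voltage/frequency curve; the numbers are purely illustrative and don’t come from Nvidia’s whitepapers.

```java
// Back-of-the-envelope comparison: one fast core vs two slower cores.
// Assumes the simplified dynamic power model P ≈ C * V^2 * f and a purely
// illustrative voltage/frequency curve -- the constants are invented,
// not taken from any vendor's datasheet.
public class DynamicPowerSketch {

    // Hypothetical voltage required at a given clock (GHz -> volts).
    static double voltageAt(double ghz) {
        return 0.8 + 0.3 * ghz;   // assumption: 0.8 V base + 0.3 V per GHz
    }

    // Relative dynamic power of one core: V^2 * f (the constant C is folded into the unit).
    static double corePower(double ghz) {
        double v = voltageAt(ghz);
        return v * v * ghz;
    }

    public static void main(String[] args) {
        double oneFastCore  = corePower(2.0);        // single core at 2.0 GHz
        double twoSlowCores = 2 * corePower(1.0);    // two cores at 1.0 GHz
        System.out.printf("1 x 2.0 GHz: %.2f units%n", oneFastCore);   // 3.92
        System.out.printf("2 x 1.0 GHz: %.2f units%n", twoSlowCores);  // 2.42
        // With these made-up numbers, two slower cores finish the same
        // (perfectly parallel) job on less power, because the V^2 term
        // punishes the higher voltage needed for the higher clock.
    }
}
```

The catch, of course, is the “perfectly parallel” assumption, which is exactly where things get interesting further down.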
In podcast 63 at Meetmobility, Al Sutton of Funky Android, an Android consulting company, highlighted why he thought Honeycomb would appear on phones. His theory is based on the fact that Honeycomb is the first version of Android built with multicore platforms in mind, so the superphones will benefit. The Dalvik environment that applications run in is multicore-aware and will attempt to use multiple cores to speed up jobs (and lower their power cost). That feature alone could help every application running on Android without any programming changes in the application. With smartphones heading in the multicore direction, Honeycomb brings advantages, and unless a multicore-aware version appears in the 2.x branch, Honeycomb could be the way to go for multicore smartphones.
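Whether or not the runtime parallelises things transparently, an app can also take explicit advantage of extra cores through the standard Java threading APIs Android already exposes. Here’s a minimal sketch using plain java.util.concurrent; the “work” is just a placeholder loop, and nothing here is Honeycomb- or Dalvik-specific.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: spread a CPU-heavy job over however many cores the device reports.
public class MultiCoreSketch {
    public static void main(String[] args) throws Exception {
        final int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        List<Future<Long>> parts = new ArrayList<Future<Long>>();

        for (int i = 0; i < cores; i++) {
            final int start = i;
            parts.add(pool.submit(new Callable<Long>() {
                public Long call() {
                    long sum = 0;
                    // Placeholder for real work (decoding, layout, parsing...).
                    for (long n = start; n < 20000000L; n += cores) {
                        sum += n;
                    }
                    return sum;
                }
            }));
        }

        long total = 0;
        for (Future<Long> part : parts) {
            total += part.get();   // block until each worker finishes
        }
        pool.shutdown();
        System.out.println("Cores used: " + cores + ", result: " + total);
    }
}
```

On a dual-core superphone that pool simply ends up with two workers; on a single-core phone the same code still runs, just without the parallel speed-up.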
So why don’t silicon experts Intel use multiple cores in their Moorestown platform? The platform runs at up to 1.8GHz, as I understand it, so wouldn’t it be better to run two cores at, say, 1GHz? Silicon cost, size and complexity are probably part of the equation, and there’s probably a marketing advantage in a higher clock rate, but you would think that if the “more cores at a lower clock equals less power” theory is true, Intel would be doing it too, considering how badly they want to get into smartphones. Perhaps it’s because much of the software out there isn’t truly multithreaded and the advantages are limited. If a program runs on a multicore chip at a lower clock rate but only uses one core, the job takes longer, the system can’t drop into an idle state as quickly, and the total power used ends up far higher. Just leaving the Wi-Fi and screen on for a little extra time can negate any potential advantage.
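Here’s that “race to idle” argument as quick arithmetic, again with deliberately invented numbers: a single-threaded job that pins one core, while the screen and Wi-Fi draw a fixed baseline for as long as it runs.

```java
// "Race to idle" sketch for a single-threaded job: energy = power x time,
// and the screen/Wi-Fi baseline keeps burning for as long as the job runs.
// All figures are invented for illustration, not measured values.
public class RaceToIdleSketch {

    static double energy(double cpuWatts, double baselineWatts, double seconds) {
        return (cpuWatts + baselineWatts) * seconds;   // joules
    }

    public static void main(String[] args) {
        double baseline  = 1.5;     // assumed screen + Wi-Fi draw while the job runs (W)
        double jobCycles = 1.8e9;   // arbitrary amount of single-threaded work

        // Fast core: higher CPU power, but finishes sooner.
        double fastSeconds = jobCycles / 1.8e9;               // 1.0 s at 1.8 GHz
        double fastEnergy  = energy(1.2, baseline, fastSeconds);

        // Slower core: lower CPU power, but the baseline runs 1.8x as long.
        double slowSeconds = jobCycles / 1.0e9;                // 1.8 s at 1.0 GHz
        double slowEnergy  = energy(0.5, baseline, slowSeconds);

        System.out.printf("1.8 GHz run: %.2f J%n", fastEnergy);  // (1.2+1.5)*1.0 = 2.70 J
        System.out.printf("1.0 GHz run: %.2f J%n", slowEnergy);  // (0.5+1.5)*1.8 = 3.60 J
        // With these assumptions the slower, "more efficient" core costs more
        // energy overall, because the rest of the device stays awake longer.
    }
}
```

Flip the workload to something genuinely parallel and the second core changes that maths completely, which is why the software side matters as much as the silicon.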
It’s complex stuff but my feeling right now is that multiple cores are going to bring advantages. We’ll see, in time, if the Honeycomb-for-superphones theory is correct and we’ll see if Intel goes that route for Moorestown and Medfield too.