Now, people will tell you not to trust a vendor's testing when the vendor has a horse in the race.
Still, one hopes there is not a glaring problem like the one reported by WWWery: Opera cannot run a web application called Galactic, and it was penalized very severely for that in Microsoft's browser power consumption test.
For me, here are a few things I do not like about the power consumption test:
First, when someone spends so much time demoing the hardware setup, much like a journal or conference paper that dwells on statistics and overall benefits before touching the actual content, one begins to get suspicious. But maybe that is just me being paranoid.
Second, each test scenario is run for 7 minutes. I question the validity of this methodology, as I cannot see how it can work correctly for some of the tests. For example: how do you run the blank page loading test (about:blank) for seven minutes? Likewise, no news site takes 7 minutes to load, and for that matter, is it a news site that the tester can manipulate to favour one browser over another? A much better test would have been to run each scenario a fixed number of times, e.g., 1,000 loads of about:blank, or 1,000 refreshes of a news website that Microsoft does not control. Finally, for God's sake, name the news website.
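The iteration-based methodology I am suggesting can be sketched roughly as follows. The harness and meter hooks here are hypothetical stand-ins: in a real rig, `load_page` would drive the browser and `measure_joules` would read a power meter's cumulative energy counter.

```python
import statistics

def run_iterations(load_page, measure_joules, n=1000):
    # Run the scenario a fixed number of times and record energy per run,
    # instead of looping for a fixed 7-minute wall-clock window. This makes
    # "one page load" the unit of work, identical across browsers.
    samples = []
    for _ in range(n):
        start = measure_joules()
        load_page()
        samples.append(measure_joules() - start)
    return {
        "n": n,
        "mean_J": statistics.mean(samples),
        "stdev_J": statistics.pstdev(samples),
    }

# Dummy stand-ins so the sketch runs on its own: pretend each load
# costs exactly 2.5 J on a meter with a cumulative counter.
counter = {"J": 0.0}
def fake_load(): counter["J"] += 2.5
def fake_meter(): return counter["J"]

result = run_iterations(fake_load, fake_meter, n=100)
```

Reporting mean and spread per iteration also makes it obvious when a browser's energy use is noisy rather than consistently higher.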
Third, one needs to explain how the Galactic test affects the final result calculation. Opera cannot run Galactic, so including that test makes comparing Opera with the other browsers awkward. Fully penalizing Opera for it is one strategy, but common sense says it is too extreme.
Finally, how do you translate the results to a laptop powered by a 570W battery? My view: get such a laptop and repeat the test from a full charge until the laptop dies. Extrapolating from the bench results looks dubious.
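For reference, the extrapolation being criticized typically amounts to a single division (assuming the battery figure is a capacity in watt-hours; the numbers below are invented). The one-liner is exactly what makes it suspect: it ignores DC conversion losses, battery aging, display brightness, and everything else a real drain-down test captures.

```python
def naive_battery_life_hours(capacity_wh, avg_power_w):
    # The naive extrapolation: rated battery capacity divided by the
    # average power draw measured on the bench rig.
    return capacity_wh / avg_power_w

# Illustrative only: a 57 Wh battery at an average draw of 9.5 W.
est = naive_battery_life_hours(57, 9.5)
```

A full-charge-to-shutdown run on the actual laptop would measure all of those omitted factors directly instead of assuming them away.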