See, your 'dead OS' is not walking - it is marching bravely here and there and all over the planet, led by an army of Windows admins, who, being just as smart as their other-OS admin brethren, are always looking for better tools to do their job.
What a vulgar sentiment! President Obama is hardly a black Bush, whatever your thoughts about his policies. If you really believe they are the same--yet utterly fail to articulate your rationale--there probably isn't much reason to engage in a discussion with you.
Disagree w/ your specifics, strongly agree w/ the perception of a narrative being present. Agree that Romney was never in real danger of winning, disagree that he himself threw the contest. He wanted to win, it just wasn't in the cards.
Then you can't watch movies, which may mean you can't communicate with regular, standard, movie-watching people, which may mean you can't get any business/job, which may mean you die of lack of money/healthcare/etc.
MS is not a startup that can get by on a promising product. It needs massive sales, and with the current strategy Surface is not going to get them. IMHO MS should allocate an enormous amount of money, like $20T, just to buy market share: sell the Surface well below component value, stop the stupid non-business-use licensing of Office, and take the risk of securing long-term high-volume contracts with component vendors.
One of MS's problems is that there are no high-DPI tablet panels on the market right now. Apple has prepaid for all non-Samsung production, and Samsung is keeping theirs for their own tablets.
> MS is not a startup that can get by on a promising product. It needs massive sales, and with the current strategy Surface is not going to get them.
Out of curiosity, why do you assume the Surface needs to sell in huge volumes? If every single Surface is sold at a profit, and the Surface division is even slightly profitable for Microsoft, I don't see why it matters if they sell 10,000 or 10,000,000 of them.
To me it seems like the genius part of what Microsoft is doing is that Windows RT and/or the Surface don't actually need to succeed in terms of market share. They just need to be taken seriously for long enough to drive down prices of Intel's x86 chips, so that Windows tablets have a shot against ARM tablets long-term. As far as I can tell, that seems to be working already. (Intel Atom Z2760)
They also have a massive Enterprise business, most of which will probably upgrade to Windows 8 eventually, buying them quite a bit of time to watch this unfold. Not to mention Xbox, etc. And of course, on top of all that, there is still the chance that Windows RT will somehow be a hit.... which would be even better for Microsoft, because it would end their Intel pricing problems for good.
To me, this seems like a decent plan for a software company that until now has only been losing relevance and market share. It’s a better plan than bleeding out cash selling super cheap hardware to gain market share (Amazon’s Kindle Fire), waiting too long to give up the dying cash cow as the market changes (RIM), focusing only on extremely high quality products while your company dies around you (Nokia), or killing off the consumer version of Office for no apparent reason (which is presumably profitable as-is).
"MS is not a startup that can get by on a promising product"
Except that they always do. There's some (older) running joke about how Microsoft products start getting useful/successful with version 3: Windows, Word (Word 6 being the 3rd major WinWord release), Internet Explorer, Xbox.
The first few iterations often were "notable" at best (IE 1 couldn't even display images).
I don't know why you're getting downvoted, because what you said just makes sense. I mean, that's what Apple did, and no one can argue that they weren't successful.
Someone else who worked at MSFT commented the other day that the reason they don't react more appropriately is that they believe they are 100% right about their vision for Windows.
I agree that market share is important, but I think Microsoft will face some serious issues turning market share into profit. Both Apple and Amazon skim the profits off their huge ecosystem here, but Microsoft doesn't have any equivalent.
IMHO steveb is the type of manager that can milk existing customers/markets, but not effectively lead MS in a full-scale war against Google, Apple, and Amazon. The only way I can see MS not becoming IBM 2.0 is billg returning.
In 2012, Fortune ranked IBM the #2 largest U.S. firm in terms of number of employees (433,362), the #4 largest in terms of market capitalization, the #9 most profitable and the #19 largest firm in terms of revenue. Globally, the company was ranked the #31 largest in terms of revenue by Forbes for 2011. Other rankings for 2011/2012 include #1 company for leaders (Fortune), #1 green company worldwide (Newsweek), #2 best global brand (Interbrand), #2 most respected company (Barron's), #5 most admired company (Fortune), and #18 most innovative company (Fast Company).
Yes, that's exactly the case for engineers/designers/project leaders ;-) Smart people have a choice, and IBM is a poor choice for anyone with ambitions beyond a steady paycheck.
I would say that if you want your own company, you wouldn't work for any company, because at the end of the day you are still working on someone else's product. Doesn't matter if it's Apple, MSFT, or Acme Box Corp.
However, when you say "smart people", are you saying that IBM's Watson, the latest Intel chip, or the latest drug were created by stupid people because their company isn't mentioned on HN?
IBM is a poor choice for anybody whose ambition is to work with the latest cool web frameworks doing 'social-X'. If your ambition is chip design, AI, massive data processing, quantum computing, or more theoretical work, then IBM probably isn't too bad a place.
The final transformation into IBM 2.0 will be complete when most of the people compelled to use MS products are not the same people who chose to buy the MS products. Are we there yet?
To become IBM 2.0, Microsoft would have to kill its R&D budget, outsource everything to India, and focus on short-term gains. Really, Microsoft is far from that; you could even say they are more forward-thinking than Apple, with one of the last industrial research labs still in place (IBM and its consulting business have basically killed IBM Research). Disclosure: Microsoft employee.
Yes, I think MS still has room to avoid the fate of pandering to the enterprise. MS research continues to shine, but I recall that IBM research shone brightly in the 80's.
I hope that MS can stay primarily focused on people who make their own buying decisions, but I'm worried (e.g. [1]). "A computer in every home" would have translated very nicely to "a device in every hand." Alas, I'm not really sure what MS stands for these days beyond "backward compatibility" with its understandably addictive revenue streams -- which is also eerily reminiscent of IBM in the 80's.
Yes, that is true now. But if Surface and a few more extremely expensive attempts fail, sooner or later MS will have to close the lab, fire many developers, and concentrate on extracting value from its existing, shrinking markets.
Yes, in a hypothetical future where Microsoft loses massive amounts of money, it may need to cut spending and double down on its existing competencies.
But that's a pretty damn facile statement.
It can be said of any company that has a revenue stream. The only ventures for which that's not true are governments with sovereign currencies (and even then there are limits).
Well, the future is hypothetical by definition, isn't it? MS has two choices now: invest in R&D and try to secure new markets, or make maximum profit for its shareholders from existing markets. MS may not "need" to cut spending, but every billion they spend on projects like Kin, Bing, and Surface is a billion that could have been a dividend or a share buyback.
Or Microsoft could do a reset like Apple did in the late 90s. That would still involve closing the lab, firing many workers, and focusing purely on a narrow set of products that it can make lots of money on.
No, the current rating is the maximum output of the charger. It's up to the connected device to draw as much current as it needs, rather than the charger pushing current into the device.
AFAIK it's not that clear-cut. If some kind of fault in the device causes it to draw more current than designed, then the lower-power charger could be safer, assuming it has some kind of current limiter or fuse. I.e., a high-power charger would happily give a faulty device more current, possibly causing more damage (heating up, exploding batteries, etc.), while a low-power charger could prevent that by tripping a fuse or limiting the current.
Of course I'm not sure if this is applicable to USB chargers which all are relatively low power.
No, but it is inefficient. A larger charger will have a larger 'dead load', the internal consumption of the charger. So if you plug a very small load into such a charger, then percentage-wise you're losing a lot of energy to heat.
Author here - it's a bit more complicated than that. For a linear power supply (old-fashioned wall wart), all the unused power is converted to heat, so you'd waste a lot of energy. But for switching power supplies (such as USB chargers), theoretically only the power that is needed is used, so the efficiency shouldn't depend much on the load. In practice, larger chargers might have better overall efficiency since they can implement better circuits (for space and price reasons). But larger chargers might not optimize as much for low loads. So it's hard to say offhand whether a large charger or small charger would be more efficient for a smaller load.
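For the linear case, the waste is easy to put numbers on. A quick sketch (the 12 V and 5 V figures are hypothetical, just for illustration): a series-pass linear regulator drops the excess voltage as heat, so its best-case efficiency is simply output voltage over input voltage, regardless of load current.

```python
def linear_efficiency(v_out, v_in):
    # A series-pass linear regulator drops (v_in - v_out) across its
    # pass element at the full load current, so efficiency is capped
    # at v_out / v_in no matter how much current the load draws.
    return v_out / v_in

# Hypothetical example: regulating 12 V down to 5 V
print(linear_efficiency(5.0, 12.0))  # ~0.42, i.e. ~58% of input power becomes heat
```

A switching supply avoids this cap, which is why its efficiency depends mostly on circuit quality rather than the voltage ratio.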
I did a quick test to see what really happens, plugging a Samsung phone into an iPhone charger and an iPad charger. In both cases, the charger used 3.0 watts of wall power. (The phone was turned off while charging, since if it's turned on the load fluctuates a lot as the phone does random things.) So my conclusion is that the size of the charger doesn't affect efficiency.
For the general case, all other parameters being equal (supply mode, quality and so on): bigger charger -> larger dead load.
An iPad charger is not a 'large' charger; it's a fairly small step up from an iPhone charger. And since you are reporting 3.0 watts of 'wall power' but your multiplication of scope-measured values does not correct for power factor, you are likely off by quite a bit on both measurements.
GP mentioned using an HP TouchPad charger to charge a phone. I don't have an HP TouchPad charger here, but the specs are quite terrible [1]. You'd have to measure with that specific charger to answer the specific question, or compare a large range of chargers with accurate measurement methodology to really answer the general question.
As it is, your conclusion contradicts practical engineering, and I'm afraid it will not hold up in a better test, which would be to try a number of switched-mode supplies of various sizes and designs with various loads. Plugging in one device and doing a hasty (wrong, phase-shift-ignoring) measurement does not warrant your conclusion.
To measure efficiency you're going to have to take the power factor [2] into account, which can be quite hard to do, and theoretical efficiency doesn't matter for a practical test (you're measuring, not theorizing).
The waveforms that switched-mode chargers [3] output, and consequently the kind of load they present to the grid, are so irregular that most measurements that are neither calorimetric nor power-factor-corrected will give inaccurate values. The noise present on the output wires will be visible to some extent on the input side.
A normal watt meter works best with transformer-based supplies or resistive loads; accuracy for small switched-mode loads will be anywhere from 'so-so' to 'terrible' depending on the make and model of power meter. Good brands (for instance Fluke) do most calculations right and can deal with CFLs and other phase-shifted loads; bad brands (I won't name them, but they're killing it in the domestic watt meter department) will give wildly inaccurate results.
But even a quality meter like a Fluke will still have trouble with this kind of spiky load, especially if it is small.
It would probably be a good idea to (properly) describe your test rig along with the results. It says:
"I measured the AC input voltage and current with an oscilloscope. The oscilloscope's math functions multiplied the voltage and current at each instant to compute the instantaneous power, and then computed the average power over time. For safety and to avoid vaporizing the oscilloscope I used an isolation transformer. My measurements are fairly close to Apple's[15], which is reassuring. "
But you can't really do it that way and get accurate results: instantaneous power draw with a switching supply changes several hundred thousand times per second and is likely phase-shifted, so a simple multiplication is not going to work.
Accurately measuring (low) power draw from switched-mode consumers is a really tricky problem. It's easy enough to read some numbers from a display, but I can assure you this is not a simple problem to work on if you want to get meaningful results.
Thank you for your detailed comment. I went to a great deal of effort in my article to measure the power consumption accurately, accounting for the power factor, but I left out most of the details since most people don't care. I'm not multiplying the average current and average voltage to compute watts, because that obviously would not work due to the power factor. Instead, I'm multiplying the instantaneous voltage and current 50,000 times a second and summing this up, which gives the actual power, corrected for the power factor. (While the internal current changes tens of thousands of times a second, the line current changes slowly due to the input filtering, so this is plenty of resolution. I'm using a Tektronix TDS5104B 1 GHz oscilloscope, so I have a pretty accurate view of the input voltage and current.)
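The distinction matters, so here is a rough sketch of that calculation (all numbers are made up for illustration; the crude rectifier-input load model is my own assumption, not the author's actual circuit). Real power is the average of instantaneous v*i, while apparent power is Vrms times Irms; their ratio is the power factor:

```python
import numpy as np

# One 50 Hz mains cycle sampled at 50,000 samples per second.
fs = 50_000
t = np.arange(fs // 50) / fs              # 1,000 samples = one 20 ms cycle

v = 325 * np.sin(2 * np.pi * 50 * t)      # ~230 V RMS mains voltage

# Crude model of a rectifier-input load: current flows only near the
# voltage peaks, where the diode bridge conducts (threshold of 300 V
# standing in for the reservoir capacitor voltage).
i = np.where(np.abs(v) > 300, 0.05 * np.sign(v), 0.0)

real_power = np.mean(v * i)               # average of instantaneous v * i
apparent_power = np.sqrt(np.mean(v**2)) * np.sqrt(np.mean(i**2))  # Vrms * Irms
power_factor = real_power / apparent_power

print(real_power, apparent_power, power_factor)
```

With this spiky current, real power comes out well below apparent power (power factor noticeably less than 1), which is exactly why multiplying average voltage by average current, or trusting a naive watt meter, goes wrong.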
The main sources of error in my measurements are the cheap isolation transformer (which causes a bit of line voltage distortion under load), the current sense resistor, the tolerances of the voltage divider resistors, and noise in the measurements. So I wouldn't claim these measurements to be better than 10%.
You can take a look at one of the oscilloscope power graphs at https://picasaweb.google.com/lh/photo/pbrO8BQz38kDo9xU5ejffd...
Yellow is the input voltage, and turquoise is the input current. The non-sinusoidal current shows the non-unity power factor. Note that there's no phase shift, but instead the current flow happens only at the voltage peaks (which is a consequence of the input diode bridge, not of the switching power supply per se.) At the bottom of the image is the instantaneous power, computed from the instantaneous voltage and current.
For the iPad vs iPhone charger measurement above, I didn't have the oscilloscope handy, so I used a Kill-A-Watt, which does in fact take the power factor into account.
Going back to your statement that "bigger charger -> larger dead load". By "dead load", do you mean the power consumption under no load, which I call "vampire power" in the article? This varies widely between chargers, having more to do with the design than the size of the charger. But in any case, this wasted power is pretty much irrelevant under load. For instance, 100 mW is a typical vampire power usage. So if a hypothetical larger charger has twice that wasted power, at a 3 watt load, this is only a 3% difference.
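A back-of-the-envelope version of that 3% figure (the 100 mW and 3 W numbers are from the paragraph above; the larger charger wasting twice as much is hypothetical):

```python
load = 3.0            # watts drawn while actually charging
vampire_small = 0.1   # typical no-load ("vampire") draw: 100 mW
vampire_large = 0.2   # hypothetical larger charger wasting twice as much

extra_fraction = (vampire_large - vampire_small) / load
print(f"{extra_fraction:.1%}")  # about 3%, as stated
```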
The peaks in your scope image clearly show a phase shift, which is kind of logical if you take into account the fact that the main component in a switched-mode supply is a coil.
If you look a little more carefully at your scope trace you'll see the coil's reactance at work in the lower trace: the peak is where the FET in the supply is closed and is drawing real power; the purple trace past the peak and beyond the zero crossing is inverted and drops slowly back to 0 before the next peak hits. If you use the controls on your scope to zoom in on the bottom trace by increasing the vertical sensitivity, you'll get a much better idea of what I'm getting at here. You'll see '0' voltage and yet current still flowing.
You can't correct for power factor by simply increasing the resolution and averaging. The base frequency of your oscilloscope does not enter into the discussion here, it could be 500 Hz for all I care and that would be enough.
Furthermore, the power factor of a switched mode supply changes as a function of the load applied and gets (much) worse if that load is also reactive or capacitive. Under some circumstances it is possible to draw negative power from the wall socket if you do a naive measurement, or you'll see wall socket power decrease as output current increases.
All this is possible because voltage and current are more or less out of phase with each other.
The kill-a-watt will work well with some reactive loads (such as CFLs) as long as they're of the ballast type.
A switched-mode supply presents challenges that can't be met within the cost constraints of a consumer device like that.
'Vampire power' is a new term to me; I'm not familiar with it. Dead load (or simply the losses) is anything that does not end up in your consumer (the live load); I'm not sure if that is an accurate translation of the terms. It normally goes up as a function of the amount of power consumed; the baseline (consumption without any load at all) is probably your 'vampire power'.
Total efficiency is 100 * ((output power)/(input power)) and will in practice be anywhere from 60% to 98%, depending on how well load and supply are matched, and can vary wildly from one power supply to another due to component variations.
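Plugging made-up numbers into that formula (a charger delivering 4.5 W while drawing 5.6 W from the wall is purely illustrative):

```python
def efficiency_pct(output_power, input_power):
    # Total efficiency as defined above: 100 * (output power / input power)
    return 100.0 * output_power / input_power

print(efficiency_pct(4.5, 5.6))  # ~80%, inside the 60-98% range mentioned
```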
Finally, classical power factor correction applies to sinusoidal waveforms; as you've already discovered, the waveforms of switched-mode supplies on both the input and output side are anything but sinusoidal, further complicating an already hairy problem.