>The striking thing is how the two companies peaked at almost precisely the same time and how that moment (end of 2010) was not related to the entry timing of their nominal disruptor [the iPhone].
Could it be that it was Android that hurt Blackberry and Nokia more than the iPhone did? Low end Android phones definitely replaced a lot of Symbian phones in the marketplace. It would certainly explain the timing of the change.
>The idea that Apple is vulnerable to the low end is a relic of an idea.
I'm not seeing how this follows from the data and analysis; it seems a rather bold claim. There is a real chance of smartphones getting commoditized, and it has already happened to a certain extent in China and India, where local OEMs are calling the shots.
The reason it's not happening in the US to the same extent is that the carriers hide the true cost of flagship phones in two-year data contracts, unlike, say, PCs. However, the contract-free prepaid market is growing at a record pace [1], and we may see a real change in the marketplace if the trend continues. Is the upgrade to an S4 or 5s from an S3 or 4S worth $500?
>Could it be that it was Android that hurt Blackberry and Nokia more than the iPhone did? Low end Android phones definitely replaced a lot of Symbian phones in the marketplace. It would certainly explain the timing of the change.
The point isn't whether it was Android or iPhone (for the record, the Droid campaign started fall 2009). The notion of getting disrupted by higher-cost phones--not iPhone vs. Symbian, but smartphone vs. flip phone--was so unthinkable and invisible that for three years the two companies under discussion here made do selling to emerging markets. It's essentially a tale of how the market and leadership at these companies do not see disruptions from above coming, and if they do see them coming, they are in a paradoxical state of needing to reverse course on a (thus far) successful strategy.
>Not seeing how this follows from the data... There is a real chance of smartphones getting commoditized
Nobody is arguing that smart phones haven't been commoditized. The question is, how viable is a business that does not serve the low end? According to critics who are not keen on the nuances between these different products, a high-end play is not viable at all. But this is ignorant of the history of the Mac and iPod. It's also ignorant of the fate of PC-era commodity players.
By the way, I love the last sentence of Horace's piece here: "That’s not likely to be the case for those who found themselves in competition with the Mac in its pocketable re-incarnation."--he's hinting at the possibility that these scary "commodity players," even if they see success, set themselves up poorly for the next round of disruption. Apple's whole-widget approach uniquely set it up to introduce disruptive form factors, and might do so again in the future.
>Not seeing how this follows from the data and analysis; it seems a rather bold claim. There is a real chance of smartphones getting commoditized and it has already happened to a certain extent in China and India, where local OEMs are calling the shots.
I spend time in both countries you mention and a couple more in Asia. Local OEMs are calling the shots because the market as a whole is growing. But the smarter OEMs realize that they have to make phones that are less shoddy; that is why you see brands like Oppo. What I have seen in the last couple of years is that as people get busier, they don't have the time or patience to handle rubbish phones. They will not mind spending more money to get an iPhone or a Mac. Many analysts in the West try to say something like we Asians buy these gadgets as status symbols. It might be true to a certain extent, but more people are getting them because they are tired of cheap and crappy devices.
Couldn't agree more with both points. Of course Android ate Nokia's pie (cheap phones for everyone). I'm rather unsure about Blackberry; I think that's more because Android and iOS both now have excellent infrastructure integration (MS Exchange etc.) for corporate support.
You're also right about how cheap OEM Android phones are taking over the market. iOS is losing marketshare pretty much everywhere phones are not heavily subsidised or the comparative income is low, namely everywhere but the US and UK (and a couple of others). Unless Apple drops its iOS prices to more reasonable levels (they are really taking the p*ss), it's going to be the same story as the one that unfolded with Mac/Windows.
Whilst I agree that Nokia and Blackberry were disrupted by Android, not the iPhone, Apple is gaining customers everywhere. 'Marketshare' is a flawed concept because it assumes a unified market of undifferentiated customers. Apple is simply not in the market of commodity devices; it never had any of that market to lose. Claiming that Apple is losing those customers is clearly incorrect.
If the story does unfold as it did with Mac/Windows, Apple will again be the clear winner, just as they are in the PC market.
Apple is not the 'clear winner' in the PC market; they are a small segment of it.
Corporate and government environments are an Apple-free zone.
Lower-priced competitors have repeatedly caught up with and outperformed higher-priced vertical companies that make the entire stack. Android is well positioned to do exactly this to Apple over the next decade; phones and cheaper devices seem to see this happen repeatedly. Apple's phone division should see a warning in Nokia and RIM, not a comfort.
SGI, Sun, Commodore, Atari, DEC and many others are all defunct. All full stack companies.
Apple and IBM are left. IBM mainframes are extremely difficult to remove. Apple are easier but may survive with enough iOS lock in.
> Apple is not the 'clear winner' in the PC market, they are a small segment of it.
Yet Apple has the most profitable segment of the PC market; they make more profit on PCs than any of their competitors. It turns out Mac-free corp/gov zones are not very profitable.
Apple will keep dominating the high end of even developing markets, their profits will keep rising and they will continue not to be disrupted by Android, who seem to have trouble making money (sans Samsung). I would worry more about HTC going under than Apple.
Corporate and government is profitable. It's just that the profits are shared by a plethora of companies. Apple failed long ago to get a foothold and gave up.
Apple don't dominate the high end of phones by market share any more. They do make more money out of it than anyone else at the moment. Good for shareholders but consumers are largely indifferent to profitability.
If HTC and Samsung went bankrupt Android would still be OK. They'd be replaced.
Sorry - this has nothing to do with the profits being shared by a plethora of companies. Apple receives almost half of the profits in the PC market. The rest is divided between your plethora. Apple chose not to serve the unprofitable part of the market.
Apple have tried to get into corporate environments. They tried to sell Macs to everyone.
Corporate computing is bigger than just buying PCs. Oracle, IBM, service companies, MS, Intel, CA, Cisco and all these companies make masses of money in corporate environments.
Yes, but how are the PC makers doing who serve the corps: say Dell, HP, Lenovo? Unless they make a dash for services, and even that is iffy (HP should have had some advantage here), they are all pretty much toast.
You're right that low price is a great competitive edge, but one can name many now-defunct computer makers that were anything but full stack -- Compaq, Gateway, eMachines, Packard Bell, Siemens (the PC maker), Olivetti. As a matter of fact, one can argue that they went under because their products didn't stand out from the competition.
Full stack can be seen as a great advantage. Otherwise Microsoft wouldn't make Surface or acquire Nokia. It all depends on specific business cases.
Non-full stack companies come and go. That's part of the strength. Most people are not fussed if they buy an HP, Sony, Compaq, ASUS, Acer, MSI or whatever for a PC. Android buyers can happily switch between a dozen manufacturers as well.
The point is for general purpose computing full stack was common but is now exceptional. If it was a great advantage surely there would be more full stack companies around.
You seemed to imply that SGI, Sun, etc. are gone because they were full stack. I just offered counterexamples of non-full-stack companies that are also gone. There is simply no evidence that full stack is inherently bad, or vice versa. It's all specific to individual companies.
> Non-full stack companies come and go. That's part of the strength
Or more precisely, that's a benefit of hardware commodity. From the business perspective however, it isn't fun to be a loser, full stack or not.
There is evidence that full stack is hard to keep going. In general computing devices there are only 2 companies left doing it that matter. IBM & Apple.
The non-full stack companies that are gone have been replaced.
The full stack ones have been replaced by combinations of non full stack companies.
New non-full stack companies come in and get big. What new full stack company that matters has been started in the past decade?
You are contradicting yourself. You say that full stack companies are hard to keep going, and yet you say that non-full-stack companies come and go even more easily.
By your logic, non-full stack companies are even harder to keep going.
It's not a contradiction. Both are hard to keep going. Full stack ones are even harder. Hence all but 2 are gone.
The ecosystems for more open systems are also really durable.
Personally, having watched for decades, the most surprising thing to me is that MS has kept the desktop rather than losing it to Linux.
Would you say that MS has 'won' the server market because they make more money than any Linux company, even though Linux has the largest market share of internet servers?
MS keeping the desktop rather than losing it to Linux contradicts your thesis. You can dismiss it as a 'surprise' or acknowledge that your reasoning doesn't accord with reality.
We're talking about people who sell PCs, not component vendors. If we include Windows, we should also include Intel and everyone else in the supply chain. But since this whole topic is about manufacturers who ship complete phones to purchasers, the analogy in the PC market is with companies that ship complete PCs.
>Suppose you’re on a 2-lane (each way) highway and one lane is closed up ahead due to construction. Now the flow rate of your lane is cut in half (or there are twice as many cars in line in front of you, depending on how you want to look at it).
What I observe is that the speed is reduced to one twentieth of the speed, not just half. This is because people are merging very slowly while jostling resulting in needless braking. If everyone could agree upon a proper zipper merge, the speed for everyone would go up many times.
Or imagine a traffic signal at the merge point that only lets one lane go through for a minute each. The overall flow rate would be much higher than what it is without such a signal.
Discounting this based on theoretical flow rate (as if removing one lane reduces real traffic flow by only 50%) shows that the author totally ignores real traffic scenarios, completely at odds with the title of the article.
I think you may be thinking of this as one mechanism instead of two separate mechanisms.
You are dealing with a queue as well as a through-rate at the merge-point. The through-rate with one lane can still be 1/2 of the through-rate with two lanes, but because you have a queue waiting to reach the merge-point you can end up waiting much longer. More spacious merging will not change this because of the principle stated in the first paragraph of the article.
Increasing the speed at the merge-point will not decrease the queue. It will only decrease the density of the queue but move it back further in traffic. Your time to cross the merge-point will be basically the same.
>Increasing the speed at the merge-point will not decrease the queue. It will only decrease the density of the queue but move it back further in traffic. Your time to cross the merge-point will be basically the same.
I have to disagree with this. If you increase the speed at the merge point, someone newly joining the queue will definitely cross the merge point in less time than they would at a lower merge-point speed.
Sure, but when traffic jams happen you often see not just ONE lane of traffic slowing down at the merge point but ALL lanes of traffic slowing down. On a four lane freeway narrowing to three lanes it's not the case that the two right lanes are stop & go and the two left lanes are 80MPH. EVERYONE slows to 10MPH. Furthermore the notion that you'll get 2000 cars per hour at the merge point irrespective of speed is ludicrous. Once everything slows down people act really douchey and don't let each other merge, etc.
I have witnessed eight lanes of traffic slow from 70MPH to 10MPH over a single poorly designed merge when there was more than enough aggregate free space for the merge. That happened because drivers don't accelerate hard enough on onramps and people don't redistribute themselves prior to shitty merges.
Show me a society that has no traffic jams and I'll show you one that's ready for socialism. Or vice versa.
It's not about velocity; it's about the rate of cars through a point per hour. Say you have a merge-point with 10 lanes, and each lane can support 20 cars per minute. If 200 cars approach the merge-point per minute, the cars can travel effectively at the speed of their choice.
If you close one lane, reducing the capacity of the merge-point to 180 cars per minute while 200 cars are approaching, a queue will build. The speed with which the cars pass the merge-point is not relevant, because the rate of cars per lane per minute will stay basically at 20 cars/lane/min.
As to your point about all lanes slowing down: cars will always redistribute, as you can imagine. People tend to merge left as there's an additional traffic stream merging on their right. The writer made points about the capacity of the merge-point vs. the cars approaching, not individual lanes and speeds.
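The capacity arithmetic in the lane-closure example above can be sketched as follows (all numbers are the hypothetical ones from the comment, not measured values):

```python
PER_LANE_CAPACITY = 20   # cars per minute each lane can pass (assumed)
ARRIVAL_RATE = 200       # cars per minute approaching the merge-point (assumed)

def queue_growth(open_lanes):
    """Cars added to the queue per minute (zero or negative means free flow)."""
    capacity = open_lanes * PER_LANE_CAPACITY
    return ARRIVAL_RATE - capacity

print(queue_growth(10))  # 0  -> demand matches capacity, traffic flows
print(queue_growth(9))   # 20 -> queue grows by 20 cars every minute
```

The key point is that the queue's growth depends only on the gap between arrival rate and merge capacity, not on how fast individual cars cross the merge itself.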
In the context of the article, your first point is the key question. The author argues that the flow rate is relatively constant even with varying speed. I would tend to agree that it would be fairly constant, but with extremely efficient merging I could see the flow rate increasing by a few percent. Theoretically, if cars maintain a 2-second delay from the previous car, the difference in flow rate should come from the length of time it takes the actual length of the car to pass through the merge point.
So if a car is travelling at 0.5 car lengths per second, the duration per car should be 4 seconds, whereas if it was travelling at 10 car lengths per second, the duration per car should be only 2.1 seconds.
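The per-car numbers above follow from a fixed time gap plus a speed-dependent transit time; a minimal sketch, assuming the 2-second following gap from the comment:

```python
GAP_SECONDS = 2.0  # assumed following gap between cars, per the comment

def seconds_per_car(speed_carlengths_per_s):
    # Time between successive cars crossing the merge point:
    # the fixed 2-second gap plus the time one car length takes to pass.
    return GAP_SECONDS + 1.0 / speed_carlengths_per_s

print(seconds_per_car(0.5))  # 4.0 seconds per car
print(seconds_per_car(10))   # 2.1 seconds per car
```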
Also, two of your links lead to the ad that wrongly claims a bigger screen based on diagonal length alone.
However, when Google wrongfully claims a Chromebox with a Celeron is running an Intel Core(TM) processor, no one cares or makes a big blog post on HN about it.
The advert was apparently done by a Microsoft affiliated marketing firm, so it made sense (to me) to include it.
In regards to your second point, Google and others often make mistakes in their adverts. Microsoft creates deliberate lies and seeks to create confusion and falsehood. Much like your post, to be honest. Have you considered a job in Microsoft's marketing department? I believe they are desperately seeking talent.
Does "affiliated marketing firm" mean MSFT made the claim, or is this guilt by association?
The Lumia 920 camera issue was wholly the responsibility of Nokia, which is why they issued the statement. It doesn't seem unusual that partners on a device might share marketing firms, so this link as an example is a bit of a stretch.
>The advert was apparently done by a Microsoft affiliated marketing firm, so it made sense (to me) to include it.
Reference?
>In regards to your second point, Google and others often make mistakes in their adverts.
So when Google claims the Chromebox has a Core processor, it's an unintentional mistake, but when Microsoft says a 10.1" screen is bigger than a 9.7" one, it's a deliberate lie.
>Much like your post, to be honest. Have you considered a job on Microsoft's marketing department? I believe they are desperately seeking talent.
The stats are sales over the last 3 months, not install base. However, Statcounter tries to measure usage, not unique devices. Netmarketshare tries to measure individual browsers/devices.
For example, if you browse 10 sites in a day on an Android phone, but your neighbor browses 990 sites a day on her iPhone, the Statcounter numbers with just you both being counted will be 1% Android and 99% iPhone even though the install base is 50% each.
This is the reason that Statcounter and Netmarketshare browser share numbers are so different from each other, with Statcounter showing Chrome in the lead and Netmarketshare showing IE in the lead.
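The difference between usage-weighted share (Statcounter-style) and per-device share (Netmarketshare-style) can be illustrated with the hypothetical two-user example from the comment above:

```python
# Hypothetical pageview counts for two users, one per platform.
pageviews = {"Android": 10, "iPhone": 990}

# Usage-weighted share: each pageview counts equally.
total_views = sum(pageviews.values())
usage_share = {k: 100 * v / total_views for k, v in pageviews.items()}

# Per-device share: each device counts equally, regardless of activity.
device_share = {k: 100 / len(pageviews) for k in pageviews}

print(usage_share)   # {'Android': 1.0, 'iPhone': 99.0}
print(device_share)  # {'Android': 50.0, 'iPhone': 50.0}
```

The same underlying population produces wildly different "share" numbers depending on the weighting, which is exactly why the two trackers disagree.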
I would have expected that putting this through a machine learning algorithm (or one of the face recognition ones) trained on a very large dataset might improve the odds.
I skipped over the BRICS part, but it doesn't change the equation. Any meaningful way you slice it, Nokia was "way behind", even if they still had a legacy in a few markets that primarily used dumb phones (or "feature phones").
I couldn't find BRICS-specific numbers or I would post them too, but I'd like to see any reputable source showing they had anything close to 60% marketshare in a non-dumbphone category.
(some other numbers I had were also from Kantar, unfortunately their site is TERRIBLE)
Note that in 2012 Brazil STILL HAD 22% of new Smartphones being Symbian.
Also, several other countries in 2011 still had some good Symbian numbers (like first or second place).
Suddenly stopping selling Symbian to switch to WP7 made no sense, even worse when the claimed reason was to be different (when other manufacturers ALSO made WP7 phones).
I just wish I could find the 2010 and 2011 numbers for all the BRICS again, but today that is seemingly very hard :(
I think what I take issue with is referring to any device running a Symbian OS as a "smart phone". Many Symbian phones were just "feature phones", and this is probably (though I can't prove it) especially true in BRICS countries. It's not really a fair comparison. We've got little reason to believe Nokia's fate would have been any different in those markets as they developed than it was anywhere else.
>I really don't know, but I will say it seems crazy that Nokia didn't have at least a small team dedicated to making an Android OS 'port' for their devices. It seems like even if they thought there was a 1% chance of Windows mobile not working out for them, the hedge would have been worth it.
Elop did have such a team working on an Android port.
Kantar Worldpanel - July 2013 Windows Phone Share by country
Germany: 8.8% (+2.6% YoY)
GB: 9.2% (+5% YoY)
France: 11% (+7.4% YoY)
Italy: 7.8% (-0.5% YoY)
Spain: 1.8% (+0.1% YoY)
USA: 3.5% (+0.5% YoY)
China: 2.4% (-2.2% YoY)
Australia: 7.4% (+2.4% YoY)
Mexico: 12.5% (+10.5% YoY)
>The biggest and most impressive individual gains were seen in Mexico (12.5%) and France (11%) where double digit Windows Phone sales figures were finally reached.
I really doubt an ethernet connection can push a full HD video frame in 1ms. The <1ms is for ping, which uses a really small network packet. Pushing 1080p HD video is a totally different matter.
$ ping -s 1450 192.168.11.10
PING 192.168.11.10 (192.168.11.10): 1450 data bytes
1458 bytes from 192.168.11.10: icmp_seq=0 ttl=255 time=0.646 ms
1458 bytes from 192.168.11.10: icmp_seq=1 ttl=255 time=0.478 ms
1458 bytes from 192.168.11.10: icmp_seq=2 ttl=255 time=0.469 ms
It would be insane to do this, but you could shove ATSC between boxes over ethernet by shoving each 188-byte MPEG transport stream packet in an ethernet frame and skipping all the layer 3 stuff. Or UDP it, if you want to route it. They probably like the idea of a difficult to route protocol keeping data on one ethernet segment.
My HDhomerun ATSC receiver certainly has no problem shoving a couple megabits of high def video over ethernet. Nor does my mythtv setup, or even just plain old NFS shares to watch videos.
For raw 720p RGBA you'd send:
1280 * 720 * 32 = 29,491,200 bits per frame, but obviously you're not going to do that. Suddenly you're not only sending data but also encoding/decoding it.
Indeed.. assuming 60fps, that works out to around 1.8 Gbps. Well within the range of HDMI, but well out of the range of your off-the-shelf router on ethernet.
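The raw-bandwidth arithmetic here is just pixels × bits × frame rate; a quick sketch reproducing the 720p RGBA figure from the comments above:

```python
def raw_bitrate_gbps(width, height, bits_per_pixel, fps):
    """Uncompressed video bitrate in gigabits per second."""
    return width * height * bits_per_pixel * fps / 1e9

# 720p RGBA (32 bits/pixel) at 60 fps, as discussed above
print(raw_bitrate_gbps(1280, 720, 32, 60))  # ≈ 1.77 Gbps
```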
..though 802.11ad is supposed to be out early 2014, and that maxes out around 7 gig, and has cooperation from the HDMI consortium to use it for streaming. I wonder if some kind of ultra-high-speed wireless dongle is in Steambox's future..
It doesn't matter, that's the point. Your MTU is almost certainly 1500 bytes. You are sending 1500 byte packets at the most. That does not cause latency. If you want to argue that we're incapable of encoding or decoding video with acceptable latency go right ahead, but doing it in response to me correcting a misconception about network latency doesn't make much sense.
Having worked on the particular issue of streaming real-time video over wireless, I can say that the main issue is not link latency.
The main lag comes from encoding/decoding. If you do it naively you encode frame-by-frame (encoding slices is more difficult), and the encoder does not output only I-frames: you get partial frames that depend on both previous and future frames. Also, the decoder does not always output frames in order. So you have to expect something around ~10 frames of latency, maybe less if you optimize everything well enough. That still easily means more than 100ms of lag.
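To see why ~10 frames of codec delay translates to over 100ms, each frame of delay costs one frame interval; a minimal sketch:

```python
def pipeline_latency_ms(frames_of_delay, fps):
    # Each frame of encoder/decoder delay costs one frame interval.
    return frames_of_delay / fps * 1000.0

print(pipeline_latency_ms(10, 60))  # ≈ 167 ms at 60 fps
print(pipeline_latency_ms(10, 30))  # ≈ 333 ms at 30 fps
```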
I would think that the people at Valve would be able to find a solution if anyone could... or are you claiming that this is a hard nut to crack and SteamOS's streaming solution won't end up being that great for high-resolution TVs?
Oh, I'm sure if they put their minds to it they can make some improvements and clever optimizations. The thing is, it becomes exponentially more difficult the lower the latency you want to achieve, obviously. <300ms? Easy. <100ms? Manageable. <50ms? Hey, very good! <10ms? Uh, I want to see it with my own eyes.
Also, you have to remember that Steam won't control the encoding end of the pipeline (and if the "steambox" is third-party hardware with SteamOS installed, no control over the hardware at all). Which means that in the end the observed latency will depend a lot on the hardware and drivers of the desktop PC, and there isn't much Valve can do about that.
So in the end I'm sure it'll be more than fine to play Civilization or Torchlight, MMOs and most RPGs but maybe not Counter Strike or Quake III.
Why are people acting like this is impossible even after it has been done? Remember OnLive? Notice how the latency was essentially the same as your network latency to their servers, and there was no problem with encoding adding any (noticeable) additional latency?
What about scrapping the conversion to streaming video entirely and replacing it with a networked graphics protocol that allows one PC to draw on another's graphics natively?
Well, that would simplify things greatly of course, but then it means a big hit on the bandwidth.
I mean, a 720p 60Hz stream in 4:2:0 (12 bits per pixel) still amounts to 663 Mbit/s. You won't realistically get that out of a gigabit link (at least not over IP). Of course you could use a lightweight compression algorithm, but you'd have to divide this bandwidth by at least 5 to make it manageable for the average home network, I'd say (I have absolutely nothing to back that last number, but 100 Mbit/s doesn't look too scary...). And that's only for 720p, remember.
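The 663 Mbit/s figure follows from the same pixels × bits × frame-rate arithmetic, with 4:2:0 subsampling cutting the bits per pixel to 12; a quick check (the 5x compression factor is the comment's own guess, not a measured number):

```python
def raw_bitrate_mbps(width, height, bits_per_pixel, fps):
    """Uncompressed video bitrate in megabits per second."""
    return width * height * bits_per_pixel * fps / 1e6

raw = raw_bitrate_mbps(1280, 720, 12, 60)
print(raw)      # ≈ 663.5 Mbit/s raw 4:2:0 at 720p60
print(raw / 5)  # ≈ 133 Mbit/s after the hypothetical 5x light compression
```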
I think if you plan to stream HD video over the network you have to encode and be clever about it.
It seems to me, intuitively, that the only ways to do that are:
1. Essentially equivalent to streaming, or
2. Essentially equivalent to normal CPU->GPU communication but over the network (and, thus, needing the bandwidth that local CPU->GPU communication has if you want to avoid slowing things down -- GigE wouldn't seem to be enough, much less WiFi, even if you consider only bandwidth and not latency.)
[1] http://www.fiercewireless.com/story/npd-one-third-us-smartph...