Sounds very interesting! However, I am very curious whether this survives simple workarounds such as taking a screenshot of the picture or reducing the image quality. Since the original and the altered images are visually identical to the human eye, from the limited knowledge I have on the topic I infer that the changes made by the algorithm sit mostly at the high-frequency end of the spectrum.
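A quick way to test that hunch would be to re-encode the picture at a lower JPEG quality (or blur it slightly) and see whether the classifier's prediction flips back. A rough Python sketch, assuming Pillow and NumPy are installed; the file name and the classify() call are placeholders for whatever model you have at hand, not anything from the article:

    # Does lossy re-encoding strip the (presumably high-frequency) perturbation?
    import io
    import numpy as np
    from PIL import Image, ImageFilter

    img = Image.open("perturbed.png").convert("RGB")   # hypothetical input file

    # Workaround 1: re-encode at low JPEG quality, which discards high frequencies.
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=30)
    recompressed = Image.open(io.BytesIO(buf.getvalue())).convert("RGB")

    # Workaround 2: a mild blur, i.e. an explicit low-pass filter.
    blurred = img.filter(ImageFilter.GaussianBlur(radius=1))

    # How much did the pixels actually change?
    diff = np.abs(np.asarray(img, dtype=np.int16) -
                  np.asarray(recompressed, dtype=np.int16))
    print("mean absolute pixel change after re-encoding:", diff.mean())

    # for candidate in (img, recompressed, blurred):
    #     print(classify(candidate))   # classify() is hypothetical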
Tanenbaum said that Torvalds would not get a good grade in his course, epic.
"I still maintain the point that designing a monolithic kernel in 1991 is
a fundamental error. Be thankful you are not my student. You would not
get a high grade for such a design :-)"
Perfect display of how universities are good at judging people on how well they know how to "play the game" (and usually that involves conforming to whatever frame of mind your professor thinks is right).
Well, a microkernel is harder to implement than a monolithic one and is, from a theoretical standpoint, a better design; IMO it deserves a higher grade in an operating system design course. Let's not mythologize the 1992 Linux kernel, which probably wasn't _that_ good.
A microkernel is much easier to implement than a monolithic one. Been there, done that. The hard part is to get the messaging done with zero overhead (no copying), but using paging for the mbufs you can get quite far with that.
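To make the "no copying" point concrete, the trick is that the sender's buffer pages are mapped into the receiver's address space instead of being copied through the kernel. A loose userland analogy in Python (purely illustrative, names made up; real microkernels do this with page-table manipulation rather than POSIX shared memory):

    # Sketch: hand over a "message" by sharing its pages instead of copying them.
    from multiprocessing import shared_memory

    # "Sender" allocates an mbuf-sized region and writes the payload in place.
    mbuf = shared_memory.SharedMemory(create=True, size=4096, name="mbuf0")
    mbuf.buf[:5] = b"hello"

    # "Receiver" attaches to the same pages by name; nothing is copied,
    # both sides see the same physical memory.
    view = shared_memory.SharedMemory(name="mbuf0")
    print(bytes(view.buf[:5]))          # b'hello'

    view.close()
    mbuf.close()
    mbuf.unlink()                       # release the region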
A microkernel is certainly easier to build than a monolithic kernel, because it does much less. You want to compare feature-equivalent systems: a microkernel plus some userland drivers etc. vs. a monolithic kernel.
The monolithic approach is easier because you just call a function in another component or read its data structures. In a microkernel, you need to design interfaces and protocols first.
No, the monolithic approach is definitely not easier, for a simple reason: there is hardly any kernel-level debugging going on in a microkernel-based system because the kernel is small. Everything else is userland, so you can test your new driver as just another user process. That takes the sting out of a very large chunk of frustrating debugging and gives you access to all the userland tools to home in on the bug.
And even better: a crash of your driver does not take down the whole system. So you can just keep on working.
Designing protocols and interfaces is roughly the same amount of work in either case; after all, you could settle on a very simple set of messages for most interface problems. With open, close, fcntl, read and write you would be able to handle the majority of interface tasks.
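For what it's worth, such a minimal message set fits in a few lines. A hypothetical sketch of a fixed-size request header (field names and sizes are made up for illustration, not taken from any real kernel):

    # A minimal driver-protocol message: one opcode covering open/close/read/
    # write/fcntl, plus the handful of fields those operations need.
    import enum
    import struct

    class Op(enum.IntEnum):
        OPEN = 1
        CLOSE = 2
        READ = 3
        WRITE = 4
        FCNTL = 5

    # opcode, handle, offset, length -> fixed 24-byte header, little-endian
    HEADER = struct.Struct("<IIqq")

    def pack_request(op, handle=0, offset=0, length=0, payload=b""):
        return HEADER.pack(op, handle, offset, length) + payload

    def unpack_request(message):
        op, handle, offset, length = HEADER.unpack_from(message)
        return Op(op), handle, offset, length, message[HEADER.size:]

    # Example: ask a (hypothetical) disk driver for 512 bytes at offset 4096.
    req = pack_request(Op.READ, handle=3, offset=4096, length=512)
    print(unpack_request(req)[:4])      # (<Op.READ: 3>, 3, 4096, 512)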
If, as Tanenbaum did, you look at it from a computer science perspective, Linux wasn't good, indeed. It was 'just' a Unix clone.
From a software engineering perspective, it left a lot to be desired, too. It basically only supported what was on Linus's desk.
One could say the things Linux had going for it were a) a GPL license, b) being small, c) arriving at just the right time, and d) Linus being willing to accept external input.
Whether a) is an advantage of course depends on personal preference. b) helped with d) because it made it feasible for many to download the thing; c) was luck; d) IMO was the most influential factor.
IPC overhead was primarily an issue with the Mach kernel in particular (due to its elaborate checks on message ports), and it later ended up unfairly stigmatizing microkernel designs in general.
Contemporary microkernels like L4 are much, much faster.
In theory you can postulate that the performance impact is worth it. In practice there is much more to it. That is the difference between computer science and software engineering.
Peter Thiel's advice about monopoly is hugely similar to the point Porter makes in his first book, Competitive Strategy. In the first chapter of that book, Porter clearly states that a business located in an industry with strong competitive forces will have lower margins. He even uses the airline industry example in his HBR article from the '80s (actually, I'm not sure if it is in his first article or in the one from 2008 in which he revisits the topic 20 years later; either way, it is an old idea in the business world).
It is, in a way, a very old idea in the business world.
The value of Thiel's way of stating it (very, very boldly and controversially) is that it makes people in the startup world actually listen to it. And that's a very good thing.
The only problem I see, on the other hand, is the unnecessary bashing of economists. Economists, normally, analyse a market from the point of view of the "public interest". With this perspective in mind, self-perpetuating monopolies are almost always a bad thing (unless you believe in such things as centralized planning and dictatorships).
> Economists, normally, analyse a market from the point of view of the "public interest". With this perspective in mind, self-perpetuating monopolies are almost always a bad thing (unless you believe in such things as centralized planning and dictatorships).
Okay, but the more famous economists are usually at the more famous universities, which tend to have large endowments and tend to want to get high returns, and often do this by investing in VCs who look for monopolies that can return money enough to pay for the professors of economics!
Actually, whatever gets taught or suggested in the classrooms, it's both common and easy enough for the high-end universities to look for and like students who are wealthy or relatively likely to become wealthy and make significant donations back to the university -- no joke. Or, universities are not against monopolies everywhere on campus!
>>Economists, normally, analyse a market from the point of view of the "public interest". With this perspective in mind, self-perpetuating monopolies are almost always a bad thing (unless you believe in such things as centralized planning and dictatorships).
Yes. The full saying is, "competition is good... for the customer." It almost always results in reduced prices and better service. It's obviously never good for businesses, since they have to work harder to win and retain customers.
I would LOVE it if SparkFun did some kind of deal with RadioShack. I mean, SparkFun is what RadioShack should be right now, and they definitely could use some physical presence... ah, dreams... going through an online catalog of hobbyist stuff is not half as fun as discovering gadgets while getting to touch, play with and feel them.
While I see your point, I cannot agree with you. A business is not separate from the people who form it. Yes, good businesses are profitable. Yes, if it weren't for the profits, there would be no businesses.
But from my point of view, when you hire someone to work for you, you have some moral (if not legal) obligations to that person. You could say that laying them off was "good, financially". Maybe it was even the only thing to do. As I don't know the specifics, I don't blame Macworld for that.
Still, dealing correctly with your employees is part of your business's COSTS. Making these costs disappear is not "maximizing profit"; that is turning your back on a cost you have to pay in order for your business to work.
If you do not pay it, you will face the consequences. Dealing incorrectly with people will not hurt you in a way you can represent in your books, but it will definitely hurt you.
Good businessmen are wise if they treat their employees well. It need not be for a higher sense of morality (although it should be), as there are at least a couple of good self-interested reasons to do so.
So you think that the company should sacrifice in a way that hurts them but doesn't benefit their employees. Sounds like a highly rational thing to do.
If they can't afford to keep their employees, the thing to do would have been to lay them off between events, which would have meant before this last event, not a few weeks after it. If they could afford to keep them for a few weeks after this event, it would perhaps have been better for both parties to wait until the next big event and then lay them off, which would have looked exactly the same as this.
Consider that the past few weeks may have been the extra time MacWorld was gracious enough to give them, while at the same time helping themselves so that they may be doing well enough to provide some of their former employees with freelance work.
What moral obligation do you feel was neglected here? Surely hiring someone doesn't create a moral obligation to never lay them off if you can't afford to pay them anymore.
As I said, I don't have the specifics so I can't judge the Macworld case.
Assuming it was an unexpected layoff right after a very demanding day of work, I can see some wrong things there.
First, you should inform people as soon as you have made up your mind that you are going to fire them. Letting them work (a lot) and letting them go right after is wrong. It is using them. Explaining the reasons for the layoff is the moral thing to do. Hiding it with the obvious intention of exploiting people's work without hindering their motivation is wrong (to me, at least; morality is usually a pretty subjective field).
A simple way to assess that you're up to no good is to see how the employees treat and refer to you after you let them go. And in this case, the Twitter reaction does not feel very amicable to me.
Again, I don't know what really happened there at Macworld. But if someone is laid off and ends up feeling mistreated, maybe, just maybe, we should give them some credit and not automatically assume that the business is right and they are just chronic complainers.
> First, you should inform people as soon as you have made up your mind that you are going to fire them. Letting them work (a lot) and letting them go right after is wrong. It is using them.
You're on very shaky moral ground, and I don't find it persuasive. I don't understand the moral obligation to inform as soon as the decision is made. Employment is a 2 way street. By this logic, employees are "using" their employers if they continue working while hunting for a new job. I don't buy that moral logic.
Moreover, morale is important for both employer and employee. You don't want to keep a disgruntled employee around to sap morale. A significant blow to morale can sink the entire enterprise, multiplying the number of layoffs.
> A simple way to assess that you're up to no good is to see how the employees treat and refer to you after you let them go.
Entirely too simple. An entirely legitimate difference of opinion can result in a disgruntled employee. Employees can have wildly inaccurate estimates of their own value and productivity. Losing your job almost always feels unfair. Even the most amicable of splits can still result in latent bitterness.
So if a former employee does go on a rampage, it's unreasonable to conclude that his employer was "up to no good".
It is really cool to see how history repeats itself in such a predictable fashion. According to the International Watch Magazine, at first wristwatches were seen as a passing fad, with some men even saying that "they would sooner wear a skirt than a wristwatch" [1].
Of course, at the beginning of the miniaturization process, wristwatches were not as good as pocket watches. But they were less clumsy, easier to access, and kept both of your hands free. Most importantly, they were worn visibly all the time, a pretty neat characteristic for a fashion item. Wouldn't it be better if that iPhone 6 you are planning to buy were kept on constant display for the eternal envy and adoration of your peers, instead of hidden in your pocket 80% of the time?
It is hard to predict the future. Instead, I like to talk about scenarios. And I cannot rule out a scenario where a smartwatch is your primary "identification" device and smartphones, tablets and laptops are only big screens with greater computing power, to be accessed and unlocked through your smartwatch. Is that too far-fetched?
From the Internet.org website itself comes a very timely statement: "The future of the world economy is a knowledge economy - the Internet, its backbone". Should this backbone be in the hands of, or at least controlled by, a single company (or cartel, or association)?
This is a very good strategic move to reach the so-called "other 3 billion", and the mixture of tech companies and public services in developing countries has been proven effective (see M-Pesa, an initiative that revolutionized the financial services ecosystem in Kenya). But there are very deep philosophical implications when private companies start taking on the role of government on the internet. Nowadays, many public servants do not understand the implications of the "knowledge economy" referenced on the internet.org website, especially when it comes to the forces that lead to huge market concentration and even de facto monopolies in these industries (see Facebook, Google, Amazon, Microsoft, ...). The irony of this specific initiative is that a company that fights alongside the EFF for net neutrality against the telcos is trying to do the same thing, just at a different layer, in developing countries.
In reality, this raises a very important question for the future of the internet: what is the right amount of private interference in services that were previously considered public ones (identification being the most prominent of these nowadays)? Are we heading toward a new Bell-style monopoly, but at a global scale? And what will be the outcome of the very important fight between internet and infrastructure companies that is going on?
Regarding these doubts, I really and firmly believe that the most interesting phenomena will not happen in the US, but in the most unsuspected countries out there (again, see M-Pesa). We just have to wait and see.