Hacker News | pankajkumar229's comments

I find this ironic.


There is no reason for users to be maintained in the kernel.


Can you elaborate on that?


Why not just increase sales tax? Or make sales tax vary sector by sector? Since they are almost a monopoly, it would mostly come out of their own pocket.


Sales tax is a tax to the consumer. I fail to see how that would affect Netflix at all.


All taxes are a tax to the consumer. Sales taxes are just less avoidable.


Basic econ theory shows that tax burdens are shared by consumer and producer, regardless of who legally pays.
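As a toy illustration (a linear supply/demand model with made-up elasticities, nothing Netflix-specific), the split is governed by how price-sensitive each side is:

```python
# Toy linear model: demand Qd = a - b*Pc, supply Qs = c + d*Pp,
# where a per-unit tax t forces Pc (what consumers pay) = Pp + t.
# Standard incidence result: consumers bear d/(b+d) of t, producers b/(b+d).

def incidence(b, d, t):
    """Return (consumer_burden, producer_burden) per unit for a per-unit tax t.

    b = demand slope (buyers' price sensitivity)
    d = supply slope (sellers' price sensitivity)
    """
    consumer_burden = d / (b + d) * t
    producer_burden = b / (b + d) * t
    return consumer_burden, producer_burden

# If buyers are three times as price-sensitive as sellers (b=3, d=1),
# the seller eats 3/4 of a $4-per-unit tax:
print(incidence(3, 1, 4.0))  # → (1.0, 3.0)
```

The more easily consumers can walk away (large b) relative to how easily producers can cut output (small d), the more of the tax the producer ends up eating, regardless of who writes the check.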


Is there any difference?

They would have to lower their prices to keep the price to the end user the same (and so keep the same number of users), and therefore make less profit.

Is it because in the States the advertised price stays the same and local (state) sales taxes are added on top at checkout that it counts as a tax on the consumer?


Companies here won’t lower prices, they’ll just hide the fact that there’s a tax until checkout. At least with income tax it directly comes from their books and forces them to either take the hit or be the ‘bad guys’ and raise prices.


The consumer just cares about the final price. They won't distinguish a price increase caused by taxes from one caused by corporate decisions. So assuming Netflix set their current prices optimally, they will have to keep them and take the hit.


This is not really true. Sales tax is almost always a tack-on. Netflix can very easily advertise the same prices, and when the sales tax shows up on your credit card statement they can just throw up their hands and say "the government makes us do this!". At least, that's how I'd see it: I'm paying Netflix X, and I'm paying the government Y. In no way does Netflix "eat" this cost.

You could try to pass legislation forcing all companies to advertise only after-tax prices, but that's a doozy to enforce.

Sales tax is always regressive. The only ones paying it are consumers.


> doozy to enforce

We have no problem enforcing the after-tax rule in VAT jurisdictions.


All tax gets paid by the consumer in the end.


Sales tax is regressive.


Sales taxes in most places in the US are segmented. My grocery tax is different from my gas tax, which is different from my alcohol tax, which is different from my iPhone tax.


We have an education product that also supports C++ using jupyter/nbgrader/clang: https://datacabinet.info/dc-docs/sbs/projects.html#step-3-c. We had to make some clang changes to get it working.


Would this not just be a container?


Linux containers are 'just' processes that all run directly on a single existing Linux kernel. A kernel feature called namespaces gives these processes their own view of memory, the file system, and so on.

What's being described in the article isn't a conventional process. Rather, the article describes using Linux's KVM virtualization capability to launch a virtual CPU that is running another kernel. A virtual machine is like an emulator - hardware being simulated in software. The article is describing how to launch that VM, and how to build a minimalist kernel that runs inside it. This is similar to the setup that one might use in a college operating system course to run an under-development guest kernel on a host system. The under-development guest kernel can then run processes of its own.

(Since the design of this mini-kernel contemplates both a kernel mode and user mode, I would not call it a unikernel. The article’s guest kernel also isn’t Linux, though it seems to be aiming for Unix compatibility, e.g. ELF format)

Namespaces have a long history in operating systems, by the way: processes have had their own 'memory namespace' (aka virtual memory) in OSes since the 1960s. Modern Linux namespaces extend that concept to other aspects of the system like the file system, network, process ids, user ids, and more. (File system namespaces also have a long history in `chroot`). One kernel runs all these processes and implements the namespaces that keep them separate from each other. 'Containers', then, are a type of process whose namespaces are configured to make it seem like that process (or process tree) is the only one running on the system.
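You can actually see these namespace memberships on any Linux box: each entry under /proc/&lt;pid&gt;/ns is a symlink naming a namespace and its id. A quick sketch (Linux-only, assumes /proc is mounted; the ids themselves will differ per system):

```python
import os

def namespace_ids(pid="self"):
    """Map namespace name -> id string (e.g. 'pid:[4026531836]') for a process.

    Two processes in the same namespace see the same id; a containerized
    process sees different ids for most of these entries.
    """
    ns_dir = f"/proc/{pid}/ns"
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}

for name, ident in namespace_ids().items():
    print(f"{name:18s} {ident}")
```

Comparing this output for a process inside a container and one outside it is a nice way to convince yourself that a container really is just a process with differently-configured namespaces.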

You could think of the difference between a kernel and a process in terms of the API that each one builds upon: a process builds upon both the CPU architecture and the API of the kernel that runs it (system calls like fork and open), while a kernel builds upon only the CPU architecture that runs it.
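A minimal sketch of that process-side API (Unix-only; Python's os module is a thin wrapper over the underlying system calls):

```python
import os

def demo():
    """Ask the kernel for a pipe and a child process, then talk over the pipe."""
    r, w = os.pipe()    # pipe(2): request a byte channel from the kernel
    pid = os.fork()     # fork(2): request a new process from the kernel
    if pid == 0:        # child: write a message and exit
        os.close(r)
        os.write(w, b"hello from the child")  # write(2)
        os._exit(0)
    os.close(w)              # parent: read the message, then reap the child
    msg = os.read(r, 64)     # read(2)
    os.waitpid(pid, 0)       # waitpid(2)
    return msg.decode()

print(demo())  # → hello from the child
```

Everything here (pipe, fork, read, write, waitpid) is a request to the kernel; the CPU itself only supplies the instructions in between.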

This particular kernel is also aware of the fact that it's running in a VM and is not designed to work on real CPU hardware; it makes "hypercalls" to the hypervisor as described in the article. Since virtualization is common, many real kernels including Linux have explicit support for being run efficiently in a VM, taking advantage of hardware acceleration and hypercalls where available. Most cloud computing environments run customer code in this type of virtual machine, not directly on real hardware.


No, it's more akin to a unikernel[0]. Technically all kinds of processes will run in a container, but in a unikernel, there is really only one process.

I believe this kernel will similarly run a single ELF binary, though it does appear to do some memory mapping (I've only skimmed it, however).

[0] https://en.wikipedia.org/wiki/Unikernel


It does seem like the article author wants to create separate user and kernel spaces, something that unikernels aim to eliminate.


This is hardware virtualization. Containers are OS "virtualization". They offer different levels of isolation and features.


Matter of definition.


Could it be done in LLVM or another backend so that all frontend languages can benefit?


I have a late-2013 model with the exact same keyboard problem. Apple quotes a crazy repair price. Can someone sue for us?


Disclaimer: I am one of the founders of DataCabinet.

We built an online service on top of Jupyter that takes away the effort of running JupyterHub. It would be great to hear some feedback in the context of this conversation. We feel DataCabinet is better in a few ways because it provides:

a. Autoscaling according to the number of users.

b. Easy sharing of full containers between people: you can install pip/conda binaries and share them with students/users.

c. Shared storage, so nbgrader works seamlessly.

Here is a full comparison: https://datacabinet.info/pricing.html

Please excuse our landing page; it was created just today and we are still fixing it.


We built the exact same technology at Agawi (http://arstechnica.com/gaming/2012/09/report-cable-companies...). The only difference was that we did not have the parker API. We worked very closely with Microsoft and NVidia back then to make it work with fully headless Windows GPU servers, with H264 encoding on both GPU and CPU. We could have reduced latency by distributing the servers, but we never got to the stage of a distributed GPU cloud. The business never took off; I was not on the business side, so I cannot say exactly why. Probably latency, though there are 3D strategy-type games you could stream. If you'd like, I can ask the business head of our team to elaborate.


One hard nut: the cost of provisioning datacenters with enough GPU capacity to meet the demand curve of a "hot" title, and still make sense in terms of inevitable idle time and depreciation.

Either you have insufficient capacity to meet day 1 load when everyone piles onto a hot new title, or you over-provision, meet that demand and have much of that hardware doing nothing during doldrum seasons (or when a title bombs).

Probably you need to figure out how to make GPU capacity useful when it's not rendering games, and sell that as a service as well (GPU-based machine learning?). It doesn't help that OSes have been prickly about letting processes share GPU resources; I imagine there are a fair number of thorny security problems, even with GPU MMUs.

This doesn't seem like something a cloud gaming start-up can really tackle; lots of capitalization, with lots of competition from entrenched providers, and no really compelling reason to put games in the cloud to begin with. The bigger (console) players probably realized that having consumers buy their own compute is not only cheaper and more resilient depreciation-wise, but also causes a nice platform lock-in effect once the customers have purchased a few titles.

