
He's been with Intel since 1979 (only interrupted by a stint as CEO of EMC/VMware).


That stint at EMC/VMWare started in 2009...


In my previous job, I was on a team using Vue.js for the frontend and ASP.NET Core for the backend. I quickly got tired of the internal plumbing, package management, build configuration, and all the other things not related to the actual functionality of the app that Vue (v2) required at the time. So, when I started my own company last year, I quickly jumped on Blazor Server, which has been an absolute joy from a developer productivity perspective.

You can build really rich interactive experiences in Blazor at a fraction of the time required to build the same thing with the standard JavaScript SPA architecture. However, now that we have many customers using the application in production, we're starting to see some of the not-so-pleasant side of Blazor Server. When experiencing a lot of requests, the experience is degraded for all users of the app. In addition, it's not very good at re-establishing the WebSocket connection if it fails, giving a poor impression to the user. Though, I'm impressed with the latency—we're hosted in Europe and have customers in New Zealand who use the app without any latency issues whatsoever.

I'm excited about the auto-rendering mode, which looks pretty straightforward. I don't really buy the author's argument that it introduces an extra layer of complexity—we're still light years away from the complexity that a modern JavaScript SPA involves. For small teams with just a couple of full-stack developers, Blazor is still one of the best and most productive stacks, in my opinion.


> However, now that we have many customers using the application in production, we're starting to see some of the not-so-pleasant side of Blazor Server. When experiencing a lot of requests, the experience is degraded for all users of the app. In addition, it's not very good at re-establishing the WebSocket connection if it fails, giving a poor impression to the user.

We've been using Blazor Server for ~3 years now and have had a similar experience. We only use it for internally-facing administration sites, and even then it's still quite annoying to hear team members complain about the inevitable "reconnecting to website" warning, even when everyone knows exactly why it's happening.

This experience with using websockets to move state between client and server has pushed us away from the idea that the client could ever be made responsible for any meaningful amount of state. In our current stack, we return final server-side rendered HTML and handle multipart form posts. There are no more websockets in our stack. Everything is stateless HTTP interactions - the client only manages a session token.

SPA was a fun experiment, but I am completely over it. If you are trying to fix some weird UX quirk, reach for a little bit of javascript. Don't throw away everything else that works over something small. There was a time when I would have agreed that you need frameworks, but that time has long since passed.

In 2023, approaches like PHP's feel more valid than ever. I know way more about what doesn't work than what does these days. If you want something like PHP but don't know where to start, think really deeply about what PHP is actually doing and ask whether your preferred programming language doesn't also have a similar string interpolation concept.
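To illustrate (a minimal sketch in C#, since that's what I work in; the route and markup are invented for the example), the whole "PHP model" is really just interpolating data into an HTML string and returning it, statelessly:

    // Program.cs - a minimal ASP.NET Core app returning interpolated
    // HTML, PHP-style. HTML-encode anything user-supplied.
    using System.Net;

    var app = WebApplication.CreateBuilder(args).Build();

    app.MapGet("/hello/{name}", (string name) =>
        Results.Content($"""
            <!doctype html>
            <html><body>
              <h1>Hello, {WebUtility.HtmlEncode(name)}!</h1>
            </body></html>
            """, "text/html"));

    app.Run();

Stateless request in, server-rendered HTML out; that's the whole trick.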


Isn't a Blazor application a giant blob of WASM/JavaScript? I understand that Blazor is designed more for internal line-of-business applications that require porting to the web (and would otherwise be a .NET application running on Windows XP), but it seems pretty untenable as a general framework for the web.


Blazor - until .NET 8 - came in Blazor Server and Blazor WebAssembly variants.

Blazor Server renders the DOM at the server and sends it to the browser. The server also holds on to some state for each client - notably the "current DOM" - so that it can calculate diffs on changes and only send the diffs to the browser.

Blazor WebAssembly does the rendering in WebAssembly in the browser; the .NET stack runs in the browser. Here, the code renders and diffs in the browser, and the diffs are applied to the DOM as well.

This also means that the same components can run either server-side or client-side. They basically all end up computing DOM diffs and applying those diffs to the actual DOM. Pretty neat, actually.

Each model has its pros and cons. Blazor Server initializes really quickly and relies on minimal JavaScript in the browser, but it creates load and server affinity on the server. Blazor WebAssembly offloads all rendering to the browser, but at the cost of an initial download of the code.

In .NET 8 these can now be blended, and a new "auto" mode allows a component to be initially server-side and then client-side when the webassembly code has downloaded.

On top of that, .NET 8 also adds static server-side rendering (and "enhanced navigation"), which you could describe as server-side components without the "circuit" server affinity and per-client server state. Statically rendered components have some limitations on how interactive they can be - i.e. in changing the DOM.
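To make that concrete, here is a minimal counter component (a sketch assuming a default .NET 8 project, where _Imports.razor brings the RenderMode members into scope). The exact same markup can run under any of the interactive modes depending on the @rendermode directive:

    @* Counter.razor - a sketch; the component code is identical for all *@
    @* render modes. Swap InteractiveAuto for InteractiveServer or       *@
    @* InteractiveWebAssembly to pin it to one mode.                     *@
    @rendermode InteractiveAuto

    <button @onclick="Increment">Clicked @count times</button>

    @code {
        private int count;
        private void Increment() => count++;
    }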


"The server also holds on to some state for each client"

If this is how Blazor is architected, then I have no interest in using it, and really, anyone doing any type of web development shouldn't bother with it. Internal apps eventually need to be used externally. This is a time bomb waiting to explode on the users and the developers.

I use Vue.js with ASP.NET Core in a multi-page, single-load architecture: once a page is loaded, all further data is loaded through AJAX, but the app is made of multiple pages. Over 10 years of being a web developer has brought me to this setup. Client UI state should stay on the client. All other state should be saved to the server. All communication should be stateless.


> This is a time bomb waiting to explode on the users and the developers.

I'm just saying a lot of the target they are aiming to replace is VBA applications slapped on top of Access DBs, and Lovecraftian nightmares born out of the unholy fornication of Batch scripts and unintelligible spreadsheet formulas.

I'm not saying you're wrong, just pointing out that even if this is a ticking time bomb, it's a ticking time bomb using conventional explosives that replaces a cesium-based nuclear time bomb already counting down the seconds.


It's how Blazor Server apps are architected. Blazor WebAssembly apps don't maintain client state on the server and can be load-balanced like normal.


Seems like server vs client would also come with a set of security tradeoffs.


It's just moving the work that would be happening in the browser to the server. There is no reduction in security with either of these styles.


It can be hosted by either WebAssembly or by the server (in which case the server will render the DOM and send diffs to the client over a WebSocket connection). Blazor Server probably isn't the best choice for a popular SaaS app, mainly because of its dependency on WebSocket connections and less-than-perfect reconnection logic.

I'm optimistic about the auto-rendering mode, which will serve the app via Blazor Server (using a WebSocket connection) the first time the user hits the app. It will download the WebAssembly DLLs in the background so that the next time the user comes by, they will get the WebAssembly-hosted version. It's an interesting mix combining the best of both worlds (hopefully).
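From what I've seen of the .NET 8 bits, the hosting setup for this looks roughly like the following sketch (App being the root component from the default template):

    // Program.cs (.NET 8) - register both interactive render modes so
    // that components can declare InteractiveAuto.
    var builder = WebApplication.CreateBuilder(args);
    builder.Services.AddRazorComponents()
        .AddInteractiveServerComponents()
        .AddInteractiveWebAssemblyComponents();

    var app = builder.Build();
    app.MapRazorComponents<App>()
        .AddInteractiveServerRenderMode()
        .AddInteractiveWebAssemblyRenderMode();
    app.Run();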


As a backend dev, I love the technology. The problem is that you have to choose between a not-so-scalable solution (Server, SignalR) or a minimum 2 MB initial payload (WASM) that can easily grow to 5 MB.

Interested in how many concurrent users you have for Server to be a problem. Can you elaborate more on your performance issues?


I may have made a mistake in designing the architecture of our app. Since we're a small team, I opted for a big ol' monolith, hosting our APIs on the same server as our Blazor Server app. We normally serve a few hundred requests per second on our APIs, which is totally fine. However, we sometimes get spikes of up to thousands of requests per second, which has the unfortunate consequence that our Blazor Server app becomes laggy and starts to drop WebSocket connections frequently. So now we are in the process of moving our API controllers to another project, which will be hosted on a separate server.


Sounds like you did everything right then. Started off simple, now your business is taking off, failures aren't catastrophic (they're grey, not black, from what it sounds like) and splitting out a component shouldn't be too hard, so you'll be ready for more scale soon. All while maintaining the business!


.NET 8 solves that exact problem as far as I can see. You can opt into auto mode, and it uses server-side Blazor until the client has downloaded all the assets; then, on subsequent visits, it uses the WASM runtime. Seems to be a good compromise.


Is 5 MB a real problem? Theoretically it may look big, but I have seen many websites that are much bigger, not to mention that all the video/images we download already skew downloads by a lot. Considering the runtime is cached for a long time, I don't see a real blocker. First page render would be an issue, but SSR solves that.


> Is 5 MB a real problem?

Well, if you want to make small, fast-loading HTML pages with a minimal JS library, and end up at a few hundred KB that you can understand, profile and optimize, then that is impossible with Blazor. So it's a very real problem.

If you want 5 MB blobs and do not care about what is going on inside, or how to optimize or reduce the memory and bandwidth usage, then it's not a problem; it works just as well as the websites you have seen, with their 200 node dependencies.


On a phone, 5 MB is not ideal. On a corporate desktop, not an issue.


As someone in New Zealand, that's crazy. The ping to Europe is terrible, to the point that video calls to the UK are painful.


I regularly have video calls from the UK to NZ with no issues at all. Might be your provider.


> I quickly got tired of the internal plumbing, package management, build configuration, and all the other things not related to the actual functionality of the app that Vue (v2) required at the time.

I don't understand this. I've used v2 since before release, and it's never been anything more than an initial setup of 5 minutes and then building your app.


Man, well done- how do you just start a company and instantly the biggest issue becomes having too many customers?


I share your perspective. It is the most productive environment I have seen in many years. Blazor WASM for internal company applications, as a PWA or with Blazor Hybrid (basically a Cordova/Electron shell, just with C#), is just awesome.

I share the article's fear of overloading the technology, but I don't see it as all that negative overall.


In my experience, long polling is more stable, and you can enable transfer compression. Maybe it would be good if Blazor could disable the persistent connection completely, leaving only requests and responses. Often we just want to call a backend method and update the view in response.
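You can at least pin the transport today. A sketch for a classic Blazor Server app (the options lambda on MapBlazorHub exposes SignalR's HttpConnectionDispatcherOptions); note this still keeps a persistent circuit, just over long polling instead of WebSockets:

    // Program.cs - force the Blazor Server circuit onto long polling.
    using Microsoft.AspNetCore.Http.Connections;

    var builder = WebApplication.CreateBuilder(args);
    builder.Services.AddRazorPages();
    builder.Services.AddServerSideBlazor();

    var app = builder.Build();
    app.MapBlazorHub(options =>
    {
        options.Transports = HttpTransportType.LongPolling;
    });
    app.MapFallbackToPage("/_Host");
    app.Run();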


> In my previous job, I was on a team using Vue.js for the frontend and ASP.NET Core for the backend. I quickly got tired of the internal plumbing, package management, build configuration, and all the other things not related to the actual functionality of the app that Vue (v2) required at the time.

Oh, hey, I have something relevant to say about this setup. Currently I'm bootstrapping a platform with .NET Core on the back end and Vue 3 on the front end.

In short:

  - .NET is one of those boring workhorse choices, like Java: it's pretty stable, the performance is good, the type system is decent, the IDEs are as good as it gets (Rider, VS), it can run on Windows or Linux with no issues, and there is an ecosystem that feels very coherent (more focus on ASP.NET than on any one framework in Java, where there's fragmentation: Spring, Dropwizard, Quarkus, Vert.x and so on)
  - Vue feels really nice to work with, the Composition API feels simpler than React, the developer ergonomics are fine, the documentation is nice, packages like Pinia, VueUse and VueRequest keep things simple, in addition to something like PrimeVue giving lots of UI components that you can use to iterate out of the box
  - however, while I think that the SPA approach can be nice to keep the front end separate from whatever technology you use on the back end, it comes at a cost of duplicating your data model and the interfaces between them (REST endpoint and client, for example), in addition to needing to think about how to deploy it all (separate domains vs context paths on the same domain, CSP, CORS etc.), though it's mostly doable
  - I definitely ran into problems with Vue not being the single most popular choice out there; for example, I wanted to integrate Mapbox maps, and the VueMapbox package is for Vue 2 only, whereas Vue 3 Mapbox GL breaks when you try to integrate it with mapbox-gl-directions. Eventually I switched over to Vue Map (Leaflet-based) with leaflet-control-geocoder and leaflet-routing-machine, but even those tended to break, because adding markers for a calculated route breaks the map when you zoom in/out, due to it losing a reference to the map JS object. In the end I just used patch-package to fix a few lines of code that didn't work in the offending packages instead of forking/building them myself, but that's a bit of a dirty hack
Overall, I think that .NET is pretty good and Vue is pretty good, but the JS ecosystem only works well some of the time, even for reasonably popular solutions. On that note, I'm all for trying out things like Blazor, but then again my past experiences with Java and things like JSP/JSF/PrimeFaces/Vaadin have soured my perspective a bit (which is also why I prefer to keep the front end decoupled from the back end as much as possible, sometimes to my own dismay).

Honestly, it sometimes feels like picking anything that's not React is shooting yourself in the foot because everyone's building their libraries/packages/integrations for React. At the same time, I don't really enjoy React much at all.


I spent a decade with C# and .Net, and even in its current form, which is easily the best it's ever been, I vastly prefer working with TypeScript.

Yes, you do need to set up some rather strong governance around it for it to work for multiple teams, but you should really be doing that with any technology, and once you do, it's just excellent. Part of the reason for this is that it's basically designed from the ground up to require that you build and maintain "template" projects and have strong linting, testing and pipeline governance, so there is a lot of freedom to make it work easily for your organisation exactly the way you want it to. TypeScript is obviously not alone in this, but it's far less opinionated than something like .Net.

The main reason is that .Net always becomes burdensome once you start using its included batteries. This isn't really an issue with C# as much as it is an issue with its libraries. Take OData and Entity Framework as an example: the .Net magic behind them sort of shares the same model builder, but they each do so differently. What this means is that a lot of the cool OData features, like PATCH, don't actually work with EF, meaning you'll either have to rewrite a lot of the model builder (don't) or work around it. .Net has always sort of been like that. It oscillates between being 50-90% "done", but it never really gets there before it moves on. Like EF: much of the EF 7 roadmap isn't implemented yet, but here we are, moving on to EF 8.
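For anyone hitting the same wall: the workaround I usually see is applying the OData Delta<T> to the EF-tracked entity by hand in the controller, rather than relying on the shared model-builder magic. A sketch (Product and _db are placeholder names):

    // Inside an ODataController - apply only the changed properties
    // to the EF-tracked entity, then let EF persist the diff.
    public async Task<IActionResult> Patch(int key, Delta<Product> delta)
    {
        var entity = await _db.Products.FindAsync(key);
        if (entity is null) return NotFound();

        delta.Patch(entity);          // copies changed props onto the entity
        await _db.SaveChangesAsync();
        return Updated(entity);
    }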

I think for a lot of use cases it's a wonderful technology, and Blazor will probably be awesome until Microsoft moves on, but at least around here, it's also a technology that doesn't really see adoption in anything but small to mid-sized companies, and in my opinion, .Net is part of what hinders growth. Obviously not a huge contributor, but once you step out of where it excels, you're just going to have to fight .Net so much harder than you will with TypeScript. Which is sort of interesting, considering they are both Microsoft products which align more and more. That being said, it's not like it's bad either.


Batteries are included, but you aren't forced to use them.


Previously from this account: https://news.ycombinator.com/item?id=38228674

The tl;dr of both the previous post and this one is that OData is bad, yet the author extends his grievances with it to the entire ecosystem.


It seems a little disingenuous of you not to mention that I never hide the fact that it's an issue with the batteries. I'd also say that, considering how well OData's PATCH works with its own model builder, it's actually EF that's being bad in this case.

You could also point back to other posts like this one: https://news.ycombinator.com/item?id=37538333&p=3#37541652

Where I also point out other, similar issues with other parts of the .Net batteries.


If you are having issues with EF Core, maybe it's not the tool's fault?


I enjoyed using ServiceStack. Write your data model in the C# API, run a tiny CLI command, and it spits out TypeScript definitions to match your data model.

https://docs.servicestack.net/typescript-add-servicestack-re...


TypeLite used to offer something similar, but it's somewhat dead nowadays.

OpenAPI client generators are probably what's popular today.


Yep. For frontend use, I think https://www.npmjs.com/package/openapi-typescript is the most widely-used/well-regarded, though https://www.npmjs.com/package/orval seems to me to have some nicer features like react-query support.

There are other options too, I'd just stay away from "_the_ openapi generator" (https://openapi-generator.tech/) which does a pretty poor job IMO.

Disclaimer: I'm the founder of a company doing SDKs commercially, but we don't focus on the frontend right now, and our free plan is still in beta.


> however, while I think that the SPA approach can be nice to keep the front end separate from whatever technology you use on the back end, it comes at a cost of duplicating your data model and the interfaces between them

Can you elaborate on this? I'm not sure I get it, because once you have your view model in asp.net, it seems like it should be easy to derive a JS/TS model from it using various techniques (reflection, source generators, etc.).
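For instance, a naive reflection pass is only a few lines. A throwaway sketch (OrderDto and the type mapping are invented; a real generator would handle nullability, collections and nested types):

    // Emit a TypeScript interface from a C# view model via reflection.
    using System;
    using System.Linq;

    public record OrderDto(int Id, string Customer, DateTime PlacedAt, bool Shipped);

    public static class TsEmitter
    {
        static string Map(Type t) =>
            t == typeof(string) || t == typeof(DateTime) ? "string"   // DateTime serializes to an ISO string
            : t == typeof(bool) ? "boolean"
            : t == typeof(int) || t == typeof(double) || t == typeof(decimal) ? "number"
            : "unknown";

        public static string Emit(Type model) =>
            $"export interface {model.Name} {{\n"
            + string.Join("\n", model.GetProperties()
                .Select(p => $"  {char.ToLowerInvariant(p.Name[0])}{p.Name[1..]}: {Map(p.PropertyType)};"))
            + "\n}";
    }

    // TsEmitter.Emit(typeof(OrderDto)) yields:
    //   export interface OrderDto { id: number; customer: string; placedAt: string; shipped: boolean; }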


> it comes at a cost of duplicating your data model and the interfaces between them

Not the case if you use GraphQL.


As a Microsoft employee, I am so happy that we got rid of stack ranking a few years ago. It encourages bad behavior and works against helping your coworkers, with whom you are essentially competing for compensation. I am surprised to see that a company like Valve, which seems to be held in high regard by many developers in the industry, still operates with this compensation system. It's a system from the 80s if you ask me.


I suspect it might work differently in Valve's case. I'm led to believe that Microsoft had a fixed, conventional hierarchy, where every little group and person within the group was backstabbing everybody else to keep their jobs.

If their handbook is to be believed, Valve has a much more flat management structure, where it's basically Gabe at the top, sort of, and everyone else doing whatever they think is best for the company, and there's a fluid system where people can move between groups according to their interests and how they perceive they can add value. So, unlike in Microsoft's case, Valve's people have an easy avenue for putting 'if you can't beat 'em, join 'em' into practice.

Valve has a radically different corporate culture from most other companies in its space. It doesn't come from the 80s, or indeed almost any other time. Perhaps the stack ranking works a lot better because of it.


Part of me wonders if the Valve management structure isn't almost completely to blame for things like their infamously bad customer service.

I could see why: dealing with support tickets from irate people is not a particularly interesting (or, judging by Steam's runaway success, particularly value-adding) activity.

Not that I mean to hijack this to complain about Steam, but you have to admit it's a benefit of a traditional management structure: someone is making sure the shitty-but-necessary work gets done.


I suspect it's much more that Valve is an insanely lightweight company for its customer base. It has something like 330 employees servicing 100 million customers on the go-to PC gaming platform, and a lot of those employees are working on shiny tech and new games. As a comparison, Rockstar North's facility in Edinburgh has roughly the same employee count (slightly more by Wikipedia's count), and all they do is put out one new game every 5 years or so.

I suspect that if Valve wanted to, it could easily hire lots of people who would be happy to man the customer service helplines and who wouldn't be either able or willing to try to make TF2 levels or whatever.

Maybe the management structure is what limits the size of Valve, though.


> Rockstar North's facility in Edinburgh has roughly the same employee count (slightly more by Wikipedia's count), and all they do is put out one new game every 5 years or so.

All they did was recreate a city full of assets and add game mechanics to it in five years, making the most expensive (at the time) game in history, with more than 1,000 people involved in production, and the fastest-selling entertainment product in history. A game that broke eight Guinness world records, sold more than 65 million copies and brought in well over $2B in revenue. And that's only GTA 5.

All they did, numerous times so far, was create vast games full of hand-crafted digital assets. Games that hold record-breaking sales numbers and positions on all the best-selling video game charts. That's all they did. Nothing to write home about, really. Bunch of slackers.


What's your point? Valve put out multiple AAA games in the same timeframe too, and did a lot of other stuff besides. Sure, GTA5 is a big-ass game, but Valve's games - games plural - haven't exactly been small projects either.


DCC of assets for something like GTA is orders of magnitude larger than anything Valve does.

Rockstar has an army of artists working on those assets for that game. Valve is a multidisciplinary house where they dabble in a lot of stuff, split into smaller groups, but their primary vehicle is Steam plus a couple of (successful) games where each project does not need as many people. It's comparing apples to oranges.

What Rockstar does demands an army of artists. If Valve were to do the same type of game world, they would have an army also. They do not do the same thing though.

If Rockstar were to do what Valve does, they would also have a number of people split into smaller teams working on their own things. They do not do the same thing though.

That's why you can't say it's all Rockstar does with the same number of people. Those people don't have the same skills, and those projects are not the same.

Rockstar's output, if you're comparing within the game industry, is a level above Valve's. It's a level above most. It's really not a good example to compare against. They are at the leading edge of what they do, an area Valve isn't in. A better comparison might have been Avalanche Studios and Just Cause 3: a 75-member core team in a studio of 250, plus who knows how many outsourced people.


Most game development studios (including Valve) outsource most of the art-related work, so they could definitely maintain their headcount and build a product with a scope similar to GTA V.


His point is that GTA 5 alone has probably sold more copies in 5 years than all Valve games combined (a quick check of the most popular ones - HL 1/2, L4D 1/2, Portal 1/2 - reaches about 30 million), so that number of employees is justified.

Also:

http://www.gamespot.com/articles/gta-5s-online-mode-has-gene...

There is no denying Valve does great with the number of people it has, but of all the badly managed companies they could be compared to, Rockstar really is not a good example...


Rockstar is certainly doing well profit-wise in the industry, but I think Valve still compares favorably productivity-wise.

In the last few years, Valve has developed multiple new pieces of hardware with software support: the Steam Controller, Steam Link, and the HTC Vive. They've also developed and supported a new platform, SteamOS, and rolled out hardware with manufacturing partners. Besides this, the Steam platform has added live-streaming (like Twitch), family sharing, Big Picture mode, the VR UI, and other features. They were also involved in the development of the graphics API Vulkan, which they need for their Linux-based SteamOS to become a better platform.

These are just what I can think of off the top of my head and don't even explicitly involve any games (though they released a collection of 11 small experiences for VR as well).


I only wish I could give you multiple upvotes. Valve has been huge for Linux gaming and if I understand correctly, were a major factor in the push for Vulkan. They can indeed Take My Money.


Most of the stuff Valve is selling nowadays is add-ons to already existing games.

Interesting thought experiment: break the sales for Dota and TF2 into ~$50 increments and compare that number against total sales for the last two GTAs put together. I suspect Valve will beat them at least twice over.


Pretty sure Valve outsources all customer support.

> It has something like 330 employees servicing 100 million customers on the go-to PC gaming platform

Actually, there are a lot fewer people who have any connection to the userbase at all. Most are probably not even publicly known.

A few years ago, a producer who visited Valve commented that in the office there was an area where he was told something like "and here is our Steam department". And back then that was 20-25 people at most, including both partner relations and marketing.


> shiny tech and new games

You mean Half-Life, with its last full iteration from 10 years ago, and the second episode ending on a cliffhanger in 2009? Or the Source engine (HL2), which evolved from GoldSource (HL1), which was based on the Quake 1 engine (with some parts from the Quake 2 engine)? Even the Call of Duty engine, which aged well, is based on the more modern Quake 3 engine, and I wouldn't consider it shiny in 2016. Rockstar's RAGE engine, as well as the Crytek engine (Star Citizen, Crysis) and Frostbite (Battlefield 1), are light years ahead and far superior in every aspect; that's what I would consider shiny tech.

Valve is what Valve is today because of the success of Steam. They produce little games like Dota 2 more for fun than anything else; they got rich with Steam.


It's a bit disingenuous to just mention Half Life, although note that work on Half Life series games was still ongoing as of last year (and they did a fair amount of work on the whole series in ~2012/2013 porting it to other operating systems).

Those 'blank' 10 years saw the release of Left 4 Dead 1/2, Team Fortress 2, Portal 1/2 and CSGO, for instance, as well as that 'little' Dota2 (which is clearly a large game by any metric you care to choose). At least three of those games involve regular ongoing content infusions. Add in the development of the Source 2 engine, and the Steam Controller, and the work on the SteamOS platform and whatever else on Steam Machines.

This is out of the 330 employees who also support those >100 million customers using the service on a regular, possibly daily, basis. As I pointed out, Rockstar uses more manpower than Valve just to create essentially one game every few years. The last GTA game I bought (San Andreas) did have graphics well behind the state of the art too.

Even if we assume that those 330 employees are doing nothing but working on Steam, it's still a tiny number compared to the customer base. AirBnB, for instance, has had 2 million listings in its lifetime, and it has about 2,300 employees. Uber has over 6,000 employees for about 8 million customers, and both of those are cases where people aren't typically regular users.


The majority of those Airbnb/Uber numbers are customer support, not engineering.


Which makes whatever Valve does all the more impressive.


None of the "blank" games are Valve's own IPs; they're acquisitions which more or less stopped seeing development or releases after the Valve acquisition. Half-Life was their last original IP, and even that ended in a complete cliffhanger, with Episode 3 or a sequel nowhere to be found.

It seems that Valve has serious problems actually pushing their games to completion since they started printing money with Steam and pressure to actually release went away.


> None of the "blank" games are Valve's own IPs; they're acquisitions which more or less stopped seeing development or releases after the Valve acquisition.

TF2, CS:GO and Dota 2 are regularly and visibly updated. In no way are these moribund or 'stopped in development'; the latter two are still among the most played and watched competitive games online.

It's also stretching things to say that Valve 'acquired' Portal and that it was dropped 'after acquisition', since Valve essentially bought the team that made a small gameplay prototype and fleshed the mechanic out into two full-fledged AAA games.

So that just leaves Left 4 Dead, which was a case of Valve buying a company, publishing its game and then putting out a full-on sequel. Given how similar L4D2 was to L4D1 (to the point where the first game was essentially contained inside the second, and there was even a threatened boycott by fans), would it really have been a smart move to make a third so soon?

Sure, it's been, what, three years since the last full game release by Valve. That's quite a long time, and there is no doubt less pressure to release sooner because they're sitting on their money-printing machine, but that sort of timeframe isn't exactly unprecedented in this business. The Rockstar studio I mentioned is on a 5-7 year turnaround for its one main title - though it does aid and abet Rockstar's other studios.

I suspect part of what's going on is that Valve is uber PR-conscious these days, given its position in the market, and can afford to be ultra-conservative about quality control (though hopefully not about game design) in any new game it's going to put out.


Portal was an acqui-hire; the original game was a free game from a student competition. It was freeware but a full game that takes about two hours to complete. Of course the graphics looked worse and there was little narrative; it was a student project after all. Portal 2 was based on an additional acqui-hire as well (the colored-water-splash features), from a follow-up competition; that game is freeware as well.

Left 4 Dead: the company was bought.

Team Fortress was a mod for Quake. Through an acqui-hire, the mod was transferred to Half-Life 1 (the GoldSource engine, based on the Quake 1 engine). Team Fortress 2 was planned for release in 1999/2000, though it took an additional 11 years and at least one complete restart to release Team Fortress 2 on the Source engine.

Counter-Strike was a free fan-made mod for Half-Life 1. They acqui-hired the team around Counter-Strike 0.9. Some say it went downhill from there; CS 1.4 was the last classic CS.

CS: Source was a port from GoldSource to the Source engine. As the Source engine is just an incremental evolution, it was mostly a matter of building the game again and later replacing textures and models with higher-quality ones, etc.

CS:GO is a rehash of CS: Source, simplified for console gamers and ported to consoles.

Dota was originally a fan-made mod for WarCraft 3. Valve acqui-hired them to create Dota 2, a standalone version based on the Source engine. Blizzard wasn't amused and announced a similar game a few days after Valve announced Dota 2 back then.

Half-Life 1 (GoldSource) is based on the Quake 1 engine (with small parts from the Quake 2 engine), and of course evolved. Half-Life 2 (Source engine) evolved from the GoldSource engine; videos from the HL2 beta clearly show the old DX7 renderer and the engine loading the older HL1 map format. Valve worked on HL2 Episode 3, and Warren Spector (Deus Ex, System Shock) and his former company worked on HL2 Episode 4, but Valve decided to shut down both. The situation being unclear for several years has made HL fans angry. HL2 Episode 3 / HL3 is the new Duke Nukem Forever; at least that one got released after 11 years - we will see about HL2. We all also remember the shitshow around the HL2 release. The infamous HL2 demo at E3 2003 was faked and the game was nowhere near ready. The leaked footage later that year showed how far Valve was behind the announced release schedule. Half-Life 2 was finally released a year later with heavily shortened gameplay and many previously shown features removed, making it basically a different game that feels very different from the original Half-Life 1.


> They produce little games like Dota2 more for fun than anything else, they gt rich with steam.

Little games like Dota 2? It's the biggest title on Steam in terms of players by a large margin, and probably the 2nd most popular PC game to ever exist (behind LoL).

It also generates a significant amount of revenue for Valve through the sale of tournament tickets and cosmetics.


You don't mention Portal 2, Dota 2, CS:GO, all the TF2 updates, Source 2, all the advancements to Steam, the Steam Controller, Steam Link, the Vive, or the Lab VR demos.

Sure, the Battlefield team has made Frostbite, which is shinier than Source. Did they also make their own store, gamepad, and VR headset?


> the Lab VR demos

Wii minigames in VR are still just Wii minigames; the whole thing feels like something I'd expect from a hack day at a company as supposedly prolific as Valve.

Got a strong feeling the talent is long gone.


> Insults 1 thing from list of 10. Ignores other 9 things.


Source 2 is in Dota 2, but that game doesn't show its full potential. Let's just wait for their next game before we say that they are "years behind".


[insert obligatory Half-Life 3 joke/reference here]

I've never used Steam's customer support; do they outsource any of it to contractors and/or offshore?


They've tried and they discontinued doing so because they felt it was even worse than being apathetic about it.


What's your source on that? I'm fairly certain that all of Valve's customer support is outsourced, though not specifically overseas, but to contract support teams.


> Rockstar North's facility in Edinburgh has roughly the same employee count (slightly more by Wikipedia's count), and all they do is put out one new game every 5 years or so.

Yeah they actually manage to ship games (that manage to make back the entire production budget in pre-orders alone), not just sit on their hands running an online store with an outdated and clunky client.


I mean, it's not like traditional companies—from Google to Comcast—consistently provide great customer support either. So you can't really make strong conclusions about why Valve is having issues there without additional insight; there are too many confounding variables to pin it just on its management structure.

This is a common story with people trying something new. If you do the old-fashioned thing—buy IBM, so to speak—and fail, well, these things happen. A lot of factors could have contributed to the problem. But if you try something new, the new thing must be at fault—even though all those other factors apply just as much now as they did in the IBM case.


That's a fair point, but I'm struggling to think of a way in which it isn't at least partially to blame. The handbook (and Valve insiders) say it's pretty much a do-your-own-thing company. Who'd want to deal with angry users all day?


I work for a software company very much like Valve in structure, who has been operating this way since the 70s.

Support still gets done, because when you hire you specifically hire people who love doing support. Those people exist, and they get tremendously emotionally involved in the quality of their work, just like anyone else.

In every department there are people who struggle with the flat hierarchy and free range to work on what you like, who have a strong emotional need to know who is in charge, and to be told what needs doing. Those people struggle, but they are by no means relegated to any one department -- plenty of them are engineers.


If they increase the weekly/monthly pay for people who choose to deal with angry customers, it should reach equilibrium eventually.


Surely there are some people out there who really do have a passion for, and get satisfaction from, dealing with upset customers. The questions left are how hard are they to find and how expensive are they to employ.


Or someone is making sure to look like they are doing some work. I have seen plenty of managers that are nothing more than a human layer between the CEO/CTO and the engineers.


I'm a former MSFT employee who was around when we dropped stack ranking, and I didn't feel like that was a substantial change. Managers still calibrate you against your peers, a stack is still created, and compensation is assigned accordingly. I remember reviews feeling the same before and after. What changed for you?


At any company (Microsoft included) where performance-based compensation exists, there is a budget for that line item. Therefore it is a zero-sum game - to pay someone more because of their performance, that means that someone else gets less (or zero) from that line item.

So, depending on how you interpret the term "stack ranking", you can either look at it as "forced removal of the bottom x% of the company" or "people aren't ranked / bucketized". In the MSFT case, I believe that the former has been removed. But the latter definitely cannot be removed if you are to have performance-based compensation.


    > At any company (Microsoft included) where performance-based 
    > compensation exists, there is a budget for that line item.
I'd like to point out that it's possible to pay bonuses out of profits, rather than a fixed, yearly pre-allocated pool. If employees are only paid a bonus when they add to company profits, there is no competition for bonus funds between employees. So for example, if you have a great idea that saves the company 30% on something, you get half of that saving/profit, or something similar.


The problem is, how do you measure that? There are a bunch of good ideas that most people will agree provide a benefit, but it's really hard to come up with an objective measure of how much money that makes or saves.


Make the bonus pool a fixed percentage of yearly company profits (not sales). Assign at least half of the bonus pool in an egalitarian way (for example, based on the number of days worked that year), assign the remainder on roughly "merit"-based criteria, and let every team agree on the criteria, but make it something objective (it does not matter if it is the number of bugs closed, or being at your desk on time in the morning, or whatever, just as long as the measurement is unambiguous and the team agrees on it).

That would be far from perfect; there will be freeloaders, and more likely than not it will be slightly unfair to everyone. But at least you have removed the perverse incentive to sabotage coworkers in order to make yourself look better.


If there is no measurable difference, then a bonus cannot be paid out according to this scheme. So we're talking about innovative ideas that release an extra reward (in addition to a regular salary), if they result in a measurable saving somewhere.

For example: you write an algorithm in your spare time that more efficiently packs together a good that your company produces, such that shipments take up 10% less volume thus saving ~10% on shipping.


You will be horrified to learn that there is a push to write stack ranking for all public employees into the constitution of Greece.


Wouldn't it make sense to just pay everyone (in the same type of role) essentially the same amount and then give bonuses from time to time for particular outstanding achievements? Nothing is a more powerful inducement than financial incentives, and a bonus has an impact that the ongoing salary doesn't. People don't give a crap about annual reviews unless they think theirs is low enough to get them fired.

My sense is that performance evaluations should be banished from the corporate world, for the most part. They're usually a waste of time, but this is where managers can be helpful, as they carry an ongoing assessment of the value of each of their employees at all times.


'Nothing is a more powerful inducement than financial incentives'

Actually, there's a lot of research that contradicts this. Essentially, as long as people have enough money that they don't worry about money (i.e. a comfortable middle-class lifestyle for their country/area) and they don't think they earn significantly less than their peers, money is a very inefficient, and occasionally negative, incentive for tasks that require a decent level of cognitive ability.


IIRC, Peopleware showed that financial bonuses were tremendously effective - at reducing people's investment in their work.


Extra money for excellence comes with an implied message that the regular money is just for showing up. You really don't want your employees to feel that way.


That guide is from 2012; things may have changed by now. I know we gave up stack ranking at $GIANTCORP since then.


Same here. I read several papers from the IMF's Blanchard in grad school and naturally thought it would be the same guy. I got a little confused too.


I just sold all of my Microsoft stock two days ago, and now I come across this article. Avesh is absolutely right about the points he is making, and I think that everybody who is entitled to stock as part of their compensation package should sell it immediately and construct a more balanced portfolio instead.

If you (like myself) feel that you are better at writing software than acting like a wolf on Wall Street, you should take a look at index funds. An index fund is a fund that reflects the development of an index (e.g., S&P 500 or FTSE 100). Rather than paying a portfolio manager a high fee (of up to 5% of the invested portfolio) to actively manage your investments, an index fund is designed to simply follow an index, which is much cheaper than active management. Since John Bogle came up with the idea about 40 years ago and founded The Vanguard Group, history has shown time and time again that active investors can't beat the market in the long run. Index funds therefore yield a higher net return because of their lower costs (typically around 0.5%).

If you are new to investing, I would suggest going with the three-fund portfolio[1]: divide your portfolio into three parts and invest in a domestic stock market index fund, an international stock market index fund and a domestic bond index fund. This would probably yield an annual return of 10-15% with a very controlled level of risk. I have constructed my portfolio like this and I am really happy with it. I don't have to constantly worry about my investments, and at the same time I can expect a fairly solid rate of return.

[1]: http://www.bogleheads.org/wiki/Three-fund_portfolio

