Your blog article stopped at token generation... you need to continue to revenue per token. Then go even further: the revenue for an AI company is a cost for the AI customer. Where is the AI customer going to get incremental profits to cover the cost of AI?
For short searches, the revenue per token is zero. The next step up is $20 per month. For coding it's $100 per month. With the competition between Gemini, Grok, ChatGPT... it's not going higher. Maybe it goes lower, since giving things away for free is part of Google's playbook.
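To make "revenue per token" concrete, here's a back-of-the-envelope sketch. The subscription prices come from the comment above; the token volumes are purely assumed for illustration:

```python
# Back-of-the-envelope revenue per token for flat-rate subscriptions.
# Prices are from the thread; tokens_per_month figures are assumptions.
def revenue_per_million_tokens(monthly_price_usd: float, tokens_per_month: int) -> float:
    """Revenue the provider earns per 1M tokens served to this subscriber."""
    return monthly_price_usd / tokens_per_month * 1_000_000

# Assumed: a $20/month subscriber consuming 2M tokens/month.
casual = revenue_per_million_tokens(20, 2_000_000)    # 10.0 -> $10 per 1M tokens
# Assumed: a $100/month coding subscriber consuming 50M tokens/month.
coder = revenue_per_million_tokens(100, 50_000_000)   # 2.0 -> $2 per 1M tokens

print(f"casual: ${casual:.2f}/1M tokens, coder: ${coder:.2f}/1M tokens")
```

The point of the sketch: under a flat monthly price, revenue per token falls as usage rises, so the heaviest users (coders) can end up generating the least revenue per token served.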
Yeah, I thought it was weird right away too, but brushed it off as a tech blog... then I realized it's actually a finance website. That instantly ruins the site's credibility.
The $4B revolver will likely sit undrawn. When it does get drawn, there's usually a specific plan to bring it back to zero. It's not for building data centres; a revolver is typically used just for timing differences, like a credit card (and the lenders will be paying attention). Also, when things get bad, there are covenant triggers that let lenders renegotiate.
My 82 year old mother has enough trouble figuring out what is a button vs. what's not. She just taps everything on screen to find out. This is going to make it worse.
Guess this is universal, because mine does the same. Perhaps it's frustration that the screen doesn't respond the same way as, e.g., a remote control, where there's a physical press. Sure, a phone can give haptic feedback, but it's not the same. Especially for older people.
It's not only meaningless because T-Mobile is untrustworthy; it's meaningless if it's tied to a specific network speed or volume. For example, would a lifetime guarantee on a 3G, 2GB plan for $50 be useful?
It's partially sleight-of-hand, but it has some value. It effectively means that you've got a couple years of leeway to find an alternate plan without price changes in the meantime if the speed/data become unworkable or a network shutdown becomes imminent.
I had a boss who had a math degree. He'd map out the flow from start to finish on a whiteboard, like you see mathematicians do in TV/movies. He always had the smoothest projects because he could foresee problems way in advance. If a problem or uncertainty was identified, we'd just model that part, then go back to the whiteboard and continue.
An analogy is planning a road trip with a map. The way most design docs are built now, the doc shows the path and you start driving. My boss's whiteboard maps "over-planned": where you'd stop for fuel, attraction hours, documents required to cross the border, a budget for everything, an emergency kit, Plan A, Plan B.
Super tedious, but way better than using throwaway code. Not over-planning feels lazy to me now
Sure, everyone has a plan until you get punched in the mouth; however, that saying applies to war, politics, negotiations, but not coding.
In the book How Big Things Get Done, they analyze big and small project failures and successes and end up with something along these lines:
1. Spend as much time in planning as necessary, in the context of mega projects planning is essentially free, maximize the time and value gained in planning.
2. Once you start execution of the plan, move as fast as possible to reduce the likelihood of unforeseen events and also reduce cost increases due to inflation, interest paid on capital, etc.
+1 for "How Big Things Get Done". It changed the way I run projects. I got lucky in the sense that I was able to convince my corporate overlords to allow us to have separate Discovery and Delivery goals, on the premise that discovery is cheap and delivery is expensive (the former significantly reduces the risk of the latter), and that we show our work. Discovery goals come with prototype deliverables that we're OK not shipping to production, but which most times lay the foundational work to ship the final product. Every single time, we've found something that challenged our initial assumptions, and we now catch these issues early instead of in the delivery phase.
We've fully embraced the "Try, Learn, Repeat" philosophy.
Yes I have to second that. MLJ.jl is also written by a mathematician and the API is excellent. Truly well thought-out.
(If you think “why does MLJ.jl have so few stars?” please keep in mind that this library was written for the Julia language and not for Python. I honestly don’t think the library is the cause of low popularity. Just wrong place wrong time.)
And for them to be listened to, which depends on how well they communicate; and for them to be aligned with the most powerful stakeholder, which is almost never the case; and for no big change to happen in an uncontrolled way, which powerful people nowadays seem intent on causing all the time.
If you create the plan like a mathematical formula, as my boss did, the evidence becomes irrefutable... like a mathematical proof. The article does mention that the plan is a communication tool.
Everywhere I've worked, technically correct and irrefutable facts were dismissed often enough based on someone's feeling or emotion that I don't believe an irrefutable mathematical proof is a communication tool that solves everything.
There had to be something more, like that guy's authority, or him being the majority shareholder, or him being so empathetic that he knew how to handle people.
> however, that saying applies to war, politics, negotiations
It’s not even an argument against planning. You’d be a fool to go to war without a plan. The point of the saying is that you’d be a fool not to tear up your plan and start improvising as soon as it stops working.
It is kind of an argument against overplanning though, because if your plan that you spent considerable time creating becomes irrelevant, you wasted a lot of time
That assumes the plan itself is the only useful output from the time spent planning. Even if the plan itself isn't used, the time spent planning means you examined the problem thoroughly, and raised questions that needed answering. Taking the time to think about those questions in order to give a coherent answer is, in and of itself, worthwhile for answering the question later, even if that part's never actually written down.
True, I agree 100%, and that's why I chose to say 'irrelevant' to imply that there was nothing useful about it inherently for those cases. Most of the time, at least in coding, there was probably something useful that came out of it, even if you had to scrap the plan. At the very least, some sort of learning more about the problem space. In the case of war, however, if you lost the war because you over-planned (such as planning one thing very very intricately instead of having several rough plans that leave room for some improv), I'd argue that there probably aren't any residual benefits to celebrate
I had to do this for a patent application, and likewise found it very useful for identifying holes in my thought process or simply forcing myself to do the functional design work up-front and completely.
It was also great for brainstorming about every feature and functional aspect you can imagine for your product, and making an effort to accommodate it in your design even if it's not MVP material.
In my experience it applies to coding when you have any reliance on third party libraries or services and don't have an extensive amount of actual real world experience with that technology already.
If you have unknowns, then your planning process starts with, "let's figure out how to use this new technology." And that process can involve a bunch of prototyping.
Having to choose between "make a design document" and "do prototyping" is a false dichotomy. They're complementary approaches.
My boss would take a piece of data/input and run it through the entire process. It's string data here, it converts to a number here, a function transforms it here, it's summarized here, the output is formatted there... You wouldn't run into data type issues or have a late epiphany that you're missing a data requirement.
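That whiteboard exercise can be sketched as code: trace one value through every stage, with the type visible at each step. The stages and numbers below are made up for illustration, not from the original process:

```python
# Trace a single input through each pipeline stage, making the type at
# every step explicit. All stage names and values here are illustrative.
def parse(raw: str) -> float:                # string data converts to a number here
    return float(raw.strip())

def transform(x: float) -> float:            # a function transforms it here
    return x + 5.0                           # e.g. add an assumed flat fee

def summarize(values: list[float]) -> float: # summarized here
    return sum(values) / len(values)

def format_output(x: float) -> str:          # output format there
    return f"${x:,.2f}"

# Run one piece of data end to end: a type mismatch or missing field
# surfaces now, on the whiteboard, not mid-project.
raw_rows = [" 10.00 ", "20.50"]
result = format_output(summarize([transform(parse(r)) for r in raw_rows]))
print(result)  # $20.25
```

Writing the signatures down first is the point: if `summarize` expects a list of floats but an upstream stage hands it strings, the mismatch is visible before any real code exists.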
If the data transformations are the hard part, sure. But often the hard part is whether you're even outputting the right thing at all. Also, if you're planning in that much detail, you might as well be writing code (perhaps with some holes).
If any Excel alternative wants to make a dent in market share, it needs an option for users to mimic all the main Excel shortcuts. Google Sheets is close, so it's usable; however, trying to use something like Apple Numbers is like switching from QWERTY to Dvorak.
The general advice is to have top of monitor at eye level, but it's been wrong advice for me personally. I now put the middle of the monitor at eye level. Keeps my head up and posture better. Leaning back instead of stooping.
The general advice provided to me, and relayed by me, is eyes centered at the 2/3 mark of the screen.
The best advice received and relayed by me regarding posture might surprise you.
If you struggle with posture, stop caring about what other people might think about your posture. Changing/Tweaking posture all the time might look bad, but it also tends to mitigate the effects of being frozen in bad posture(!) The health impact is too significant to ignore.
Yeah, I think the only ergonomic advice I still believe is that no position is ergonomic to sustain for more than a couple of hours. Humans didn't evolve to stay stationary; few mammals really did.
I do this too, though mostly out of necessity. I use a 27" screen a couple feet away. To get the top of the monitor level with my eyes I'd either have to lower it so the bottom of the monitor was almost flush with the desk (which my current monitor's stand won't do anyway), or get a taller chair/lower my desk, both of which would leave my legs rubbing up against the desk underside and my arms at an uncomfortable angle for typing.
Either I have an abnormally short torso, or that advice was written back when most people were using a 14" display.
Indeed. AIUI your head needs to be back with your chin tucked in, which means looking down a bit. If you're looking level or up, you're going to be sticking your head out a bit.
That's typical of TVs. The signal is delayed a few seconds by post-processing, because for passive entertainment, why not? Your TV likely has a mode that skips post-processing and has minimal delay, often called PC or gaming mode. Look up "[your TV model] gaming mode".