In America, the problem comes when the gain and the loss land in different tax years. If you make a big gain in 2024 but don't pay taxes on it, then lose the money in 2025, the IRS will still come after you for the 2024 taxes even though the money is gone by 2025. The lesson is to pay your taxes.
A bank will be happy to lend you the money to cover the spread, since you have the collateral of a large tax refund in the future. It'll cost you a little bit of interest, but it's generally not the catastrophe that people make it out to be.
Maybe if you are an ultra-high-net-worth individual. I don't see your average Joe walking into their neighborhood Chase branch and getting a $500k loan with a potential tax refund as collateral. That seems like an esoteric financial product.
Tax refund loans are offered in conjunction with tax filing services like TurboTax or H&R Block, because they already know what your refund amount is going to be, the amounts are small enough to be relatively risk free, and the process is easy to automate. They are similar to payday loans.
A crypto bro showing up at their neighborhood bank with $1m of gains and losses from crypto transactions and asking for a refund loan is probably not going to get anywhere (it's too large a risk to treat like a refund of a few thousand dollars, but too small an amount for them to do custom due diligence to underwrite a loan).
Anyway, you can't erase gains in year 1 with losses in year 2, at least in the USA: if you don't have other gains to offset, you can only deduct $3k of the year-2 loss per year, carrying the rest forward.
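To make the arithmetic concrete, here's a rough sketch of how a year-2 loss gets used up under that rule (illustrative only, not tax advice; the function and numbers are my own, not from anyone's comment above):

```python
# Illustrative only, not tax advice: a rough sketch of the US rule that a
# capital loss first offsets that year's capital gains, then at most
# $3,000/yr of ordinary income, with the remainder carried forward.

def years_to_use_up_loss(loss, gains_by_year, ordinary_cap=3_000):
    """Count the years needed to fully absorb `loss`."""
    years = 0
    for gains in gains_by_year:
        if loss <= 0:
            break
        loss -= gains + ordinary_cap
        years += 1
    return years

# A $1m year-2 loss with no future gains takes ~334 years at $3k/yr.
print(years_to_use_up_loss(1_000_000, [0] * 400))  # -> 334
```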
I followed Rob's work on this in real time; it was a master class in calling out a company with no value. He just kept laying out how the numbers didn't add up, leading to the inevitable conclusion. I had no idea about the threats, but I do know his wife had a baby while all this was going on.
There is a very strange, totally coincidental correlation: if you are smart and NOT trying to raise money for an AI start-up, you think AGI is far away, and if you are smart and actively raising money for an AI start-up, then AGI is right around the corner. One of those odd coincidences of modern life.
Mind you, such a correlation can be reasonable: the Yesses work on something because they believe in it, while the Noes don't because they don't. (In this instance I'm firmly a No, and I don't think the correlation here is the reasonable kind; I'd put it down to the corrupting influence of money, plus hype sweeping people along, which I think are much more common. But at least some will still be True Believers, and it does make sense that they would then try to raise money to achieve their vision.)
This is very similar to the conclusion I have been coming to over the past 6 months. Agents are like really unreliable employees that you have to supervise and correct so often that it's a waste of time to delegate to them. The approach I'm trying to develop for myself is much more human-centric. For now I just directly supervise every action an AI takes, but I would like to move to something like this: https://github.com/langchain-ai/agent-inbox where I, as the human, am the conductor of the work agents do, and they check in with me for further instructions or corrections.
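A minimal sketch of what I mean by "conductor" (this is not agent-inbox's actual API; `plan_next_action` and `execute` are hypothetical stand-ins for whatever agent framework you use):

```python
# Minimal human-in-the-loop "conductor" sketch: the agent proposes each
# action, and nothing executes until the human approves, edits, or rejects
# it. `plan_next_action` and `execute` are hypothetical stand-ins for your
# agent framework's planning and tool-execution calls.

def supervise(task, plan_next_action, execute):
    history = []
    while True:
        action = plan_next_action(task, history)
        if action is None:                     # agent thinks the task is done
            return history
        print(f"Agent proposes: {action}")
        choice = input("[a]pprove / [e]dit / [r]eject / [q]uit: ").strip().lower()
        if choice == "q":
            return history
        if choice == "r":
            history.append(("rejected", action))
            continue
        if choice == "e":
            action = input("Revised action: ")
        result = execute(action)               # runs only after explicit approval
        history.append(("approved", action, result))
```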
Has anyone figured out an elegant way to add front-end design to a process like this? Every implementation I see includes either vague references to front-end frameworks or Figma images. It doesn't feel like a cohesive design solution.
I have a folder of SCSS files containing the utility classes and custom properties that make up the design. If I instruct it to use those and to reference similar existing files, it more or less conforms nicely to the existing design language.
I used this to teach high school students. Probably not sufficient to get what you want, but it should get you off the ground and you can run from there.
https://youtu.be/86FAWCzIe_4?si=buqdqREWASNPbMQy
I tried switching to this a few years ago, but switched back to Obsidian. My problem was actually with its strongest feature: the VS Code integration. Only about a third of my note-taking is related to software development, so going into VS Code the other two-thirds of the time felt clunky. If 100% (or close to it) of your note-taking is software related, this is a great product.
I used both from, essentially, day 1 and had a similar experience. Obsidian is just fantastic and I vastly prefer keeping it separate from my coding, even for coding-related notes.
Though once in a while I'll open the notes in VS Code just to make use of things like its better find/replace, regex search, etc., especially across all files.
I like the concept, but I think a more reliable, less compute-intensive way to implement it would be to use AI to call up non-AI data. I could just type "some red beans and rice", the LLM parses what I mean, and it retrieves stored, verified data.
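A minimal sketch of that split, where the only thing the LLM does is map free text to canonical entries in a stored table (`llm_extract_foods` is a hypothetical wrapper around whatever model API you use, and the nutrition numbers are illustrative):

```python
# Sketch of "LLM parses intent, verified data answers": the model only maps
# free text to canonical food names; the nutrition facts come from a stored,
# human-verified table, never from the model. `llm_extract_foods` is a
# hypothetical wrapper around an LLM API; the values below are illustrative.

VERIFIED_DB = {
    "red beans, cooked": {"kcal_per_100g": 127, "protein_g_per_100g": 8.7},
    "white rice, cooked": {"kcal_per_100g": 130, "protein_g_per_100g": 2.7},
}

def lookup(query: str, llm_extract_foods) -> dict:
    # e.g. llm_extract_foods("some red beans and rice")
    #      -> ["red beans, cooked", "white rice, cooked"]
    names = llm_extract_foods(query)
    return {name: VERIFIED_DB[name] for name in names if name in VERIFIED_DB}
```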
That's what OpenNutrition does. However, in many cases there is no publicly accessible "non-AI data" source to refer to. OpenNutrition tries to bridge that gap, using public data when available and providing additional inferred data to fill in the rest. For "red beans" and "rice", OpenNutrition provides a long list of foods with full citations to public databases. See the "References" section, where you can click through to the source material.