I wonder if Comma.ai will ever be open to incorporating this into openpilot.
I always thought the argument that humans are adequate drivers and hence only cameras was not great. Why not actually be better than humans at sensing and driving?
Quite, I love the aesthetic of the watch stand. The exposed lacquered wire makes it look really exotic, and in general the execution is top-notch. It's really tempting to throw stuff like this together and call it a day once it works (that's more my style...), so I have a lot of respect for people who can finish these things to the point where they not only work but also look good.
I currently use Banktivity which is OK. Would love to hear from any others that have used Banktivity and migrated to something else. Ideally, there should be OFX support.
FYI you should have used llama.cpp to do the benchmarks. It performs almost 20x faster than ollama for the gpt-oss-120b model. Here are some sample results on my spark:
Is this the full-weight model or a quantized version? The GGUFs distributed on Hugging Face labeled as MXFP4 quantization have some layers quantized to int8 (q8_0) instead of bf16 as suggested by OpenAI.
For example, looking at blk.0.attn_k.weight, it's q8_0, among other layers:
I tested the following prompt locally using almost all available models, and not a single one got the right answer.
"What is 9:30 am (Taiwan Standard Time, TST) in US Pacific?"