I found that if you follow a bunch of people on Twitter who specialize in a niche topic, you'll see a lot of casual, interesting conversations there
But the problem is that you can't start new discussions or ask questions, because unless those people follow you back no one will see or reply to your tweets
It is and it isn't. It's at ambient pressure (which is useful), and there is something very odd happening much higher up that needs to be explained. They say their sample purity is higher than the Korean team's, so that would normally lead to better yield and easier confirmation of superconductivity. But since other samples also show the Meissner effect at room temperature, there is a lot that still needs explaining before we can call it a failed reproduction.
Yes, that's possible, and it's something that has already happened before: a chance contamination is exactly how X-rays, and eventually radioactivity, were discovered.
Yeah, it looks to me like a replication failure, or even evidence AGAINST a superconducting phase. A superconductor's resistance curve is supposed to show a sharp drop to zero at the transition temperature. If it's a dirty, inhomogeneous sample (e.g. specks of superconductor embedded in non-superconducting material), you get a kink where the curve descends to a non-zero background resistance. In the Southeast University data, there's just a smooth curve that goes down until it hits the noise floor. There's no transition.
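For intuition, here's a toy sketch (all numbers invented, not taken from any actual measurement) of the three resistance-vs-temperature shapes described above: a clean superconducting transition, a dirty sample with a kink down to a non-zero background, and a smooth curve with no transition at all:

```python
# Toy resistance-vs-temperature curves, purely illustrative; the
# temperatures, resistances, and slopes are all made up.

def r_clean_sc(t, tc=100.0, r_normal=1.0):
    # Clean superconductor: sharp drop to exactly zero below Tc.
    return 0.0 if t < tc else r_normal

def r_dirty_sc(t, tc=100.0, r_normal=1.0, r_background=0.3):
    # Dirty/inhomogeneous sample: a kink at Tc, but the curve only
    # descends to a non-zero background resistance.
    return r_background if t < tc else r_normal

def r_no_transition(t, alpha=0.005):
    # Smooth curve with no transition: resistance just slides down
    # with temperature until it disappears into the noise floor.
    return alpha * t
```

The point is the qualitative shape: only the first curve ever actually reaches zero, and only the first two have a discontinuity at Tc.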
I don't think it's a failure if the original claims are showing some kind of promise: science is being done, and the frontiers of knowledge are being pushed forward.
Methinks it all boils down to some kind of lingua franca for document structure. Although things like footnotes and tables of contents won't fit, not any time soon anyway.
Obviously it depends on the person but I did 3 years of an EE degree then switched to CS so I've experienced both
EE was definitely harder for me, mainly because engineering had you taking a lot more credits at once than science did
And doing vector calculus/electromagnetic field math is way harder than proofs, IMO
Also, debugging software is so much easier than debugging hardware/circuits. The real world is so much messier than computers, where things are cleanly true/false.
You can put a circuit together perfectly, but it turns out some component you ordered was busted, or the tolerance is wrong, or something got fried accidentally, and it can be really painful to track down with a multimeter/oscilloscope.
Whereas that doesn't really happen in software. You don't have a program that works one day and then suddenly the next day "if statements" stop working. So much more can go wrong in the physical world.
My main job now is pretty high-level C++, but I don't regret the EE part, because it forced me to do a few courses in Verilog, and there's no better way to really understand how a computer works than building a simple CPU on an FPGA
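For anyone who hasn't done that exercise: the heart of it, stripped of all the Verilog and timing details, is just a fetch/decode/execute loop. A toy Python version (the instruction set and encoding here are completely made up) looks like:

```python
# Minimal accumulator machine with four invented instructions:
# LOAD imm, ADD imm, STORE addr, HALT.

def run(program):
    mem = {i: ins for i, ins in enumerate(program)}
    acc, pc = 0, 0            # accumulator register and program counter
    store = {}                # a stand-in for data memory
    while True:
        op, arg = mem[pc]     # fetch
        pc += 1
        if op == "LOAD":      # decode + execute
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "STORE":
            store[arg] = acc
        elif op == "HALT":
            return store

prog = [("LOAD", 2), ("ADD", 3), ("STORE", 0), ("HALT", None)]
# run(prog) leaves 5 at address 0
```

On the FPGA the same loop becomes registers, a mux for the decode, and a clock, which is exactly why the exercise is so instructive.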
Oh yeah, those Verilog courses are like the next level up from nand2tetris for wrapping your head around those kinds of things. I got a lot of mileage out of writing automated scripts that drove Xcelium's interactive shell, so I could get a faster debug loop going.
What do you do now that involves C++, if you don't mind me asking? I too have an EE degree but have been doing web dev for the last few years; I'd like to get back into lower-level software development, or work on something where I can actually use my degree.
The craziness in software starts at the systems level, especially distributed systems. The madness of debugging distributed systems (at present) shouldn't be underestimated :)
This one is a CPU-based rasterizing renderer; it gives you a good understanding of what a GPU graphics pipeline does underneath.
In the graphics world the two common ways of rendering things are either rasterization or raytracing.
Raytracing is basically what all the movie/VFX/CGI/offline renderers use (although it's also been used for certain parts of real-time rendering in recent years)
Rasterization is how most real-time renderers, like the ones used for video games, work.
If you're interested in graphics I'd highly recommend implementing a ray-tracer and a rasterizer from scratch at least once to get a good mental model of how they both work.
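To make the contrast concrete, the innermost question each one asks is different: a ray tracer loops over pixels and asks "what does this ray hit?", while a rasterizer loops over triangles and asks "which pixels does this triangle cover?". A minimal sketch of both core tests in Python (the scene values are arbitrary):

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Ray tracing's core test: does this per-pixel ray hit the object?
    Solves the quadratic |o + t*d - c|^2 = r^2 for the nearest t >= 0."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None               # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t >= 0 else None  # nearest hit in front of the origin

def edge(a, b, p):
    # Signed area of the (a, b, p) triangle, i.e. an edge function.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def point_in_triangle(p, v0, v1, v2):
    """Rasterization's core test: is this pixel inside the projected
    triangle? Checks the sign of three edge functions (CCW winding)."""
    return edge(v0, v1, p) >= 0 and edge(v1, v2, p) >= 0 and edge(v2, v0, p) >= 0
```

A real renderer wraps the first in a loop over pixels (and bounces), and the second in a loop over triangles and their screen-space bounding boxes, but these two predicates are the respective hearts of each approach.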
Out of curiosity, is there a third way? The two you mention are the big ones, I know, but your phrasing implies (perhaps unintentionally) the existence of at least one more.
vk_mini_path_tracer is a beginner-friendly introduction to writing your own fast, photorealistic path tracer in under 300 lines of C++ and 250 lines of GLSL shader code, using Vulkan.