Same here. Embedded programmer, primary language being C. I sat here staring at that line for a solid 30 seconds trying to understand the intent - it’s nonsensical. I read your comment and see the intent you’re guessing at but… you would not generally write C like this. Pointer arithmetic is to be avoided, and this case is contrived. I don’t think a compiler warning would be thrown here though - maybe that is the point, that the language and compiler would not catch this issue and that typos or novice learners can make compilable mistakes too easily?
I mean, this surely wouldn't be written this obviously in real code. It'd be buried somewhere in a data structure that's packed to fit better in a cacheline, with something like:
struct edge {
    struct node *src;
    int dst_offset;
    int label;
};
Knowing about integer overflow, the original programmer carefully wrote overflow checks where dst_offset gets computed, and the code was correct. Nodes were allocated into one contiguous array, and edges were allocated into another.
Later, someone else changed how nodes were allocated so that adding a new node would never trigger a realloc of the whole graph; instead, a linked list of node arrays is allocated. Now computing dst_offset (a subtraction between pointers into two different arrays) is UB, and yet the observed behavior of the resulting program is the same.
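A minimal sketch of the failure mode described above (the helper name is hypothetical, assuming dst_offset was derived by pointer subtraction):

```c
#include <stddef.h>

struct node { int label; };

/* Hypothetical helper: compute dst_offset for an edge.
 * Pointer subtraction is only defined when both pointers point
 * into (or one past the end of) the same array object (C11 6.5.6). */
int compute_dst_offset(const struct node *src, const struct node *dst)
{
    ptrdiff_t d = dst - src; /* OK only while src and dst share one array */
    return (int)d;
}
```

While all nodes lived in one contiguous array this was well-defined. Once nodes are spread across a linked list of arrays, the very same subtraction is UB, even though it may keep "working" on a given toolchain until, say, an inliner change exposes it.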
Even later, someone updated GCC on the CI machine, and now the inliner is more aggressive. Suddenly, mutations to node labels are unreliable, and edges are found to not refer to any node in the graph, except when debug logging logs the address they actually refer to.
This is the kinda thing that requires an understanding of UB that has nothing to do with the actual hardware to diagnose and fix, which I think is really unnecessary for a beginner.
Not a hardware bug, but in embedded I ran into a fun one early into my first job. I set up a CI pipeline that took a PR number and used it as the build number in a MAJOR.MINOR.BUILD scheme for our application code. CI pipeline done, everything worked hunky-dory for a while, and the project continued on. A few months later, our regression tests started failing seemingly randomly. A clue to the issue: closing the PR and opening a new one with the exact same changes would cause tests to pass. I don’t remember exactly what paths I went down in investigation, but the build number ended up being one of them. Taking the artifacts and testing them manually, build number 100 failed to boot and failed regression; build 101 passed. Every time.
Our application was stored at (example) flash address 0x8008000 or something. The linker script stored the version information in the first few bytes so the bootloader could read the stored app version; then came the reset vector and some more static information before getting to the executable code. Well, it turns out the bootloader wasn’t reading the reset vector: it was jumping to the first address of the application flash and executing the data there. The firmware version at the beginning of the app was being executed as instructions. For many values of the firmware version, the instructions that data represented were just garbage (ADD r0 to r1 or something), and the rest of the static data before the first executable code also didn’t happen to have any side effects, but SOMETIMES the build number would decode to an instruction that sent the micro off into lala land: a hard fault or some other illegal operation.
Fixed the bootloader to dereference the reset vector as a pointer to a function and moved on!
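A host-testable sketch of that fix (layout and names are hypothetical; on a real Cortex-M the vector table entries are 32-bit words and you would also load the initial stack pointer before jumping):

```c
#include <stdint.h>

typedef void (*app_entry_t)(void);

/* By convention, word 0 of the vector table holds the initial stack
 * pointer and word 1 holds the reset handler. uintptr_t is used instead
 * of uint32_t so this sketch also compiles on a 64-bit host. */
static app_entry_t get_reset_handler(const uintptr_t *vectors)
{
    return (app_entry_t)vectors[1];
}

/* Stand-in for the application's reset handler, for demonstration. */
static int booted;
static void fake_app(void) { booted = 1; }
```

The buggy bootloader effectively did `((app_entry_t)APP_BASE)()`, executing the version bytes; the fix dereferences the table entry and jumps through it instead.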
From CI pipeline to bootloader would make me about-turn and nope out of embedded so fast if that was my first job.
That level of skill requirement is like a whole department in one person. Hopefully that company had some patient seniors.
Have you found a solution to mid-large scale embedded test setup? Could you provide some shallow insight into frameworks or other infrastructure used for embedded testing? I've previously been responsible for firmware testing small production volume devices in aerospace but have since moved to a high volume product with multiple active hardware revisions and no test infrastructure currently in place. It's a different beast to test now while trying to balance schedule and feature dev.
If you're trying to validate new software on known good hardware, then your goal at any scale is to insert a suite of embedded software acceptance tests into your development pipeline, usually somewhere between build and release. CI infra is ideal for this; my counterparts all use Jenkins.
Robot Framework is a great tool for expressing tests. For simple setups, Jenkins has an RF plugin to kick off tests.
The tricky part is managing many test resources. One Jenkins server triggering Robot tests on one bench PC with one embedded target is a great way to develop an initial test suite. But once your org assigns you more hardware, the tricky part is figuring out how to use it maximally (either optimizing for fastest time to completion or for least idle time of your dedicated test hardware). This means the tests themselves can no longer "drive" the test resources directly; they need to lease them from a resource manager service.
Larger companies with nearly homogeneous hardware may get by with a basic priority queue. But some test setups with an L1 or L2 switch between them may need a DSL that lets tests request resources that are physically or logically connected, plus abstraction layers for common tasks such as "physically disconnect power", which the resource manager could implement as an SNMP command to a PDU or as a SCPI command to a DC supply. The test itself should declare what has to happen, not how.
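A tiny sketch of that "declare what, not how" layer in C (all names and log strings are invented for illustration; a real backend would speak SNMP or SCPI instead of formatting a string):

```c
#include <stdio.h>

/* The test sees only the abstract operation; the resource manager
 * binds it to a concrete backend at lease time. */
struct power_control {
    void (*power_off)(const void *ctx, char *log, size_t n);
    const void *ctx; /* backend-specific handle (here: just a name) */
};

/* Backend 1: hypothetical PDU reached via SNMP. */
static void pdu_power_off(const void *ctx, char *log, size_t n)
{
    snprintf(log, n, "SNMP: outlet %s off", (const char *)ctx);
}

/* Backend 2: hypothetical bench DC supply reached via SCPI. */
static void psu_power_off(const void *ctx, char *log, size_t n)
{
    snprintf(log, n, "SCPI: %s OUTP OFF", (const char *)ctx);
}
```

The test code calls `pc->power_off(pc->ctx, ...)` and never knows which instrument actually cut the power, so the same suite runs on benches with different gear.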
I've scoured the internet and plied counterparts in other industries for any example of commercially available prior art. Everyone I've talked to wrote their own. It took a week to produce something acceptable for my use case. But if the RF devs were to maintain a similar solution, I'd switch to that in a heartbeat.
True, unless the timezone was tied to each user's account. The website doesn't have to "not work"; it just has to return a page that says something to the effect of "we are closed for business."
There are cases where the business shouldn't promote the use of its website at odd hours (child protection, gambling addiction and other similar things).
I have a limited understanding, but I think this is just further reproduction of the same science. I believe there's some controversy over non-locality, in that entanglement and "spooky action at a distance" could still be a result of hidden local variables. By increasing the distance between the sources of the photons, you would give more evidence for non-locality. Please correct me if this is off base.
"Good" [1] Bell experiments have already been performed by several groups, giving us high confidence that any local hidden-variable theory is incorrect. So there is very little controversy over the matter. Moreover, the Bell test here is not a "good" Bell test, since the results were post-selected, i.e. only some experimental data points were used.
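For reference, the inequality these experiments test is the CHSH form of Bell's bound: with correlations $E$ between measurement settings $a, a'$ on one side and $b, b'$ on the other, any local hidden-variable theory must satisfy

```latex
S = \left| E(a,b) - E(a,b') + E(a',b) + E(a',b') \right| \le 2
```

while quantum mechanics on a maximally entangled pair can reach Tsirelson's bound $S = 2\sqrt{2} \approx 2.83$. The loophole-free experiments measured $S$ significantly above 2, which is what rules out local hidden variables.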
The important part of the paper is the first result about two-photon interference. It shows that two photons from Sun and Earth can be made indistinguishable from each other and hence show maximum interference. This is evidence for the postulate of quantum theory that all photons (or other fundamental particles) can be made identical to each other.
Non-locality means that the way the probability of presence (wave function) of a photon is determined is by trying out all possible paths through the entire universe at once rather than only in regions nearby the photons?
The path integral formulation is the most prevalent and successful version of quantum physics (this is what you describe in the second half). Non-locality is different: it supposes that there are hidden mechanisms we have not uncovered (and may not be able to uncover) which describe our world. Any such theory must be non-local to reproduce the observed experimental results. Non-local means information about the universe propagates faster than (conventional) causality allows, i.e. faster than the speed of light.
Either case seems to require faster-than-light or instant communication between alternative paths that are being tried out, so the distinction is whether the combination of different paths is mediated by another faster-than-light (non-local) particle or whether the combination is axiomatic to the laws of the universe without a particle mediating it?