That very cheap gigabit copper SFP was running hotter than I'd like -- it probably would have been fine, but this rig is meant to run outside while camping off-grid in the sun in central Florida. So I put some heatsinks from my 3D printing stash on there and so far they've stayed put.
In this system, the Hex S is running OpenWRT and is configured as a PoE-powered managed switch. In that role, it switches packets and does VLAN stuff fine, and is probably a bit of overkill.
But it's also one of several layers of manual redundancy, which is important in that environment: One does not simply go to the store and buy special electronics in central Florida. So if it isn't included in the travel kit, then it doesn't exist.
With one shell script, it stops being just-a-switch and becomes a router with all the usual services, plus SQM tricks and multiple WAN ports. The rig works well.
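For the curious: the "one shell script" flip from switch to router on OpenWrt is mostly UCI calls. Here's a hedged sketch of the general shape -- the port names and addresses are illustrative assumptions, not my actual script:

```shell
#!/bin/sh
# Sketch: promote an OpenWrt box from dumb switch to router.
# Port/interface names ('eth0', addresses) are examples only.

# Pull one port out of the LAN bridge to become the WAN port...
uci del_list network.@device[0].ports='eth0'

# ...and define a DHCP-client WAN interface on it.
uci set network.wan=interface
uci set network.wan.device='eth0'
uci set network.wan.proto='dhcp'

# Make the LAN side a proper gateway: static address, DHCP server on.
uci set network.lan.proto='static'
uci set network.lan.ipaddr='192.168.1.1'
uci set network.lan.netmask='255.255.255.0'
uci set dhcp.lan.ignore='0'

uci commit network
uci commit dhcp
/etc/init.d/network restart
/etc/init.d/dnsmasq restart
/etc/init.d/firewall restart
```

The reverse flip is just as mechanical, which is what makes the redundancy cheap to carry around.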
RouterOS, although I'm only using the switch-related functionality.
I found that the temperature of the 10G modules has almost no relation to their cost. So far, the coolest-running modules are the 10Gtek ones, which are also the cheapest. MikroTik's 10G modules are more expensive, and they also run hotter.
Maybe? What follows is just my own dumb anecdotes.
For a long time, I sometimes had issues. I'd keep anti-diarrhea pills in stock at home. I kept some in the car. I even had some in blister packs in my wallet (they'd get smashed up over time, but they still worked in powdered form and the desperation was very real).
I didn't know why that was a problem, but I definitely knew it was a real problem and that it could erupt at any time, so I treated the symptoms when that was useful to me. Sometimes, those shitty days on the toilet were intense. They'd wreck me, physically and mentally, for far longer than I want to think about.
Eventually, after decades, I noticed a pattern: Milk. Days when I drank milk or ate ice cream were much more likely to be problematic than days when I did not.
But then, I noticed that some other milk products like cheese were usually just fine. And that made sense and fit the pattern well, because the fermentation of cheesemaking reduces lactose very significantly.
And I like milk. So, experimentally, I started buying lactose-free milk. This worked well, but it was expensive and it tastes different. That helped to further define the pattern.
I started buying cheap lactase tablets instead, in bulk. That saved a fair bit of money, tasted good, and it also worked fine. This also reinforced the observed pattern.
Somewhere along the line, I became interested in kefir, so I bought some completely non-mystical mass-produced kefir from the grocery store and drank some.
Kefir treated me fine (yay fermentation). I found that adding a bit of kefir to a glass of milk also worked: That was never problematic at all, even without lactase tablets. (And it let me stretch that delicious, to me, kefir flavor out over a larger volume -- which also saved some money.)
These observations strongly suggested to me that I was lactose-intolerant.
This went on for a long time; several years. Lactase or kefir, with milk, in various amounts -- whenever I felt like it. I thought I was proactively managing my apparent lactose intolerance very effectively. And by observation, I was indeed doing so. Keeping active stock of anti-diarrhea pills always nearby was reduced to kind of a fuzzy memory.
---
And then one day, I wanted a nice big ice-cold glass of milk, so I poured myself one. I went to the cabinet in the kitchen, but the lactase bottle was empty. I went to the fridge, and the kefir was gone.
So there I am, with a big glass of milk and nothing to help me digest it.
My health-and-sanitation spidey-sense refuses to let me pour stuff back into containers, and my dread of waste refuses to let me pour it down the drain.
So I drank that milk. It was every bit as delicious as I expected.
And I expected (anticipated) the worst, but nothing bad happened. Everything was fine.
One sample isn't a trend, so I had more later. That was fine, too.
Weeks went by, then months. Now years. No issues: Milk goes in, and everything comes out properly.
I can have milk without assistance whenever I want, and that's fine. The previously clear and evident pattern that suggested lactose intolerance is simply broken.
---
So now I don't have lactase tablets in stock anymore. I still drink the least-fancy milk I can get at the grocery store whenever it suits me.
I do enjoy some kefir from time to time (I love the taste of it), but I haven't had any of that for several months now either.
And I'm still fine. I'm doing really well in that area, really.
I'll leave it to the microbiologists to explain the hows and the whys; that's not my field of study. All I know is that this aspect of my life is way, waaaaaaay better than it was.
I'm very deliberately not providing causation or theories here. This is just my story, and I'm sticking to it.
---
(Now, someone reading this probably has some questions that are shaped like "Holy hell. Decades? Why didn't you at least go to the doctor or something?"
And that has a simple, dumb-as-bricks, one-word answer: 'Murica.)
The lack of clarity is in keeping with the USB C connector itself, which may supply or accept power at various rates or not at all, may be fast or slow, may provide or accept video or not, and may even provide an interpretation of PCI Express but probably doesn't.
It probably looks the same no matter what, and the cable selected to use probably also won't be very forthcoming with its capabilities either.
The USB A connector stayed the same between USB 1, 2, and 3. Yet most manufacturers voluntarily distinguished them by giving USB 1 and 1.1 a white insert in plug and port, USB 2 a black insert, and USB 3 a blue one.
This was neither standardized nor enforced, yet it worked remarkably well in the real world.
Then we decided to just have no markings at all on USB C cables. On the ports, at least, we occasionally get little thunderbolt or power symbols.
The exterior of the USB A connector stayed the same. The number of pins increased when we went from USB 2 to 3. So, even in this case, it’s slightly more complicated. The colors helped because the capabilities were very different between the ports. But when the USB IF increased the number of options (and reduced the size of the connector), different colors became impossible to do.
The problem is that there are too many uses for one connector. But this is what we wanted: a reduced number of standardized connector/power options.
> Then we decided to just have no markings at all on USB C cables.
I'm shocked the LTT TrueSpec cables are the first I'm aware of to do such a small and basic thing. I have so many USB C cables and no idea which are power-only, USB 2 only, or what. Such a mess.
… and an M1 MacBook will source 5V/3A all day long to a non-PD-negotiated sink. Somewhere between the M1 and M3, Apple decided to buy into USB-IF compliance and limit that to 500mA.
This has led to some very embarrassing “works on my computer” situations on prototype boards shared with my EE colleagues (I’m a software guy who dabbles in hardware when I need to).
It doesn't take PD negotiation to get 5v, 3A from a compliant source. A 5.1k resistor or two (quantity depends on placement in the overall circuit) is sufficient.
This may be a matter of semantics, but I can't bring myself to call a resistor a negotiator. They only do one thing and they're very resistant to other options. :)
With nothing connected to the CC line(s) at all, there should be no output voltage on VBUS. It shouldn't be 5v @ 3a, or 500mA, or anything else -- it should be ~exactly 0v, and therefore also 0a.
A resistor or two tells the power source what we want. Without it (or some, you know, actual PD negotiations), we get nothing.
---
A careful reader will note the repeated quantity distinction. Let me explain that.
Every USB C socket has both CC1 and CC2 pins. They're on opposite sides of the connector and get used for sorting out PD, and for detecting the cable's connector orientation (if/when that matters).
But a cromulent USB C to USB C cable can have just 1 CC wire, and that's OK. It works; it isn't even wrong. To get such a cable to coax 5v from a 5v/3a source and get power for a prototype widget on Gilligan's Island, with the cable already cut in half to get at the wires inside: Wire up power and ground to your prototype. And put a 5.1k resistor between that single CC wire and ground. Voila: We've requested 5v at up to 3a.
Or: If we're being a bit more proper and snooty and want to do it The Right Way, and we actually have a USB C jack to prototype with, then that more-ideally takes two 5.1k resistors; one to pull CC1 to ground, and another to pull CC2 to ground. This does the same thing, but it does it on the connector side of things instead of the daunting no-man's-land of wires. Only one of these resistors will ever be used at one time.
Or: If we have a USB C jack and can only scrounge up one 5.1k resistor (maybe we only have a single #2 pencil to whittle down to 5.1k of resistance), or we're being particularly lazy, then that's OK too. Pick CC1 or CC2 and put 5.1k between there and ground. It will work with the cable plugged in one way, and it won't work with the cable flipped 180 degrees. That can be enough to get a thing done for the moment or whatever. (There's no solution that is as permanent as a temporary one.)
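Mechanically, what the Rd resistor "asks for" is read back as a voltage: the sink's 5.1k to ground forms a divider with the source's Rp pull-up, and the level on the active CC pin advertises the current budget. Here's a small Python sketch of that classification, using what I understand to be the nominal sink-detection windows from the Type-C spec -- treat the exact thresholds as an assumption and check your spec revision before trusting them in hardware:

```python
# Sink-side sketch: with Rd = 5.1 kOhm pulling CC to ground, the source's
# Rp pull-up sets a CC voltage that advertises how much current we may draw.
# Threshold windows below are the nominal sink-detection ranges; verify
# against the USB Type-C spec revision you're designing to.

def advertised_current(cc_volts: float) -> str:
    """Classify the voltage seen on the active CC pin at the sink."""
    if cc_volts < 0.20:
        return "no source attached (CC floating or grounded)"
    if cc_volts <= 0.61:
        return "default USB power (500 mA on USB 2.0, 900 mA on USB 3.x)"
    if 0.70 <= cc_volts <= 1.16:
        return "1.5 A at 5 V"
    if 1.31 <= cc_volts <= 2.04:
        return "3.0 A at 5 V"
    return "out of spec -- treat as default USB power"
```

For instance, a 5v/3a source with its Rp pull-up against our 5.1k Rd lands the CC pin at roughly 1.7v, which falls in the 3.0 A window.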
---
These are some of the things I learned when I was in the field and needed a 5v, >2.5a power supply to replace one that had died. I said to myself, "Self, just go over to Wal-Mart and get a 3a USB C power brick that comes with a cable, cut and splice that cable to fit the widget that needs power, and call it done. If it dies in the future, replacing it will be intuitive and fast."
So dumb ol' me went to Wal-Mart and bought exactly that, and I quite confidently set forth with the splicing.
This did not work. At all.
And that was a harsh rabbit hole to dive into, but it was ultimately fine. After I got back that evening I soldered a 5.1k resistor (of 1206 SMD form) mid-span between the CC wire and ground, and finished the adapter-cable quite neatly with some adhesive-lined shrink tubing.
Doing it this way got the customer's gear working faster than ordering the "right" parts and waiting for them to show up would have, and it still works. That's all been a few years ago now; I consider it to be as permanent as anything ever really is.
> The lack of clarity is in keeping with the USB C connector itself, which may supply or accept power at various rates or not at all, may be fast or slow, may provide or accept video or not, and may even provide an interpretation of PCI Express but probably doesn't.
It gets even worse.
I now have two cheap Chinese gadgets (a Cheki printer and a tire inflator) that have USB-C ports for charging, but will only charge with the wire that came with the gadget. The other end of that wire is an old-style USB-A plug.
It seems that USB-C sockets are cheap enough parts to use them for everything, even if the manufacturer isn't going to put any actual USB circuitry behind them.
Edit: Three. I forgot about my wife's illuminated makeup mirror.
Wow, thanks for sharing this. Like the parent commenter, I have an increasing number of cheap devices like this. I wonder if anyone sells an "enclosed" version of this product. This won't survive 5 minutes in my house, haha.
A quick google found one, but they're $17 each (!) and it's from a site I've never heard of and can't vouch for, so I'm not bothering to link it here.
I'm really surprised there aren't a number of these all over Amazon. Or if there are, they're using different keywords to describe them, so I can't find them.
Note: If a device just needs 5V power (like many microcontroller-focused devices), USB C is convenient, because chargers and cables are ubiquitous. And they all (with exceptions like the one you mentioned) support 5V DC power.
Bonus: You can enable USB 2.0 data transfer as well, for firmware updates, computer interfaces, etc.
So: cheap/ubiquitous part, and everyone already has cables and AC adapters for their local plug. I think it's a great default power connector.
Ah that's a fun misuse of USB ports. The companies will often even dodge issues with the USB-IF by labeling the ports as Type C and letting the customer's mind fill in the word USB.
I wish these devices would just use barrel jacks, labeled with the voltage and polarity. But these manufacturers know that the USB-C port weighs into buying decisions (and they know that most people have zero clue about the difference between a physical port and the electrical/protocol specs).
My aftermarket android auto display uses the type c connector for power input - wired directly to raw vehicle power. It will not run on 5v. It doesn't negotiate pd either. It just expects around 13 volts right on the power pins, and the supplied power cable does exactly that. It's portable too, which means that some poor person plugged their cable into their phone and blew it up.
I hate barrel jacks; every single time I encounter one, it's different from any adapter I have. Size, voltage, and polarity can all differ. People got sick of having 10 different power adapters to charge stuff. Hence the demand for a "single connector," which seems to have converged on the USB-C form factor.
Right, but if it's not actually USB-C, at best you're looking at the device not working when plugged into a proper USB-C power supply. At worst you're facing fried electronics.
Agreed. That would be like wiring a standard North American household wall outlet with 240VAC: technically possible, but it will probably fry anything not expecting it.
I came across a group of racks in the IT room in a (US) factory once that had 208v on their standard NEMA 5-15R sockets.
Their global-market IT stuff didn't care at all. But some of the US-market audio stuff I was integrating came with old-school linear power supplies, and those items cared a great deal.
I've run into that exact thing too, though the sockets weren't 5-15R but IEC C13 in a rack CDU. But someone had some adapter pigtails from C14 to a standard NEMA socket, which of course doesn't change the voltage at all. Hilarity ensued.
I repaired a device like that a while back -- it only took two half-cent resistors and a half-assed soldering job to make it compatible with standard USB-C cables and chargers: https://www.nfriedly.com/techblog/2021-10-10-v90-usb-c/
Yeah, they got cheap. They either got cheap with the BOM, or they got cheap with the QC and never tested it with USB C power sources, or they got cheap with the spec and it's working as-designed.
It just takes a couple of insignificant resistors and a USB C socket that brings out CC1 and CC2 to pads on the board to do it right. I wrote about how that works in a sister comment if you want to read more.
But those devices will charge/work just fine with any bog-standard USB A to USB C cable and any decent power brick with USB A outputs. It doesn't have to be the exact cable they came with.
It's annoying in the "you cheap bastards" sort of way, but regular A to C cables will work.
(If it's really important to you, then it can be possible to hack in a couple of 5.1k resistors inside the cheap-bastard devices and make them work with regular USB C power bricks and regular USB C to C cables. The resistors will tell the source to provide 5v at up to 3A. All compliant USB C cables are required to safely pass 3A.
The mod can range from very easy, to somewhat problematic, to "fuck this, I quit". In the best case, there are already pads on the board to connect CC1 and CC2 to ground; just solder in the resistors. Failing that, the pins are probably brought out at the connector itself, so it can be bodged with some extra wire.
But reality is a cruel mistress and not all available PCB-mounted USB C connectors expose CC1 and CC2 at all, although in a sane and pure world absolutely all of them should.)
[tl;dr, just keep an A to C cable with the devices, always have USB A where they get used, and forget about it. The next round of cheap stuff will be better, worse, or the same, and that's a future problem.]
My audio interface is a Linux computer with FPGAs inside (that actually get field-programmed), with two gigabit Ethernet jacks that each talk to different parts of the machine.
But I don't think anyone here would care about that. It's not such an unusual arrangement. I guess it's kind of impressive to use it on my desk at home, but in pro audio world it's actually kind of mundane.
Maybe I'll write about it more after I get the gumption to gain a root shell on it (or brick it, whichever comes first). I think you guys might find that part more interesting. :)
I’m building an audio device. It runs Linux for the control plane (it’s just a CM4 running Yocto; maybe I’ll leave SSH running on production units, maybe not, haven’t decided yet). No audio passes through the CM4; there’s a dedicated FPGA and MCU for that. It’s been a fun project, my first time doing hardware, so feel free to ask me anything!
This particular box also has RS-232, ssh (with almost zero auth), and telnet as a control plane, by default. Any of that only gets used to tweak/report various things with a rather basic human-readable protocol. (It has built-in functions to make it more secure; I just don't care on my home LAN, or on my pop-up LANs in the field. A sane person with a professional role would have it locked down and on its own VLAN/VPN, but for me and prototyping: Telnet is actually pretty good.)
I designed none of it. I just bought it, and make good use of it. New, it was a mid-4-digit box; used, they're not so bad. (And I use it every day and like it quite a lot, hence the reluctance to go harder on the potential root shell hack.)
My box, as it sits, just does general-purpose GUI-connected DSP stuff with near-realtime tweaking. I'm in the process of getting it to grok OSC, and thus Reaper or whatever, so it has a better control surface for live work.
It has a USB interface that my Linux box treats as a sound card, which works well. My main reason for wanting to get root is to examine (solve?) its ~5-minute boot times.
5 minutes in a live sound environment is the difference between having a large, active, and involved crowd, and having everyone get bored and find something else to do.
Anyway, the FPGAs here just exist to behave as DSPs and...well, digitally process [audio] signals. It works well; I really just wish it booted faster.
And that may be its downfall. :P
---
But enough about that.
What's your device do? What are your plans and dreams with it? (Do I want one?)
I've built a very small amount of hardware. At least at the level of custom PCBs and some code, it's been richly rewarding even when I screw it up, and it makes me feel like I'm on top of the world when I get it right.
Re: yours, that is a _long_ boot time. Boot time on mine isn't great, but I think I'm just going to have to accept that as an artefact of U-Boot, Linux, and an Ethernet switch chip that takes some time to initialise.
Anyway, re: my widget: it's a personal monitor mixer [1], something one might use in the studio or live, not dissimilar to existing products in the market, except: it supports up to 64 channels of Dante or AVB natively, it has a super nice (HiDPI) UI, and absolutely everything is remote-controllable using OCA (AES70) or OSC. I even have an MCP bridge so you can let Claude manage it ;) [2]
The hardware is a custom board that hosts a CM4 SOM (for the control plane and UI), a Brooklyn 3 SOM (Dante), and an XMOS which runs the mixer firmware and AVB stack. There are also some nice AKM DACs, and a Marvell Ethernet switch chip that connects the SOMs and XMOS to two external Ethernet ports.
The CM4 runs Yocto which manages the switch in DSA mode (i.e. hardware offloaded bridge), runs the gPTP and SRP stacks for AVB, the OCA daemon, and the UI (which is just a regular OCA client). SSH is presently enabled but there's not a lot to do once you're in there. Working on secure boot at the moment with U-Boot and dm-verity.
I'm astounded to hear that you consider this your first hardware project. That is, to be clear, rather fucking amazing.
At first, I wanted to ask why all that work is done locally instead of just controlling a mixer over the network. Because, I mean, when networked audio is already happening there's almost invariably some kind of mixer involved somewhere. But I think I get it: controls for mixers are all over the place, while AVB and Dante are fixed and unitized, and it's easy enough to find those streams (and/or for someone to make them available) on a network.
That makes your method very universal in application. Even when the monitor feed is an analog split (as is still often the case), it's easy-enough to convert that to Dante or AVB with a stage box [which can be rented] so the performers still control their own ears.
Nice, dude.
And yes, I want one. (Whether I can afford one or not is a different thing entirely, but the want is resolute.)
Well, also to be clear, I did find a great hardware engineer to design the board based on my somewhat outrageously over-engineered specifications. And I have been working on it for three years and haven't shipped.
It's a great question, and indeed a centralised mixer is pretty much the common approach as it allows for economies of scale. I guess there's a philosophical bent, the same reason I run my own SMTP and IMAP servers instead of Gmail: I like distributed systems. The practical bent is that, in my studio (the target market!), I only really need one or two of these, so the economy of scale doesn't apply. And interestingly with things like Lawo .edge we are seeing distributed mixers come into fashion.
And as you point out, being protocol-agnostic means that it can fit into a lot of scenarios, which might be useful (say) if it were to be a hire product.
Feel free to drop me a line if you want to chat more, I'm lukeh at padl dot com.
Dedicated test gear is a different echelon. We've got some crazy-expensive RF test gear where I work that cost way more than my house. That's an awesome corner of the world: robust-but-fickle at every turn.
The sales volume is low and the development is expensive, so the purchase price is high, too. It's an interesting thing to think about, market-wise.
SoundWire. That's an internal[ish], hard-clocked, multipoint, digital audio bus, yeah? I don't know much about it. Looks like it's mostly useful for OE car audio applications?
---
This box I have is just a finished, retail-product, general-purpose pro audio DSP with a good amount of practical analog and digital audio IO. There are many others like it in the marketplace that do very similar things, but this one has a CVE that I want to exploit for my own purposes. :)
---
I really hate being secretive. I strongly prefer to just chat about stuff here, or there, or anywhere.
But even though I'm just some dude in Ohio, my HN comments consistently show up near the top of Google search results when looking at specific topics that I've covered, sometimes just in-passing, so I'm inclined to keep the details to myself for now.
I mean: In the grand scheme of things I haven't even been posting regularly here for very long, but more than once already I've Googled a question and found a link to an answer in my own comment here.
That can be problematic.
This is a great forum for open discussion, and for releasing information, and it is absolutely the wrong forum for secret skunkworks.
If I had a spare box so I could afford to potentially fuck this one up forever, I'd get on with it already. And then, of course, I would publish the results.
I wish I could spill the beans already and maybe get some great help from someone here who does this stuff routinely, but that scares future-me. If the devices can be rooted, then I want them all to be rooted (if useful) -or- better-secured (if not useful).
That sounds fine, except I don't want them to become botnet members, either.
It's a dilemma. There's a lot of this shit out there in the world that doesn't get updated.
We don't place any value on the CE mark in the States.
A lot of consumer electronics need to be FCC compliant, which involves a process of proving that the device doesn't emit too much of the wrong EMI/RFI in the wrong places.
And safety-wise, we tend to use ETL, UL, and CSA for testing. These are third-party Nationally Recognized Testing Labs, and their own marks are used on devices they approve. But they're only really concerned about the safety of a product. In very broad strokes: If the device is proven to be unlikely-enough to burn a house down or cause electrical shock to humans, then it gets approved.
CE is a whole different thing. No government body in the USA requires or respects a CE mark on consumer goods; that mark doesn't hold any legal weight here.
Whether good or bad, CE is just not how we roll on this side of the pond.
(Of course, none of that means that laws in the EU don't affect product availability and features here. Globalization be that way sometimes.)
I'd like to reiterate that a CE mark means nothing to us here.
If my house burns down and a widget with only a CE mark is blamed as the source, my insurance company will consider that to be the equivalent of it having no marking at all.
If a company wants to sell a product globally including the USA, then CE isn't enough to satisfy the safety boffins.
The world is a big place, and the US isn't alone in this way: Lots of other countries also don't care about an isolated CE mark, like Canada and Mexico here in North America.
Some other large, important markets like Japan and Brazil are this way, too.
That's not what I'm saying. I'm not saying that a device sold anywhere in the world can't have a CE mark -- that's not it, at all. I'm also not saying that a person or company can't seek to get a CE mark for their product from wherever they are in the world (they certainly can do that).
There's a lot that I'm not saying.
What I am saying is that there are places in the world where the CE mark (and the presence or absence of it) means nothing, and that Canada is one such place.
Y'all have your own safety marks up there.
CSA is a big one -- you've had that organization up there and doing great work for over a century. cUL is another very common, accepted mark in Canada.
That's not what they're saying. They're saying that in the US, a device can have the CE mark, but that's not indicative of it passing US safety standards.
Also, I'd be surprised if all those Chinese devices have actually earned that CE mark.
It saves on rewiring stuff. Maybe there's only one person talking today. Maybe they're using PC A, or perhaps they're using PC B instead.
Or maybe there's two people in the room, each on different channels altogether. In this case the other person is just uncorrelated background noise instead of a persistent echo.
Or, in-context: There's two people in the same room, both talking on the same Discord channel.
Anyway, audio routing is useful. Being able to route audio between two different PCs is a pretty neat feature of the RODECaster.
I'm not sure if it was what OP meant, but it's arguably a good availability technique (as long as you can generate the checksum, that is). Like, if I want to run custom firmware and flash it, having a checksum which verifies that the firmware isn't corrupted may help prevent bricking.
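Concretely, the pre-flash check is tiny. A sketch assuming the vendor (or you) publishes a SHA-256 next to the image -- the function name and filenames are mine, not any particular tool's:

```python
import hashlib

def firmware_ok(path: str, expected_sha256: str) -> bool:
    """Hash the firmware image and compare against the published
    checksum before letting it anywhere near the flasher."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large images don't need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256.lower()
```

It obviously can't catch a bad flash or a bad image that hashes correctly, but it does catch the truncated-download case, which is a common way to brick things.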
I wonder if the cutoff date is the result of so many people posting about the date over time and poisoning the data. "Dead cutoff date theory," perhaps.
Whatever it is, the cutoff date reporting discrepancy isn't new. Back when Musk was making headlines about buying/not buying Twitter, I was able to find recent-ish related news that was published well after the bot's stated cutoff date.
ChatGPT was not yet browsing/searching/using the web at that point. That capability didn't arrive for another year or so.
Is there something important that I am missing?