Training a model on sound data from readily available public social-network posts and targeting their followers (which on, say, Facebook would include family and plenty of "olds") isn't a far-fetched use case for AI. I've built audio models used as audiobook narrators, and with them you can trivially generate a "frantic/panicked" voice clip saying "Help, it's [grandson], I'm in jail and need bail. Send money to [scammer]"
It's already happening in India. Voice fakes are working unnervingly well, amplified by the fact that old people who had very little exposure to tech have basically been handed a smartphone that controls their pension fund money through an app.
It's happening already. Recently a Brazilian woman living in Italy was scammed into thinking she was in an online relationship with a Brazilian TikToker: the scammers created a fake profile and sent her audio messages with the TikToker's voice cloned via AI. She sent them a lot of money for the wedding, and only discovered the con when she arrived in Brazil.
I'd wager it's largely disruptive and dangerous in a highly localized way due to the small percentage of folks doing it. Doesn't make it an acceptable practice though. One person "rolling coal" can temporarily blind 3 or 4 cars back and several across depending on wind conditions, etc.
In terms of NOx the factor can be around 100. If 1% of cars drive without cats, they produce half of all NOx emissions. In reality it's probably less, since there are other old cars on the road that also have higher emissions.
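A quick back-of-envelope check of that arithmetic (the 100x factor and 1% share are the assumptions from the comment above, not measured data):

```python
# Assumptions from the comment above: 1% of the fleet has no cat,
# and a car without a cat emits ~100x the NOx of one with a cat.
no_cat_share = 0.01
nox_factor = 100

no_cat_nox = no_cat_share * nox_factor   # 1.00 "units" of NOx
with_cat_nox = (1 - no_cat_share) * 1    # 0.99 units from everyone else
share = no_cat_nox / (no_cat_nox + with_cat_nox)
print(f"{share:.1%}")  # -> 50.3%
```

So under those assumptions the 1% of cat-less cars really do account for roughly half of fleet NOx.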
I live in a progressive state and unfortunately encounter "coal rolling" regularly. I also assume that's the point. Someone has to "own all the libs" as it were
However, I do agree that there aren't enough folks "rolling coal" in aggregate to really move any needles on planet-scale environmental impacts though. Just VERY unpleasant to be caught behind.
Oil in the exhaust in quantities high enough to produce acrid white smoke is extremely common on a number of ICE engines, e.g. from blown head gaskets on EJ25s (found in most Subarus before their Toyota involvement in 2010).
Subarus with bad head gaskets leak combustion gas into the cooling system, displacing the coolant. If you run a flat engine low on coolant it will score the upper cylinder walls, and then you will have oil consumption. This is fundamental to all flat engines. Oil in the exhaust is blue smoke; coolant is white.
I never got the "blue" and "white" thing; both look "white" to me. But you're right about Subarus also leaking coolant into the exhaust, which is easily identified by a "sweet" smell. Blown gaskets on ICE engines like EJ25s leak both oil AND coolant, no? I might be mixing up blown heads with cracked manifolds which often go hand in hand since temp extremes in engines fissure cast parts like the manifold. Either way the end result is the same: noxious fumes in the exhaust.
You've never seen an old vehicle blow a substantially "bluer than normal" cloud? That's what I'm talking about.
> Blown gaskets on ICE engines like EJ25s leak both oil AND coolant, no?
This is way, way too broad a statement. The Subaru EJ25 tends to leak oil externally from the valve covers. When they have head gasket problems it tends to be combustion gas getting into the coolant, which blows coolant out the expansion tank until equilibrium is reached. Typical head gasket failures cause some degree of that, though coolant mixing with oil is more common. Many V engines have intake gaskets that can leak coolant into the intake, or oil, or both.
Regardless, if you can taste coolant in the exhaust the car is basically at the point of "fix it now"
> I might be mixing up blown heads with cracked manifolds which often go hand in hand since temp extremes in engines fissure cast parts like the manifold.
A sizable minority of cars don't even use cast manifolds anymore. While it's possible for cast manifolds to crack in a way that makes them leak, that's rare; it's more common for them to crack their mounting tabs off. Steel exhaust tubing can and does break after many years of vibration, to say nothing of rust.
While cylinder heads can crack, it usually takes the kind of overheating that requires major work to fix, so just about nobody is driving around with a cracked head.
So I was going to reply to mention that all VWs sold in the US, at least for the last 10 years, use an iron block. I wanted to know if the EA888 (VW's go-to 4-cylinder engine) had a cast-iron block, so I asked ChatGPT.
"No — the VW EA888 Gen 3 engine block is not cast iron in the latest versions. engines.... use an aluminum alloy block, not traditional cast iron."
I know for sure it's iron, so I said "Are you sure it's an aluminum block on the gen 3?"
"No — the VW/Audi EA888 Gen 3 engine does not have an aluminum block"
My experience is limited to cars manufactured before 2008 (back to 1928). Maybe the new ones are all welded cold rolled steel tubes but I've only seen cast parts for intake and exhaust manifolds, nearly universally. Shit cracked all the time.
This isn't a new thing. Jeep used "factory headers" on the last years of the 4.0. Ford did welded manifolds on the 5.0 Explorers. Subaru went to welded steel for the EJ series engine in the 90s. GM had them on the LT5 in the early 90s. Just about every application that has the catalytic converter right up at the manifold used a welded one.
I recently worked on a 2008 EJ25 and it definitely had a cast manifold. Possible we're mixing up terms? Maybe you meant the EJs are cast steel instead of cast iron? When I read "welded steel manifold" I think of perfectly cylindrical tubes welded to a steel plate for the mounts. Likely we're saying the same thing and the manifolds are just cast parts that are welded together.
Note that the one I worked on was in a US Forester, so definitely not stock parts (the J is for Japan)
EDIT: link to example 2005 manifold listing on eBay
Having a viable alternative to the KHTML-lineage engines (Blink/WebKit) besides Gecko will be a boon for the web.
I haven't been super happy with Mozilla's management of Firefox, although it's my daily driver and a great browser. I just don't have high confidence it'll be viable long term from a corporate standpoint, especially since it's largely been bankrolled by Google, which makes Blink, so having another real alternative would be great. Having a sustainable, grassroots community project in Servo gives me hope again (after Mozilla dropped the ball on them...)
I had a home-brewed RAM expansion board (still do, actually, in a box somewhere...). I powered everything up a couple of years ago when my kids found it and asked what the heck it was. Still works.
My original VIC 20 machine that I had in the 80s still works as well, but a few things have been replaced along the way. I still have the same 24K expansion cart that my Dad built 40 years ago and it also still works.
Do you mean specifically the Frogger game, or in general? If you mean in general, it hopefully does sound very close to the original VIC 20. I actually reverse-engineered the VIC chip schematic from photos of the silicon die, so the sound emulation is based on what I worked out from that reverse-engineered schematic. Some of my discussion on that is covered here:
That's awesome! I had access to an electron microscope at my last makerspace and always wanted to decap a simple chip for reverse engineering. A close friend of mine did that with a SID and recreated it in Verilog with good results, which I always found fascinating. Great work on the VIC 20 side! And yeah, my VIC 20 was never the most stable machine, crashing frequently, but I also had a bunch of janky expansion boards attached, so YMMV...
I'm also using the knowledge gained from reverse engineering the VIC chip to contribute to a soon-to-be-released VIC chip replacement device called the PIVIC, powered internally by a Pico 2.
Sounds cool! Similar to the SID project, assuming you're also aiming for pin compatibility.
I'm curious why you chose the Pico platform with software emulation over something like a TinyFPGA, which could be near 100% gate-level compatible. I bet the sub-$3 iCE40 has enough gates?
I haven't really looked at the Pico 2 yet; maybe it's one of those new hybrid ARM+FPGA designs and you'd have the best of both worlds?
EDIT: sadly no CPLD/FPGA on the Pico 2 front, at least according to [1]. The Pico 2 does add new RISC-V cores (as coprocessors? I only skimmed...). So I guess you'd have to do a bunch of timer interrupts to keep your emulator clock-aligned if you're going pin compatible.
For a couple of reasons: The first is that there are already a couple of projects using FPGA to create a VIC chip replacement, e.g. Victor by Jon Brawn, and FATVIC by Thomas Lövskog. The main trigger to try a Pico 2 version was when I saw sodiumlightbaby's OCULA project for the Oric. The ULA in the Oric is the equivalent of the VIC chip in the VIC 20, i.e. the main custom chip. When it dawned on me that he was using the Pico 2 for the OCULA, I thought, "Hey! Why not try the same thing for the VIC chip?". So we've been collabing on it over the past 12 months. I think you're right though, that devices like the Victor that use FPGA will be able to get closer to 100% compatibility. The PIVIC will be an alternative that might not be 100% compatible but is very close and would suffice for most.
Yeah, the PIVIC is pin compatible and the same size as the original chip, so no overhanging bits; the PCB is no bigger than the original VIC chip.
The new graphics driver stack they're touting (capable of running unmodified modern Windows display drivers), along with x86_64 support landing, may result in increased interest in the project. They've already made a lot of progress with almost no resources as it is. It's truly an impressive project.
I haven't heard much about the ArduCopter (and ArduPilot) projects in a decade; are they still at it? I used to fly a quadrotor I built myself a while back, until I crashed it into a tree and decided to find cheaper hobbies...
Well, at least crashing drones into trees has never been cheaper, hahaha. It's super easy to get into nowadays, especially if it's just to play around with flight systems instead of going for pure performance.
Fellow old here. I had several 56k baud modems but even my USR (the best of the bunch) never got more than half way to 56k throughput. Took forever to download shit over BBSes...
Real analog copper lines were limited to roughly 28k, more or less the Nyquist limit. However, the lines at the time were increasingly being replaced with 64 kbit/s digital trunks that sampled the analog tone. So the 56k standard aligned itself with the actual sample times, and that allowed it to reach a 56 kbps rate (some timing/error tolerance still eats away at your bandwidth).
If you never got more than 24-28k, you likely still had an analog line.
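The arithmetic behind the 56k figure, as described above, comes out of the digital trunk's sampling parameters (the "one lost bit" per sample is the usual explanation via robbed-bit signaling and conversion margin; treat the numbers as illustrative):

```python
sample_rate = 8000        # PCM samples per second on the digital trunk
bits_per_sample = 8       # 8-bit logarithmic codewords (G.711)
usable_bits = 7           # roughly one bit per sample lost to signaling/noise

trunk_rate = sample_rate * bits_per_sample  # the 64 kbit/s digital channel
modem_rate = sample_rate * usable_bits      # the 56 kbit/s downstream cap
print(trunk_rate, modem_rate)  # -> 64000 56000
```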
56k was also unidirectional, you had to have special hardware on the other side to send at 56k downstream. The upstream was 33.6kbps I think, and that was in ideal conditions.
The special hardware was actually just a DSP at the ISP end. The big difference was that before 56k modems, we had multiple analog lines coming into the ISP. We had to upgrade to digital service (DS1 or ISDN PRI) and break out the 64k digital channels to separate DSPs.
The economical way to do that was integrated RAS systems like the Livingston Portmaster, Cisco 5x00 series, or Ascend Max. Those would take the aggregated digital line, break out the channels, hold multiple DSPs on multiple boards, and have an Ethernet port (or sometimes another DS1 or DS3 for a more direct uplink), with all those parts communicating inside the same chassis. In theory, though, you could break out the line in one piece of hardware and then have a bunch of firmware modems.
The asymmetry of the 56k standards was 2:1, so if you got a 56k link (the best you could get in theory, IIRC) your upload rate would be ~28k. In my experience the best you would get in real-world use was ~48k (so 48kbps down, 24kbps up), and 42k (so 21k up) was the most I could guarantee would be stable (bearing in mind "unstable" meant the link might completely drop randomly, not that there would be a blip here or there and all would be well again PDQ afterwards) for a significant length of time.
To get 33k6 up (or even just 28k8: some ISPs had banks of modems that supported one of the 56k standards but would not do more than 28k8 symmetric) you needed to force your modem to connect using the older symmetric standards.
Yeah, 28k sounds closer to what I got when things were going well. I also forget whether they were quoting lower-case 'k' (x1000) or upper-case 'K' (x1024) units/s, which obviously has an effect as well.
The lower-case "k" vs upper-case "K" thing is an abomination. The official notation is lower-case "k" for 1000 and "Ki" for 1024. It's an abomination too, but it's the correct abomination.
That's a newer convention, mostly because storage companies always (mis)represented their capacity... I don't think any ISPs really misrepresented k/K in kilobits/bytes.
> 56k baud modems but even my USR (the best of the bunch) never got more than half way to 56k throughput
56k modem standards were asymmetric, the upload rate being half that of the download. In my experience (UK based, calling UK ISPs) 42kbps was usually what I saw, though 46 or even 48k was stable¹ for a while sometimes.
But 42k down was 21k up, so if I was planning to upload anything much I'd set my modem to pretend it was a 36k6 unit: that was more stable, and up to that speed things were symmetric (so I got 36k6 up as well as down, better than 24k/23k/21k). I could reliably get a 36k6 link, and it would generally stay up as long as I needed it to.
--------
¹ Sometimes a 48k link would last many minutes then die randomly; forcing my modem to hold back to 42k resulted in much more stable connections
Even then, it required specialized hardware on the ISP side to connect above 33.6kbps at all, and it was almost never reliable. I remember telling most of my friends just to get, or stick with, the 33.6k options, especially considering the overhead a lot of those faster modems imposed; most were "winmodems" that burned a fair amount of CPU instead of using an actual COM/serial port. It was kind of wild.
Yep. Though I found 42k reliable and a useful boost over 36k6 (14%) if I was planning on downloading something big. If you had a 56k-capable modem and a less-than-ideal line, it was important to force it to 36k6, because failure to connect using the enhanced protocol would usually result in fallback all the way to 28k8 (assuming, of course, that your line wasn't too noisy for even 36k6 to be stable).
I always avoided WinModems, partly because I used Linux a lot, and recommended friends/family do the same. "But it was cheaper!" was a regular refrain when one didn't work well, and I got to pull out the good ol' "I told you so".
In case anyone else is curious, since this is something I was always confused about until I looked it up just now:
"Baud rate" refers to the symbol rate, that is, the number of pulses of the analog signal per second. A signal that has two voltage states can convey one bit of information per symbol.
"Bit rate" refers to the amount of digital data conveyed. If there are two states per symbol, then the baud rate and bit rate are equivalent. 56K modems used 7 bits per symbol, so the bit rate was 7x the baud rate.
Not sure about your last point, but in serial comms there are start and stop bits, and sometimes parity. We generally used 8 data bits with no parity, so in effect there are 10 bits per character including the start and stop bits. That pretty much matched the file transfer speeds achieved using one of the good protocols that used sliding windows to hide latency. To calculate expected speed, divide baud by 10 to convert from bits per second to characters per second; then there's a little efficiency loss due to protocol overhead. This is direct, without modems; once you introduce those, the speed could be variable.
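That rule of thumb as a tiny sketch (8N1 framing; the 0.95 efficiency figure below is an illustrative assumption, not a measured protocol overhead):

```python
def chars_per_second(bps, protocol_efficiency=1.0):
    # 8N1 framing: 1 start bit + 8 data bits + 1 stop bit per character
    bits_per_char = 1 + 8 + 1
    return bps / bits_per_char * protocol_efficiency

print(chars_per_second(9600))        # -> 960.0
print(chars_per_second(9600, 0.95))  # a little under 960 for overhead
```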
Yes, except that modern infra goes much further: WiFi 6 uses 1024-QAM, which is to say there are 1024 states per symbol, so you can transfer up to 10 bits per symbol.
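The general relation tying these comments together: with M distinct states per symbol you get log2(M) bits per symbol, so bit rate = baud rate x bits per symbol.

```python
from math import log2

def bits_per_symbol(states):
    # Each symbol distinguishes one of `states` levels -> log2(states) bits
    return log2(states)

print(bits_per_symbol(1024))        # 1024-QAM -> 10.0 bits per symbol
print(8000 * bits_per_symbol(128))  # 7 bits/symbol at 8000 baud -> 56000.0
```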
Yes, because at that time a modem didn't actually talk to another modem over a switched analog line. Instead, line cards digitized the analog phone signal, the digital stream was routed through the telecom network, and then converted back to analog. So the analog path was actually two short segments. The line cards digitized at 8kHz (enough for 4kHz of analog bandwidth) using a logarithmic mapping (u-law? a-law?), and they managed to get 7 bits reliably through the two conversions.
ISDN essentially moved that line card into the consumer's phone. So ISDN "modems" talked directly digital, and got to 64kbit/s.
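The logarithmic mapping mentioned above can be sketched as follows; this is the continuous mu-law curve (mu = 255, as in ITU-T G.711), not the full 8-bit codec:

```python
import math

MU = 255  # mu-law constant used in North American/Japanese telephony

def mu_law_compress(x):
    """Map a linear sample in [-1, 1] onto the logarithmic mu-law scale."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y):
    """Invert the compression: recover the linear sample."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

# Quiet signals get most of the code space: a 1% amplitude input
# already maps to roughly 23% of full scale.
print(round(mu_law_compress(0.01), 2))  # -> 0.23
```

The point of the curve is exactly what the parent describes: it spends the limited bits where telephone speech lives, which is why 8-bit logarithmic samples were good enough for voice.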
An ISDN BRI (basic rate over copper) actually had two 64kbps B channels; for POTS dialup, as an ISP you typically had a PRI with 23 B channels and 1 D channel.
56k only allowed one AD/DA conversion between provider and customer.
When I was troubleshooting clients, the problem was almost always on the customer side of the demarc, with old two-line wiring or insane star junctions being the primary source.
You didn't even get 33k on analog switches, but at least US West and GTE had ISDN-capable switches backed by at least DS# by the time the commercial internet took off. LATA tariffs in the US killed BRIs for the most part.
T1 CAS was still around, but in-channel CID etc. didn't really work for their needs.
33.6k still depended on DS# backhaul, but you could have POTS on both sides; 56k depended on having only one analog conversion.
As someone who started with 300/300 and went via 1200/75 to 9600 etc., I don't believe conflating signalling changes with bps is an indication of physical or temporal proximity.
Oh, I got the implication, but I think it was such a common mistake back then that I don't think it's age-related now. It's a bit of a trope to assume baud and bps mean the same thing, and people tend to prefer the more technical term even when it's not fully understood. Hence we are where we are with words like decimate, myriad, nubile, detox, etc., forcefully redefined by common (mis)usage. I need a cup of tea, clearly.
Anyway, I didn't think my throw-away comment would engender such a large response. I guess we're not the only olds around here!
No, just that confusing the two was ubiquitous at the time 14.4k, 28k, and 56k modems were the standard.
Like it was more common than confusing Kbps and KBps.
I mean, the 3.5" floppy disk could store "1.44 MB"... and by that people meant the capacity was 1,474,560 bytes = 1.44 * 1024 * 1000. Accuracy and consistency in terminology have never been particularly important to marketing and advertising, except that marketing and advertising are exactly where most laypersons first learn technical terms.
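The mixed-base arithmetic spelled out (nothing here beyond the numbers in the comment above):

```python
capacity = 1440 * 1024           # 1440 KiB, marketed as "1.44 MB"
print(capacity)                  # -> 1474560 bytes
print(capacity / 1_000_000)      # -> 1.47456 in decimal megabytes
print(capacity / (1024 * 1024))  # -> 1.40625 in binary mebibytes
```

In a consistent unit it is neither 1.44 decimal MB nor 1.44 binary MiB; the marketing number only works if you mix the two bases.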
I started out with a 2400 baud US Robotics modem, with my "ISP" being my local university, to surf Gopher and BBSes. When both baud rates and bits per second were being marketed side by side I kinda lost the thread, tbh. Using different bases for storage vs. transmission rates didn't help.
If it's not happening yet, it will...