U-Boot does good-enough UEFI emulation for most use cases. I find that I don't need native UEFI firmware; for most ARM devices I can just build U-Boot with UEFI support.
For example, right now I have an old armhf i.MX6 Wandboard Quad that runs:
U-Boot -> UEFI (with Secure Boot if desired) -> Systemd Boot (or Linux EFI Boot Stub) -> Debian (or other distro)
That same layout should be doable on any U-Boot¹ supported device.
Some ARM devices, such as the i.MX6, are strict about the placement of their boot firmware, in a spot where it would interfere with a normal GPT table. One solution to this is the special "--move-main-table" option in gdisk² so that the GPT doesn't clobber U-Boot. While technically GPT is optional as long as U-Boot can read your main partition, I still always set up GPT anyway, or Systemd Boot complains.
U-Boot is implementing ACPI, yes; there was a talk at FOSDEM this year about it for ARM64 machines. QEMU can apparently now boot an arm64 virt machine with only ACPI tables and no device tree. This is all very recent. When I was working on a vague equivalent of systemd-stub last year as a fun exercise in UEFI development, I settled on U-Boot for UEFI and QEMU (but with a virtualized device tree). I can't remember the specifics of what was unsupported in RuntimeServices when I was working on it. I'm like 99% certain it's not enough for SBSA compliance, at least.
Honestly, my opinion after all this was that UEFI is pretty convenient and nice, and I'm glad U-Boot has its own implementation, even if it isn't fully grown yet; UEFI as the executable spec plus a DTB for bringing up the SoC, with nothing else, was nicer than a lot of embedded boot flows I've handled.
You can, of course, assuming that your hardware supports eMMC with dedicated hardware boot partition(s). Some devices, like the one I used in my example, are in their default configuration actually booted from a MicroSD card as a regular mmc block device, which is quite common in hobbyist-targeted SBCs. In those cases you have to be cognizant of boot loader firmware locations if you want a GPT partition table, as most user guides and sample disk images for these systems assume an MBR-style partition table reserving only the first block, followed by system-specific boot blob(s) at the required offset and then a FAT32 partition somewhere thereafter with the OS. If the boot firmware blob starts before the 34th¹ block (with 512-byte sectors), then to set up GPT you move the main GPT table to after the firmware; otherwise you can just allocate the first partition as a reserved, protected area in GPT covering the firmware, so that it doesn't get written to and serves as a reference for future repartitioning.
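The block numbers above can be sanity-checked with a little shell arithmetic. This assumes 512-byte sectors and the default 128-entry partition table; the 1 KiB firmware offset is the i.MX6 boot ROM's requirement.

```shell
# GPT on-disk layout: protective MBR at LBA 0, GPT header at LBA 1,
# then the main partition table. With the default 128 entries of
# 128 bytes each, the table occupies 16 KiB = 32 sectors (LBA 2-33),
# so the first usable block is LBA 34.
sector=512
entries=128
entry_size=128
table_sectors=$(( entries * entry_size / sector ))   # 32 sectors
first_usable=$(( 2 + table_sectors ))                # LBA 34
echo "main table: LBA 2-$(( first_usable - 1 )); first usable: LBA $first_usable"

# An i.MX6 boot blob lives at a 1 KiB offset, i.e. LBA 2, which falls
# inside the main table's range -- hence the need to relocate it.
firmware_lba=$(( 1024 / sector ))
echo "firmware starts at LBA $firmware_lba"
```

So any firmware landing in LBA 2-33 collides with the default table, which is exactly the "before the 34th block" rule above.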
People interested in FUSE might also be interested in the CUSE companion (sub)project.
CUSE is userspace character device driver emulation. It allows you to emulate hardware without compiling a new kernel module. I recently used it to write a hardware device supporting ioctls in Python. I didn't find any good Python libraries that worked easily, and documentation was lacking, but it was easy enough that I ended up writing it using just the ctypes FFI library. The only part that wasn't immediately intuitive for me, as someone who has never written kernel drivers, is how ioctls require the buffers to be resized for each read and write, which means the calls come in pairs; luckily CUSE has a debug mode which shows all bytes in and out. CUSE was originally implemented to create userspace sound devices¹ but has also been used for things like custom TTYs. I used it to create a virtual SPI device. Hopefully someone finds this useful and the project gets more attention.
People interested in this project might also be interested in Cloudflare's WebRTC streaming service¹ as a cloud-hosted solution to this same problem. "Sub-second latency live streaming (using WHIP) and playback (using WHEP) to unlimited concurrent viewers." Using the same OBS WHIP plugin, you can just point to Cloudflare instead. Their target pricing model is $1 per 1000 minutes,² which equates to $0.06 per hour streamed.
They mention "$1 per 1000 minutes of video delivered"; does that mean per viewer? $0.06 per hour per viewer seems like a lot, although I have no idea if that's "competitive" or in line with actual bandwidth costs.
Per the direct quote in my parent comment, "unlimited concurrent viewers", I can only assume they mean exactly what it says, as taken from the text of both links ¹⁺²
Another project I would recommend people look into for live streaming over WebRTC, as an alternative to the Pion library used in this project, is Janus WebRTC Server. I use it for ingesting RTP streams I generate from USB webcams and then playing them with very low latency in the browser. It even supports serving multiple streams simultaneously. It also has a simple HTTP API for adding, updating, and removing streams on demand.
I went to your Discourse link. As Lorenzo was trying to say, you will want to get a better idea of what is going on with ICE candidate gathering. In Firefox there is an about:config setting that might help with this:
If you do not get any valid candidates, then it is most likely a misconfiguration of your browser, VPN, firewall, or network. Unfortunately this is most likely not a Janus thing, but I would need more information to know for sure.
I used the demo streaming.html code linked above as-is and host it statically on Nginx alongside Janus on a cloud VPS. As far as config goes, there is just the one file, /etc/janus/janus.plugin.streaming.jcfg, but you can leave it blank and just use the HTTP API¹ if you don't want to mess with that file, as the API has an option:
"permanent":true
that automatically generates/updates the config for you. You can substitute plain RTP for SRTP (secure) when testing, but I prefer the secure variant since the traffic goes out to a public VPS and it doesn't really incur any overhead for my source devices.
I wrote a little helper in shell using wget, openssl, dd, jq, and jo to make it easy to talk JSON to the API for the one-off configs I do. Here is an example of what I use, which demonstrates simulcast and SRTP ingestion for both H.264 and VP8 video streams as well as Opus audio. Just fill in the [ ]'s with your specifics. I then use ffmpeg to generate all the streams and pipe them to the appropriate ports for each simulcast level and target. If you use GStreamer, beware that the SRTP key format is different.
#!/bin/sh
server="https://[YOUR_JANUS_SERVER_HOST]/janus"
token(){ dd if=/dev/urandom bs=6 count=1 status=none|openssl base64;}
post_jo(){ first="$1";shift;wget --quiet --output-document - --post-data "$(jo -- "$@")" "$server$first";}
tx(){ post_jo "$@" transaction="$(token)";}
data_id(){ jq ".data.id//(null|halt_error(1))";}
message()( set -e
id="$(tx / janus=create|data_id)" # create janus session and store id
hn="$(tx "/$id" janus=attach plugin=janus.plugin.streaming|data_id)" # create plugin session and store id
tx "/$id/$hn" janus=message body="$(jo -- "$@")"
tx "/$id" janus=destroy >/dev/null
)
# example usage:
# list all streams
message request=list|jq .plugindata.data.list
# remove stream with id 666
message request=destroy id=666 secret=adminpwd permanent=true|jq
# create new stream
message request=create id=666 secret=adminpwd permanent=true name=[YOUR_STREAM_NAME] type=rtp media=["$(
jo type=video mid=v1 codec=h264 pt=96 simulcast=true svc=false port=5010 port2=5020 port3=5030 \
fmtp=level-asymmetry-allowed=1\;packetization-mode=1\;profile-level-id=42c01f
)","$(
jo type=video mid=v2 codec=vp8 pt=96 simulcast=true svc=false port=5012 port2=5022 port3=5032
)","$(
jo type=audio mid=a codec=opus pt=97 port=5000
)"] srtpsuite=80 srtpcrypto=zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz|jq # your srtp token needs to match your source
# show new stream info to verify configuration
message request=info id=666 secret=adminpwd|jq
# on streaming source device
ffmpeg [YOUR_INPUT_CONFIG HERE] -f rtp -srtp_out_suite SRTP_AES128_CM_HMAC_SHA1_80 -srtp_out_params zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz srtp://[YOUR_JANUS_SERVER_HOST]:[PORT]
# for gstreamer, showing alternative key format
gst-launch-1.0 [YOUR_INPUT_CONFIG HERE] ! rtpav1pay ! srtpenc key=cf3cf3cf3cf3cf3cf3cf3cf3cf3cf3cf3cf3cf3cf3cf3cf3cf3cf3cf3cf3 ! udpsink host=[YOUR_JANUS_SERVER_HOST] port=[PORT]
Use a free signalling server from the WebTorrent community. You can skip the torrent part of the implementation and just use the signalling; it's awesome. There are libraries you can use to get started. For me, the protocol was simple enough that I just use a small vanilla JavaScript implementation to talk to the websocket servers and generate the signalling messages. I wish more people knew about this and realized how easy it can be to bring WebRTC to their applications.
This is super cool and almost makes it possible to build PWAs that only need a dumb HTTP server to deliver the app as a bunch of static files while still letting users synchronize data between their devices. It still depends on the tracker, but if the user can change the tracker, it sounds like it's currently the best way to get clients to communicate with each other without depending on a server provided by the PWA.
Thank you for this! I knew this shit was done by someone already and I've spent two years resisting the urge to re-invent this wheel. p2pt is exactly what I knew was possible and have been looking for!
I am sorry if this comes off as negative, but every example provided on the site, when compiled and then fed into ShellCheck¹, generates warnings about non-portable and ambiguous constructs in the script. What exactly are we supposed to trust?
It seems ShellCheck errs on the side of caution when checking arithmetic expansions, and some of its recommendations are not relevant in the context they are given. For example, in `cat.sh`, one of the lines marked in red is:
In examples/compiled/cat.sh line 7:
: $((_$__ALLOC = $2)) # Track object size
^-- SC1102 (error): Shells disambiguate $(( differently or not at all. For $(command substitution), add space after $( . For $((arithmetics)), fix parsing errors.
^-----------------^ SC2046 (warning): Quote this to prevent word splitting.
^--------------^ SC2205 (warning): (..) is a subshell. Did you mean [ .. ], a test expression?
^-- SC2283 (error): Remove spaces around = to assign (or use [ ] to compare, or quote '=' if literal).
^-- SC2086 (info): Double quote to prevent globbing and word splitting.
It seems to be parsing the arithmetic expansion as a command substitution, which then causes the analyzer to produce errors that aren't relevant. ShellCheck's own documentation[0] mentions this in the exceptions section, and the code is generated such that quoting and word splitting are not an issue (because variables never contain whitespace or special characters).
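To illustrate why the warnings are benign, here is a standalone sketch of the same pattern (the names are hypothetical stand-ins, not Pnut's actual generated identifiers): a `:` no-op whose argument is an arithmetic expansion performing an assignment, where parameter expansion builds the target variable's name.

```shell
# hypothetical stand-ins for the generated state
set -- obj 42      # positional args: $1 = name, $2 = object size
_ALLOC=7           # allocation counter; _$_ALLOC names the cell _7

# the pattern ShellCheck flags: parameter expansion produces "_7 = 42",
# which the arithmetic expansion evaluates as an assignment; the `:`
# no-op discards the numeric result
: $((_$_ALLOC = $2))

echo "$_7"
```

Since the expanded values are always integers, the unquoted expansion that SC2046/SC2086 warn about can never actually split or glob.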
It also warns about `let` being undefined in POSIX shell, but `let` is defined in the generated shell script itself, so it's a false positive caused by the use of the `let` name specifically.
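For context, a `let` builtin can be supplied as a plain function in POSIX sh, which is presumably why the warning is spurious. A minimal sketch of the idea (hypothetical, not Pnut's actual definition):

```shell
# POSIX-sh stand-in for the `let` builtin: evaluate each argument as an
# arithmetic expression for its side effects (assignments), with `:`
# discarding the numeric result.
let() {
  for _expr in "$@"; do
    eval ": \$(( $_expr ))"
  done
}

# usage:
let 'x = 1 + 2' 'y = x * 3'
echo "$x $y"
```

ShellCheck only sees the call sites, so it reports the bashism without noticing the function definition that shadows it.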
If you think there are other issues or ways to improve Pnut's compatibility with Shellcheck, please let us know!
You might want to consider adding keyboard shortcuts for the keys 1-4, corresponding to the answer order, so that questions can be answered more quickly on desktop. I also agree with the others about spelling variations; maybe you want to include Modern Standard Arabic and potentially other dialect versions as well.
Having owned Dell's 5K for years, last year I finally upgraded to their 8K model. I could never go back to anything less. If you need it and can justify the cost, you will know; but I don't see either being justifiably true for very many people at this point. While higher refresh would be nice, 60Hz is more than adequate for me, as it's the screen real estate that matters, and you really don't have many options to choose from.
I would expect ROCm to work just fine with the Steam Deck, given that it apparently uses gfx1033. You probably need to specify the environment variable HSA_OVERRIDE_GFX_VERSION=10.3.0 and the corresponding gfx1030_20.ukdb MIOpen kernel. I do not own a Steam Deck, but I do have another RDNA2 card, an RX 6700 XT, which uses the similar gfx1031 ISA and works just fine. While I don't have an RX 570, I do have an RX 480, which is also gfx803 like your RX 570, so it should technically work; however, don't expect much in performance or capability, as most workloads expect more than your 4GB of VRAM and much more compute power. You would also need to use older versions of ROCm, as these older cards are deprecated; I think I had to use ROCm <= 4.3.1 if I remember correctly. What did you want to run specifically?
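A sketch of the override, untested on an actual Deck (the variable name is real ROCm configuration; whether 10.3.0 is the right value for gfx1033 is the assumption described above):

```shell
# Tell ROCm's HSA runtime to report the GPU as gfx1030, for which
# official kernels ship, instead of the Deck's gfx1033.
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# then launch the workload in the same environment, e.g. (assumed,
# adjust to your setup):
# rocminfo | grep gfx
# python3 your_pytorch_script.py
echo "$HSA_OVERRIDE_GFX_VERSION"
```

The override has to be set in the environment of the process that initializes ROCm, so export it before launching anything.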
¹ https://u-boot.org/blog/seeing-is-believing-video-support-la...