Graviton with Nitro 4 has been quite pleasant to use. With the Rust aarch64 musl static target and rust-lld, I can build monolithic ELFs that work not just on my Android phone via `adb push` and `adb shell` but also on AWS.
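For reference, a minimal sketch of that workflow (the crate name `myapp` and the device path are hypothetical, and your project may need extra `RUSTFLAGS`):

```
$ rustup target add aarch64-unknown-linux-musl
$ RUSTFLAGS="-C linker=rust-lld" cargo build --release --target aarch64-unknown-linux-musl
$ adb push target/aarch64-unknown-linux-musl/release/myapp /data/local/tmp/
$ adb shell /data/local/tmp/myapp
```

The musl target defaults to static linking, which is what makes the same binary portable across Android and Amazon Linux.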
AWS with Nitro v3+ iirc supports TPM, meaning I can attest my VM state via an Amazon CA. I know ARM has been working a lot with Rust, and it shows - binfmt_misc with qemu-user means I often forget which architecture I'm building/running/testing, as the binaries seem to work the same everywhere.
Wouldn't the business impact always be performance per dollar from the client's perspective? This reads like a document that's meant to convince AWS management to invest in the new chip, focusing on how it's maximally flexible for sale, not a document to convince customers to use it ...
> This reads like a document that's meant to convince AWS management to invest in the new chip, focusing on how it's maximally flexible for sale, not a document to convince customers to use it ...
AWS management is the customer.
Higher compute density, lower infrastructure costs, and higher performance. Those are data center selling points.
The truth of the matter is that your average external customer doesn't really care about CPU architectures if all they are doing is using serverless offerings, especially AWS Lambda handling events. They care about what it costs them to run the services. AWS management decides whether the return on their investment is paying off, lowering costs and improving margins.
Can someone please confirm, is the Graviton an ARM-based CPU or something different? The page mentioned ARM, but I was still a little confused. Are we able to launch a Debian/Fedora using the CPU, or is meant for something different?
As far as I'm aware, if it's called an ARM CPU it uses either the v7 or v8 instruction set, possibly with extra instructions (changes to the ARM die) or a tightly integrated coprocessor (via an AXI bus, adjacent to the ARM silicon on the same substrate).
There are different Cortex series that optimize for different things: A and X for applications (phones, cloud compute, SBCs, desktops and laptops), M for microcontrollers, and R for real-time.
This doesn't apply if the company holds an ARM founder and/or architecture license (I think that's what they're called). E.g. Apple and their M-series SoCs are not Cortex cores, but share the base instruction set - and only because Apple wants them to.
Yup, Amazon Linux supports the 6.11? kernel on aarch64. With most toolchains, if you target Linux aarch64 statically, they will produce executables that run on Amazon Linux aarch64, Android, and set-top boxes with 64-bit chips running Linux 3+. It's surprising how many devices a static aarch64 ELF will run on.
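To illustrate the binfmt/qemu-user point upthread: on a Debian-ish x86_64 host you can run such a binary transparently (package names are Debian's, the binary name `myapp` is hypothetical, and the `file` output is abbreviated):

```
$ sudo apt install qemu-user-static binfmt-support
$ file ./myapp
./myapp: ELF 64-bit LSB executable, ARM aarch64, statically linked
$ ./myapp
```

Once qemu-user-static registers its binfmt_misc handlers, the kernel hands any aarch64 ELF to qemu-aarch64-static automatically, so the foreign binary launches like a native one.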
Not really: burstable (“t”) instances haven't been updated in years. The current generation (“t4g”) still uses Graviton2 processors. I get the impression that they would vastly prefer cost-conscious users to use spot instances.
Ah, thank you for pointing these out! I'd missed the introduction of “flex” instance types (apparently in May last year[0] – still long overdue relative to the introduction of T4g in September 2020[1]). Curious that so far, they all appear to be Intel-based (C7i, M7i, C8i, M8i, and R8i). M7i-flex instances also cost 45% more than the corresponding T4g instances. That's sort of understandable, as the generational improvements probably bring more than 45% better performance for most workloads, but it also makes them harder to justify for the sorts of long-running, mostly-idle duties they're being touted for.
If you're interested in the underlying technology of flex, there are some re:Invent talks from last year on YouTube where they acknowledge it's based on VM live migration - I think that's the first public reference to AWS using migration in their products.
I suspect the burstable types were always priced too cheaply and were more about attracting the cheap market segment, which they don't need now in the days of AI money.
Burstable pricing gets complex quickly when you add in the option to burst to full usage. Flex seems a lot simpler, which is great.
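To make the "complex quickly" point concrete, here's a toy credit-balance calculation. All the numbers are made-up illustrative rates, not AWS's published ones:

```shell
# Toy burstable-instance credit math; every number here is illustrative.
credits_per_hour=24   # hypothetical credits earned per hour
vcpus=2
# One credit buys one vCPU-minute at 100% utilisation, so full burst on
# 2 vCPUs consumes 2 * 60 = 120 credits per hour.
spend_at_full=$((vcpus * 60))
net_at_full=$((credits_per_hour - spend_at_full))
echo "net credits per hour at full burst: $net_at_full"
```

Even in this simplified model you're tracking an accrual rate, a burn rate, and a balance just to know what sustained load costs - flex pricing skips all of that bookkeeping.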