_If_ no multibyte character contains a byte that could be mistaken for an ASCII character, the "process the ASCII-compatible text" step doesn't touch any multibyte characters, so they round-trip.
Of course, this breaks down if multi-byte characters can contain byte values in the ASCII range. That can break HTML or TeX, for example.
If you're looking at legacy 8-bit encodings, you'll be OK; most (all?) of those use ASCII for the first 128 values, and if not (EBCDIC), you're pretty screwed anyway. For UTF-8 you're OK too -- every byte of a multibyte sequence has the high bit set. For UCS-2 or UTF-16, you're likely to screw things up, since their code units contain ordinary ASCII byte values (in UTF-16BE, 'A' is the byte pair 00 41).
UTF-8 is ASCII-compatible. Every code point with the high bit clear (0x00-0x7F) is represented identically to ASCII. All code points >= 0x80 are represented as multiple bytes, each with the high bit (0x80) set.
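Both claims are easy to check mechanically. A quick sketch in Python (just an illustration; the property holds in any language):

```python
# ASCII code points 0x00-0x7F encode to the identical single byte.
for cp in range(0x80):
    assert chr(cp).encode("utf-8") == bytes([cp])

# Code points >= 0x80 encode to multiple bytes, all with the high bit set.
for cp in (0x80, 0xE9, 0x20AC, 0x1F4A9):  # samples: é, €, 💩
    encoded = chr(cp).encode("utf-8")
    assert len(encoded) > 1
    assert all(b & 0x80 for b in encoded)
```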
UTF-8 is a very elegant construct for Unix-type C systems — you could basically reuse all your nul-terminated string APIs.
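Part of why that reuse works: UTF-8 never emits a 0x00 byte for any code point other than U+0000 itself, so a nul terminator can't land in the middle of a character. A sketch checking this exhaustively (Python here purely for brevity):

```python
# No code point other than U+0000 encodes to a sequence containing 0x00,
# so C-style nul-terminated string handling keeps working.
for cp in range(1, 0x110000):
    if 0xD800 <= cp <= 0xDFFF:  # surrogates are not encodable in UTF-8
        continue
    assert 0 not in chr(cp).encode("utf-8")
```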
This encodes to the byte sequence F0 9F 92 A9 in UTF-8. Notice that every one of these bytes has a value > 0x7F, which means they're all outside the ASCII range.
That's one of the useful properties of UTF-8: you know that a code point requiring multi-byte encoding will never contain any bytes that could be confused for ASCII, because every byte of a multi-byte code point will be > 0x7F.
Which in turn means that any processing mechanism that only alters bytes in the ASCII range, and passes all other bytes through unmodified, is guaranteed not to modify or corrupt any multi-byte UTF-8 sequences.
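As a concrete sketch of such a mechanism (Python as the illustration language; `ascii_upper` is a made-up example transform, not a standard API), uppercasing ASCII letters byte-by-byte leaves every multi-byte character intact:

```python
def ascii_upper(data: bytes) -> bytes:
    """Uppercase ASCII a-z at the byte level; pass all other bytes through."""
    return bytes(b - 32 if 0x61 <= b <= 0x7A else b for b in data)

text = "héllo wörld 💩"
out = ascii_upper(text.encode("utf-8")).decode("utf-8")
# Only the ASCII letters changed; é, ö, and 💩 survived untouched.
assert out == "HéLLO WöRLD 💩"
```

The transform never sees a byte >= 0x80 it could mistake for a letter, which is exactly the guarantee described above.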