It would be neat to add some randomness-measuring statistics of the last N bits: like counts of 1s and 0s; digraph counts for 00, 01, 10, and 11; length of longest 0/1 repetitions; or more sophisticated tests.
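A sketch of what such a tally might look like over the last N bits, treated here as a string of '0'/'1' characters (the helper name is made up for illustration, not part of the site):

```javascript
// Hypothetical stats helper: tally 1s/0s, digraph counts, and the
// longest run of identical bits over a string of '0'/'1' characters.
function bitStats(bits) {
  var stats = {
    zeros: 0,
    ones: 0,
    digraphs: { '00': 0, '01': 0, '10': 0, '11': 0 },
    longestRun: 0
  };
  var run = 0;
  var prev = null;
  for (var i = 0; i < bits.length; i++) {
    var b = bits[i];
    if (b === '1') { stats.ones++; } else { stats.zeros++; }
    if (i > 0) { stats.digraphs[bits[i - 1] + b]++; } // count overlapping pairs
    run = (b === prev) ? run + 1 : 1;
    if (run > stats.longestRun) { stats.longestRun = run; }
    prev = b;
  }
  return stats;
}
```

For example, `bitStats('0011101')` reports 3 zeros, 4 ones, and a longest run of 3.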
…isn't an "ascii stream". You can look at the source code for "covertBinaryToAscii", and it's really converting 8 bits of random data at a time into one of the first 256 Unicode code points.
(Also, you can avoid the need to escape special characters by using textContent[1] instead of innerHTML, although that comes with the catch of only being supported in IE 9 and later…)
It is ASCII (edit: ok, as per the replies, more like poorly-defined Extended ASCII since it uses 8 bits instead of 7). For every character that has an ASCII value, the Unicode code point and the ASCII value of that character are the same. Unicode was designed with backward compatibility in mind.
What you say about Unicode and ASCII sounds correct. But many of those characters are not actually ASCII. See this table[0]; none of "ÿª§ò" are on it.
I was going to say that you've pointed to the 7-bit ASCII chart and you want the 8-bit Extended ASCII chart. But the Wikipedia chart for Extended ASCII doesn't have those characters either, so maybe it's extended ASCII plus code pages?
As the Wikipedia article states, there is no single thing called "the" extended ASCII. It is a nonstandard term that can refer to any of the multitude of 8-bit encodings that contain ASCII as a subset.
I played with this a few days ago when it was submitted as "Show HN" (but only got 3 points).
It was originally vulnerable to JavaScript injection (with potential for XSS), since the "ascii stream" had no protection against arbitrary HTML, which could be injected by binary-encoding it and sending it with a little script. I only had time for a PoC injecting goat pictures; the next day, I tried making a more fleshed-out potential-XSS demonstration, but the author fixed the vulnerability while I was playing with it.
There's still a (disputable) glitch that I'd like to point out: the ASCII stream is different for everyone, since it's rendered client-side and simply uses the first bit available on page load as the reference point for where a byte starts.
This "frameshift" glitch obviously marred my injection demonstration, but it's still somewhat annoying for anyone who wants to demo their l33t skills, as other viewers only have a 1/8 chance of seeing any binary-encoded text that we send. So currently, the ugly way of forcing a message to be seen is to send it 8 times, once for every possible alignment. (But why would anyone do that, right?)
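A sketch of that ugly workaround, assuming bits are submitted one at a time via a socket.io 'input' event (an assumption for illustration, not the site's documented API), and assuming your bits arrive contiguously without other users' bits interleaved:

```javascript
// Hypothetical helpers: encode text as 8-bit binary, then build 8 copies,
// the k-th prefixed with k padding bits, so that at least one copy lines
// up with any given viewer's byte boundary.
function toBits(text) {
  return text.split('').map(function(c) {
    return c.charCodeAt(0).toString(2).padStart(8, '0');
  }).join('');
}

function allAlignments(text) {
  var bits = toBits(text);
  var copies = [];
  for (var k = 0; k < 8; k++) {
    copies.push('0'.repeat(k) + bits); // k padding bits shift the frame by k
  }
  return copies;
}

// Each copy would then be emitted bit by bit, e.g.:
// allAlignments('hi').forEach(function(c) {
//   c.split('').forEach(function(bit) { socket.emit('input', bit); });
// });
```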
Edit: It also looks like nothing is working anymore today.
There are lots of tiny bugs in this experiment. I actually started it from the standard Node.js chat tutorial and figured it would be fun to turn it into a femto-blogging platform, as described on the about page.
I'm still working on fixing bugs and improving rate limiting.
Or you're lucky. One time a friend of mine was describing his fool-proof plan to win at Roulette. I jokingly asked, "What, double your bet when you lose?". He replied—in all seriousness—"No, triple it!"
I then argued with him about the money he would likely lose implementing his plan. He said, "What are the odds the ball will land on red five times in a row?" (We were ignoring the existence of the green 0 and 00.) I took out a quarter, flipped it seven times, and it landed heads every time. This happened straight away.
That was a random sequence, yet it was all 0s. I'd like to think he was lucky it happened that way, since it convinced him to abandon his plan.
But I was also lucky. I had intended to demonstrate this, and was prepared to keep flipping the coin hundreds of times until the run of 0s came up. You could say I was predicting the next result correctly 100% of the time on those first 7 flips. But my ability to predict the results didn't show their non-randomness; instead it showed my "luckiness". Which really means they weren't predictions at all, I guess.
Just because it's biased towards 0 doesn't mean there's no entropy in it. Even the raw output of an entropy source based on radioactive decay or thermal noise is biased.
To generate highly random output that appears independent of the source and uniformly distributed, a randomness extractor [1] has to be applied. The best known is the Von Neumann extractor.
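For reference, the Von Neumann extractor is simple enough to sketch in a few lines: read the biased input two bits at a time, emit 0 for "01", 1 for "10", and discard "00" and "11". This removes bias as long as successive input bits are independent.

```javascript
// Von Neumann extractor: consume input bits in non-overlapping pairs;
// "01" emits 0, "10" emits 1, and equal pairs ("00", "11") are discarded.
function vonNeumann(bits) {
  var out = '';
  for (var i = 0; i + 1 < bits.length; i += 2) {
    var a = bits[i];
    var b = bits[i + 1];
    if (a !== b) { out += a; } // emit the first bit of an unequal pair
  }
  return out;
}
// vonNeumann('1101001011') === '01'
```

Note the cost: on average it discards at least half the pairs, and a heavily biased source (like a stream of almost all 1s) yields very little output.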
You're talking about a cryptographically secure random source. A normal random sequence really can contain anything, including a long repeating string of 1s or 0s. The infinite monkey theorem proves it :)
this is the least random "random" binary stream I've ever seen in my life. at the moment it's being gamed and people are submitting 99% 1's. (it looks like 111111111111111111111111111111110111111111111111111 with just a few 0's thrown in.)
it would work a lot better if the server generated a pseudorandom bit and sent it to the user, the client XORed their choice with it before submitting, and then the server, out of spite, undid that XOR in half the cases server-side.
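A sketch of that proposal as I read it (all names are made up for illustration; whether this fully debiases a coordinated flood is debatable, since only half the submissions get re-randomized):

```javascript
// Hypothetical sketch of the proposed scheme, not the site's code.
// Server hands each client a pseudorandom mask bit.
function makeMaskBit() {
  return Math.random() < 0.5 ? 1 : 0;
}

// Client XORs its chosen bit with the mask before sending.
function clientSubmit(choice, mask) {
  return choice ^ mask;
}

// Server, "out of spite", undoes the mask in half the cases and keeps
// the masked (i.e. effectively randomized) bit in the other half.
function serverReceive(bit, mask) {
  return Math.random() < 0.5 ? bit ^ mask : bit;
}
```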
// open a socket.io connection and, every 10 ms, emit the bits of one
// byte ("00001001") to the server one at a time
var b = io.connect();
setInterval(function() {
  "00001001".split('').forEach(function(n) { b.emit('input', n); });
}, 10);
I found it interesting to see how hard I had to hit the server to (a) get all my requests in before another person's request landed in the middle, and (b) have my 8 bits (or however many I was sending; I didn't have much luck past 40 bits) start at the beginning of an ASCII character, rather than being offset by n bits already in the stream.
Thankfully WebSockets/socket.io ensures ordering, so I didn't have to worry about bits arriving out of order like I would if it were using plain HTTP requests.