It's my first time seeing the package, but from the docs it looks like it implements LSA. The major difference is that word2vec dramatically outperforms LSA on a variety of tasks (http://datascience.stackexchange.com/questions/678/what-are-...). In my experience, the vector representations LSA produces can be underwhelming and perform poorly on downstream tasks. I can't comment on the Random Projection and Reflective Random Indexing techniques SemanticVectors implements.
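For anyone unfamiliar with what LSA actually does, here's a minimal sketch using only numpy: a truncated SVD of a term-document count matrix, with term similarity read off the reduced space. The corpus and dimensionality are made up for illustration; real LSA pipelines typically apply tf-idf weighting first.

```python
import numpy as np

# Toy term-document count matrix (rows = terms, columns = documents).
# The data is invented purely to illustrate the mechanics.
terms = ["cat", "dog", "pet", "stock", "market"]
X = np.array([
    [2, 1, 0, 0],   # cat
    [1, 2, 0, 0],   # dog
    [1, 1, 0, 0],   # pet
    [0, 0, 2, 1],   # stock
    [0, 0, 1, 2],   # market
], dtype=float)

# LSA: truncated SVD of the term-document matrix, keeping k latent dims.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
term_vecs = U[:, :k] * s[:k]   # k-dimensional term vectors

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Terms that co-occur across similar documents land close together
# in the latent space; unrelated terms end up near-orthogonal.
print("cat ~ dog:  ", cosine(term_vecs[0], term_vecs[1]))
print("cat ~ stock:", cosine(term_vecs[0], term_vecs[3]))
```

In this toy example "cat" and "dog" come out highly similar while "cat" and "stock" do not, since the two groups of terms never share a document.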
Sorry, I should have specifically mentioned how it differs from random indexing/projection. I was immediately reminded of a similar inference example using random indexing/projection.
This link is about document distances, but it still compares the other techniques nicely: http://datascience.stackexchange.com/questions/678/what-are-...