Throwing money at RAM because most graph databases haven't figured out how to use the disk effectively is a waste. It's far cheaper to find a different system, one that isn't a graph database, that does the job than to buy all the RAM in the universe.
A good example would be the graph of Wikipedia links. About 100 million edges among 5 million nodes, last I checked. The nodes have large differences in degree.
The raw data for this is not the slightest bit large; we're only talking about gigabytes. But merely importing it would absolutely destroy Neo4j, to say nothing of running an interesting algorithm on it that would actually justify using a graph database. And yet Neo4j seems to be everyone's favorite open-source graph database for some reason.
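To see why "gigabytes" is the right order of magnitude, here's a back-of-envelope sketch (my own arithmetic, not from the original) of what ~5 million nodes and ~100 million edges cost in a plain edge list versus a compact CSR (compressed sparse row) layout:

```python
# Back-of-envelope memory estimate for the Wikipedia link graph.
# Figures from the text: ~5 million nodes, ~100 million edges.
nodes = 5_000_000
edges = 100_000_000

# Raw edge list: two 32-bit node IDs per edge.
edge_list_bytes = edges * 2 * 4

# CSR: one 32-bit target ID per edge, plus a 64-bit offset per node
# (+1 sentinel offset to mark the end of the last adjacency list).
csr_bytes = edges * 4 + (nodes + 1) * 8

print(f"edge list: {edge_list_bytes / 1e9:.2f} GB")  # 0.80 GB
print(f"CSR:       {csr_bytes / 1e9:.2f} GB")        # 0.44 GB
```

Under half a gigabyte in CSR form, i.e. the whole graph fits in the RAM of a laptop, which is exactly why choking on it is inexcusable.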