The primary function of this database would be to act as a sort of "global cache." When a member computer is about to perform a computation, it would first check this database. If the computation has already been done, the computer would simply fetch the precomputed result instead of redoing the work. The underlying goal is to save compute resources globally.
N.B. this doesn't necessarily mean we precompute anything, but we do store everything we have computed so far. The cache hit rate might be very low for a while, but one would think it'd eventually climb. The way we're going about this today (throwing more GPUs at the problem) just seems awfully wasteful to me.
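To make the flow concrete, here's a rough Python sketch of the check-then-compute-then-store loop I have in mind. The in-memory dict standing in for the shared database and the hash-of-function-name-plus-inputs key are just illustrative assumptions; identifying "the same computation" is exactly the hard part I'm glossing over.

```python
import hashlib
import json

store = {}  # stand-in for the global, shared result database

def cache_key(func_name: str, args) -> str:
    # Content-address the computation: same identity + same inputs => same key.
    # (Hypothetical scheme; a real system would have to key on the exact code.)
    payload = json.dumps({"func": func_name, "args": args}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def cached_compute(func, args):
    key = cache_key(func.__name__, args)
    if key in store:           # global cache hit: skip the computation entirely
        return store[key]
    result = func(*args)       # miss: do the work once...
    store[key] = result        # ...and store it for every future caller
    return result

def expensive(n):
    return sum(i * i for i in range(n))

print(cached_compute(expensive, (10_000_000,)))  # computed on first call
print(cached_compute(expensive, (10_000_000,)))  # fetched from the cache
```

In a real deployment the dict would presumably be a distributed, content-addressed store, and the key would need to capture the actual code being run (not just a name), which is where things like nondeterminism and versioning start to bite.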
Has anyone thought about/done any research on this?