Hmmmm. These are just my thoughts off the cuff:
- For straight-ahead performance work that isn't specifically concerned with the issue of loading the data (from wherever), smaller (1GiB) sectors are okay.
- When considering optimization for space so that large sectors can be replicated, needing 128GB of RAM for >1GiB sectors is obviously problematic from a normal replication perspective. However, if we consider the attacker who wants to replicate fast at any cost, then maybe it's okay.
Based on this, we could probably focus on smaller sectors as a reasonable representation of the problem. This has the unfortunate consequence that the work is less applicable to the related problem of speeding up replication even when memory must be conserved to some extent.
I guess as a single datum to help calibrate our understanding of how R2 scales, it would be worth knowing exactly how much RAM is required for both 1GiB and (I guess) 2GiB sectors. If the latter really fails with 128GB of RAM, how much does it require not to? If the work you're already doing makes it easy to get this information, it might help us reason through this. Otherwise, though, I don't think you should spend much time on it or go out of your way to perform this investigation.
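To show the kind of calibration I mean, here's a back-of-envelope sketch. It assumes memory scales linearly with sector size, which is purely an assumption for illustration (R2's actual scaling may be quite different), and the 1GiB measurement used is a placeholder, not a real number:

```python
GIB = 1 << 30

def projected_ram(sector_bytes, ram_bytes_per_sector_byte):
    """Project RAM needed for a sector, assuming linear scaling.

    The linear-scaling model is an assumption for illustration only.
    """
    return sector_bytes * ram_bytes_per_sector_byte

# Hypothetical datum: suppose a 1GiB sector were measured to need 40GiB RAM.
ram_per_byte = (40 * GIB) / (1 * GIB)  # 40 bytes of RAM per sector byte

for size_gib in (1, 2):
    need = projected_ram(size_gib * GIB, ram_per_byte)
    print(f"{size_gib}GiB sector -> ~{need / GIB:.0f}GiB RAM projected")
```

Even two real data points (1GiB and 2GiB) would let us replace the assumed linear model with something measured.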
Others may feel differently about any of this.