HAMMER VFS - Fix over-enthusiastic cluster read
* The block device I/O was over-enthusiastic in calling cluster_read()
  and could wind up creating buffers of the wrong size, which would
  then overlap the address space of later buffer requests made with
  the right size.
This could result in the corruption of large-data (64K) blocks,
usually causing a hammer reblock to fail with a CRC error but
not corrupting the actual filesystem on-media.
  Metadata usually could not be corrupted by this unless the
  cluster-read happened to cross a large-block (8MB) boundary.
* Particularly easy to reproduce with the dm_crypt module due to
crypt overheads.
* Fixed by disallowing read-aheads in the large-data zone (the only
  zone that can contain a mix of 16K and 64K blocks), and by ensuring
  that any other cluster_read does not cross a large-block boundary.