Large amounts of data are being backed up from multiple hosts. When sizing the number of
deduplication stores, which best practice will improve performance by simplifying each store and
reducing the amount of data each contains?

A. Create many small deduplication stores
B. Create one large deduplication store 200 GB larger than the total amount of data
C. Separate them across several libraries and, consequently, into multiple backup jobs
D. Use several large cartridges greater than 800 GB
Explanation: