Proud to update: the new version with the PR worked flawlessly the first time! It took 10 minutes (stats below give context on the job size). This new version is much better at informing you what step is being taken; I really like the additional reporting. The old process just seemed to hang without reporting its status (my previous 12-hour adventure ended with an "error timeout to onedrive" after 8 hours, so I wasn't sure whether it was still going or not). Well done, this makes restic much, much stronger!

Unused size after prune: 0.28% of total size

---

I am really looking forward to the fix to prune, but this original thread was comparing prune on Restic to Duplicacy. This is just based on casual observation and may be wrong.

In Duplicacy the remote file names are a hash of the contents, and indexes are cached locally. After doing a full directory listing of the remote side, and possibly downloading some "snapshot" files created by other hosts writing to the same repository, the prune command has everything it needs to determine what to prune.

**It is more willing to trade off wasted space for performance**

Because of Duplicacy's chunking model there are no packs like restic's, so prune never has to deal with partially populated packs. Instead, the backup operation always creates fully populated chunks, and the "snapshot" equivalent lists the chunks needed for each backup. Chunks that are no longer referenced become stale. So if a backup changes a small file in the middle of an existing chunk, a whole new chunk is uploaded and the old chunk becomes stale. The trade-off on chunk size is internal fragmentation for large chunks versus external fragmentation and large indexing metadata for small chunks.

**Backups and prunes don't need to be locked**

Backups in Duplicacy are lock-free. The only thing a backup command can do is add files to a repository, so parallel backups have no problem running together.
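The model described above can be sketched in a few lines. This is illustrative only (the function names and the SHA-256 choice are mine, not Duplicacy's actual scheme): backups upload chunks whose remote file name is a hash of their contents, each "snapshot" file lists the chunk names it needs, and prune reduces to a set difference against the remote directory listing.

```python
import hashlib

def chunk_name(data: bytes) -> str:
    # Content addressing: the remote file name is a hash of the chunk's
    # contents, so identical chunks from parallel backups collide harmlessly.
    return hashlib.sha256(data).hexdigest()

def find_stale(remote_chunks: set, snapshots: list) -> set:
    # A snapshot is just the list of chunk names a backup needs; after a
    # full listing of the remote side, prune is pure set arithmetic.
    referenced = {name for snap in snapshots for name in snap}
    return remote_chunks - referenced

# Two backups share chunk a; chunk b belonged to a forgotten backup,
# so it is no longer referenced and becomes stale.
a = chunk_name(b"raw-file-chunk")
b = chunk_name(b"edited-chunk")
stale = find_stale({a, b}, [[a], [a]])
```

Because a backup only ever *adds* content-addressed files, nothing a concurrent backup does can invalidate this computation, which is why no locking is needed between backups.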
I just ran restic prune / forget on my local Windows PC on a 279GB repository, which is sitting on a locally attached SATA 7200RPM spinning HGST disk. This PC is an i7 2600 with 16GB RAM and the OS on an SSD; it's not a new PC, but it's fast enough for everything I do, including video editing. The repository backs up photos and videos: 40,000 files, about 45% very small xmp metadata files, 45% raw files of about 20MB each, 8% jpeg files of a few MB, and 2% videos that vary between 20MB and 500MB.

The "forget" command took 10 seconds and the "prune" command took 6 minutes. I could see restic.exe was using all the CPU time; the virus scanner wasn't doing anything, and no other process was doing much. My execution log from the console is below:

```
restic 0.9.5 compiled with go1.12.4 on windows/amd64
restic.exe -repo c:\Photos forget -keep-daily 7 -keep-weekly 8 -keep-monthly 24
repository xxx opened successfully, password is correct
processed 226553 blobs: 0 duplicate blobs, 0 B duplicate
find data that is still in use for 22 snapshots
found 226547 of 226553 data blobs still in use, removing 6 blobs
will delete 1 packs and rewrite 0 packs, this frees 78.879 KiB
```

---

Can you please do the following, showing us the output of each step?

1. Copy your restic repo onto a local disk (SSD if you have enough capacity, a spinning disk is fine).
2. Run the forget/prune across the network, and provide the log, similar to below.
3. Run the forget/prune on the copy on the local disk, and provide the log, similar to below.
4. Run a disk benchmarking program against your local disk and NAS. I'd like to see throughput for both large files and with random I/O.

When you run restic prune, please include the "-v" flag.
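For step 4, a dedicated benchmarking tool gives the most trustworthy numbers, but the large-file half of the comparison can be approximated with a short script. This is a rough sketch (the function name and paths are mine, not from the thread), measuring sequential write throughput to whatever path you point it at:

```python
import os
import time

def seq_write_mib_per_s(path, size_mb=32, block_kb=1024):
    # Write size_mb of random data in block_kb blocks, fsync at the end,
    # and return the achieved throughput in MiB/s.
    buf = os.urandom(block_kb * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb * 1024 // block_kb):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    os.remove(path)
    return size_mb / elapsed

# Compare a local disk against the NAS by running it on both, e.g.:
# print(seq_write_mib_per_s(r"C:\temp\bench.bin"))
# print(seq_write_mib_per_s(r"\\nas\share\bench.bin"))
```

Random I/O, which matters just as much for prune workloads, is better measured with a purpose-built tool, since OS caching makes naive small-read benchmarks misleading.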