(Full disclaimer: I am the guy who did the comparison.)

All of these metrics are absolutely irrelevant in a backup solution, in the sense that nobody should be choosing a backup tool by its speed or deduplication ratio. Backup is by nature a background process; it's supposed to be slow and lightweight. Nobody cares if one can back up (or corrupt the datastore) 10x faster. Yes, it's nice that Duplicacy was and is way ahead of the competition in performance, but that is not the selling point in any way.

What does matter is stability in the general meaning of the term: resilience to datastore corruption, robust handling of network interruptions, and, most importantly, a clear architecture that inspires confidence in the feasibility of a simple and robust implementation.

Leaving aside the relevance of the HDD scenario in the first place: backing up to an HDD is a bad, artificial use case. Nobody should be doing that, and measuring it is therefore pointless. Some apps generate predominantly sequential IO and others predominantly random IO, and the latter get penalized unfairly on spinning disks, while backup to a cloud or a storage appliance carries no such penalty for random IO. If this arbitrary performance metric is of interest at all, performance testing should be done on a local SSD (one way to see the effect on your own target medium is sketched below). Those are the things that need to be compared and analyzed, not how fast an app runs gzip and copies files to the local hard drive.

A related point: the longer your backup history, the more fragile it becomes, because some of the tools mentioned create long chains of dependent backups. duplicity is old, creates exactly those long chains of fragile incremental backups, and requires the user to manually create full backups periodically to break the chain, which is obnoxious (the workflow is sketched below). The only benefit of duplicity (and of its frontend, Déjà Dup) is that both are readily available on most Linux distributions.

Of the two discussed here, Duplicati is written in C# and on *nix requires the Mono framework, a separate behemoth. It is in fact a rewrite of duplicity (why?!) in C#, and as such it suffers from the same inherent design flaws and downsides, plus it has more bugs. That project is ill-conceived and should not have existed in the first place. Duplicati is unstable, unreliable, and slow. Heck, it does not even have a stable version: 1.x is EOL and 2.0 is a permanent beta. There has been no stable version since forever, and entrusting it with your data is foolish. Why is it in the list to begin with? It can never be a serious contender.

I would not recommend Arq either, unless you thoroughly test it and prove to yourself that it works reliably in your circumstances: backup and restore, including network interruptions during both, datastore corruption, and concurrency mishaps (the kind of smoke test I mean is sketched at the end of this post). It did not work for me in the most basic and simplest use case (on a Mac), and after I filed numerous reproducible bugs with screencasts, support simply gave up with "no other customers report these problems".

Duplicacy is written in golang and produces self-contained, monolithic, native executables on every platform. It's faster than the nearest competitor by at least an order of magnitude, and besides the obvious flexibility and high performance, it is unique in supporting cross-machine lock-less deduplication. I recommend reading their design documents. There is no reason to even bother with the other two at that point. It's not even a question.
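To illustrate the cross-machine part: every machine backs up into the same storage under its own snapshot id, and chunks that already exist in the storage are simply not uploaded again, with no locks involved. A minimal sketch (the hostname, storage URL, and paths are made up):

```sh
# Machine A: point a repository at the shared storage and back up.
cd ~/projects
duplicacy init machine-a sftp://user@nas/duplicacy-storage   # hypothetical storage URL
duplicacy backup -stats

# Machine B: same storage, different snapshot id, no coordination needed.
cd ~/projects
duplicacy init machine-b sftp://user@nas/duplicacy-storage
duplicacy backup -stats   # chunks machine A already uploaded are skipped
```

Because chunks are addressed by their content hash, two machines uploading the same chunk concurrently is harmless, which is what makes the lock-less design feasible; their design documents explain this properly.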
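On the IO-pattern point: if you want to see how hard your particular target medium punishes random IO, a generic tool like fio makes it obvious without involving any backup software. A rough sketch (the target path is made up; drop --direct=1 if your filesystem objects to it):

```sh
# Large sequential writes, roughly what streaming big archive files looks like.
fio --name=seq --filename=/mnt/target/fio.tmp --rw=write --bs=1M --size=1g --direct=1

# Small random writes, the pattern that spinning disks punish hardest.
fio --name=rand --filename=/mnt/target/fio.tmp --rw=randwrite --bs=4k --size=1g --direct=1
```

On an HDD the second number collapses to whatever the seek rate allows; on an SSD, or against a cloud or object store, the gap mostly disappears, which is exactly why the HDD scenario distorts the comparison.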
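And on the chain fragility: this is roughly what the duplicity workflow looks like in practice (paths and retention values are made up). Every incremental depends on the initial full plus every incremental before it, so one bad volume breaks every later restore in the chain:

```sh
# Start a chain with a full backup.
duplicity full ~/data sftp://user@host/backup

# Subsequent runs default to incrementals appended to that chain.
duplicity ~/data sftp://user@host/backup

# You must break the chain yourself, e.g. by forcing a new full every 30 days...
duplicity --full-if-older-than 30D ~/data sftp://user@host/backup

# ...and you get to clean up old chains yourself, too.
duplicity remove-all-but-n-full 2 --force sftp://user@host/backup
```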
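Finally, the kind of smoke test I mean when I say "prove to yourself that it works": back up, restore into a scratch location, and compare trees. Sketched here with duplicacy since that is what I use; the names are hypothetical, and you should repeat the exercise while killing the network mid-backup and mid-restore:

```sh
# Restore the latest revision into an empty scratch directory...
mkdir /tmp/restore-test && cd /tmp/restore-test
duplicacy init machine-a sftp://user@nas/duplicacy-storage   # same snapshot id and storage as above
duplicacy restore -r 1    # use a revision number reported by 'duplicacy list'

# ...and verify it matches the source (assuming the source has not
# changed since that revision). Any unexplained difference is a red flag.
diff -r /tmp/restore-test ~/projects
```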
The command line version of Duplicacy is free for personal use; the GUI is $20 for the first year or so, plus $5 for each additional year. I use the command line version on most of my machines (PCs, Macs, and Synology DiskStations) but still bought licenses on principle, because great work should be rewarded.

Edit: fixed a bunch of typos and the autocorrect frenzy.