No Data Corruption & Data Integrity
What exactly does the 'No Data Corruption & Data Integrity' motto mean for every hosting account user?
Data corruption is the process of files becoming damaged as a result of a hardware or software failure, and it is one of the main problems that Internet hosting companies face: the larger a hard disk drive is and the more information is stored on it, the more likely it is for data to become corrupted. There are a few fail-safes, yet information often becomes damaged silently, so neither the file system nor the administrators notice anything. As a result, a corrupted file will be treated as a regular one, and if the drive is part of a RAID, the file will be replicated to all the other drives. In theory, this replication provides redundancy; in practice, it makes the damage worse, because the bad copy overwrites the good ones. When a file becomes corrupted, it will be partially or entirely unreadable: a text file will not open properly, an image will display a random mix of colors if it opens at all, and an archive will be impossible to unpack, so you risk losing your website content. Although the most widely used server file systems include various integrity checks, they often fail to discover a problem early enough, or they need a very long time to scan all files, during which the web hosting server will not be operational.
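The detection problem described above comes down to comparing a file against a fingerprint recorded when the file was known to be good. Below is a minimal sketch of that idea in Python, assuming a hypothetical manifest of SHA-256 digests stored at write time; the function names are illustrative, not part of any real file system.

```python
# Sketch of checksum-based corruption detection: a digest recorded at
# write time is compared against the data read back later.
import hashlib


def sha256_of(data: bytes) -> str:
    """Return the SHA-256 digest of a block of data as a hex string."""
    return hashlib.sha256(data).hexdigest()


def is_corrupted(data: bytes, recorded_digest: str) -> bool:
    # A silent bit flip changes the digest, so the mismatch is detectable
    # even when the file system itself reports no read error.
    return sha256_of(data) != recorded_digest


original = b"important site content"
recorded = sha256_of(original)        # stored when the file was written
flipped = b"important site c0ntent"   # one silently corrupted byte

print(is_corrupted(original, recorded))  # a clean read matches its digest
print(is_corrupted(flipped, recorded))   # the mismatch exposes the corruption
```

Without the recorded digest, the flipped byte would go unnoticed, which is exactly the "silent" corruption scenario described above.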
No Data Corruption & Data Integrity in Shared Web Hosting
The integrity of the data that you upload to your new shared web hosting account will be guaranteed by the ZFS file system that we use on our cloud platform. Most hosting providers, ours included, use multiple hard drives to store content, and because the drives work in a RAID, the same data is synchronized between them at all times. If a file on one drive becomes corrupted, however, most file systems have no special check for this, so the bad copy is very likely to be replicated to the other drives. In contrast, ZFS keeps a digital fingerprint, or checksum, for every file. If a file becomes damaged, its checksum will no longer match the one ZFS has on record, so the bad copy is replaced with a healthy one from another drive. Since this happens automatically whenever the data is read, there is virtually no chance of your files ever being silently corrupted.
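The self-healing behaviour described above can be sketched as follows. This is a toy model of a two-way mirror, not the actual ZFS implementation: each block's checksum is stored separately from the data, a mismatching copy is detected on read, and it is overwritten with a copy that still matches. All names here are illustrative.

```python
# Toy model of ZFS-style self-healing on a two-way mirror: the checksum
# is kept apart from the data, so a corrupted copy can be detected and
# repaired from the intact copy during a normal read.
import hashlib


def checksum(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()


def read_with_self_heal(mirror: list, recorded: str) -> bytes:
    # Find a copy whose checksum matches the separately stored record...
    good = next(b for b in mirror if checksum(b) == recorded)
    # ...and repair every copy that does not match before returning.
    for i, b in enumerate(mirror):
        if checksum(b) != recorded:
            mirror[i] = good
    return good


block = b"website data block"
record = checksum(block)                  # fingerprint kept apart from the data
mirror = [block, b"website d@ta block"]   # second copy silently corrupted

healed = read_with_self_heal(mirror, record)
print(healed == block)       # the read still returns intact data
print(mirror[1] == block)    # the bad copy has been replaced with the good one
```

The key design point, which real ZFS shares, is that the checksum lives outside the block it protects, so a drive cannot corrupt both the data and its fingerprint in a consistent way.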