The central point is: it is possible to build storage arrays (as in, it's not rocket science) vastly cheaper if you stop drinking the vendor kool-aid. The system can be designed for whatever characteristics are most useful. Those guys don't need to prove anything to you; they have paying customers and a whole lot of other startups asking them to start an enterprise storage business.
See another example: http://www.linux.com/archive/feature/146861
Of course, someone who believes mainstream Western media more than the dime-a-dozen blogs isn't going to learn.
It's definitely not rocket science, but it is definitely proprietary, since they invested huge sums in R&D to get that product to work the way it does. Let's, for the time being, look at your solution. None of the ones you have specified provides block-level access to storage; they only provide file-level access. So the OS is never going to see it as raw storage space; it will only see it as a mapped folder. I have yet to see any of your cheap solutions provide direct access to storage blocks. Cheap is good, but cheap doesn't always satisfy the requirements, and any solution with block-level access to storage will never come cheap. If we go by your solution, then every day someone has to initiate the transfer of files to the storage using NFS or SFS; the OS can never access the remote storage at block level and can only see it as a folder share.
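To make the distinction concrete, here is a minimal Python sketch; the paths /mnt/nfs_share and /dev/sdb are made up purely for illustration, the latter standing in for a raw local disk or a LUN:

    import os

    # File-level access: the client only ever sees a filesystem exported by the
    # storage box (e.g. over NFS); every byte goes through that remote filesystem layer.
    with open("/mnt/nfs_share/mail/mailbox.db", "rb") as f:   # hypothetical NFS mount
        header = f.read(8192)

    # Block-level access: the OS sees a raw device (a local disk or an iSCSI/FC LUN)
    # and the filesystem or application addresses block offsets on it directly.
    fd = os.open("/dev/sdb", os.O_RDONLY)        # hypothetical LUN device, needs root
    os.lseek(fd, 128 * 8192, os.SEEK_SET)        # seek straight to a block offset
    block = os.read(fd, 8192)
    os.close(fd)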
Yes, I would like to hear the difference between your 1-million-USD storage box purchased from a vendor and building the stuff on your own.
Already explained earlier, but let me go a bit deeper. The solutions you have put forth work well for data backup, but they cannot be used for direct data I/O from the database/application itself, because in those solutions the storage is exposed at file level and not at block level. There is a filesystem layer active on every individual box that holds the hard drives, and in the first solution you mentioned, i.e. Backblaze, the boxes appear to communicate with each other at the application layer.
The second solution, i.e. http://www.linux.com/archive/feature/146861,
provides the tool Openfiler, which is much more extensive but still provides only file-level access, with software RAID levels up to RAID 10. Software RAID has its own disadvantages compared to hardware RAID, with the latter being costlier. Even in this solution, the access provided is still at file level and not block level.
Why do you need block-level access to begin with? Let me walk you through a case study:
1.) You have 5 web servers, 1 mail server and 2 database servers.
2.) Each server has, at minimum, on-board hard drives in RAID 1 providing 300 GB of storage.
3.) You have one storage array of 10 terabytes (10 + 10 drives of 1 TB each) in RAID 10, from the vendors that I specified.
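Just to spell out the capacity arithmetic from points 2 and 3 (the drive counts are read straight off point 3):

    drives          = 10 + 10                 # two sets of 10 x 1 TB drives (point 3)
    drive_tb        = 1
    raid10_usable   = drives * drive_tb / 2   # RAID 10 mirrors every drive: 10 TB usable
    server_local_gb = 300                     # on-board RAID 1 per server (point 2)
    print(raid10_usable, "TB usable on the array;", server_local_gb, "GB local per server")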
The webservers do not need much storage space and hence use their local hard drives.
The mail server needs more than the local 300 GB. So you install the operating system on the local hard drive, and then, using SAN technologies and protocols like iSCSI, FC, etc., you connect to the storage array, whose raw disk space is already mapped out as LUNs. Those LUNs show up in your server as if they were direct hard-drive partitions. The operating system formats such a virtual hard drive with its own filesystem and treats it as a local drive, with the mail application doing block-level I/O directly on it, even though it resides on a storage array. This is made possible by the custom ASICs, which natively support FC or iSCSI and provide block-level access to every LUN.
*The on-spindle data layout is decided directly through the on-board CLI of the storage array, and LUNs are created as required, with whatever RAID level is needed, completely transparently to the initiator. In this case the initiator is the mail server and the target is the storage array.
Similarly, for the database servers, you can create a much bigger LUN and attach it directly to the server as a new partition. The entire storage array comes with its own set of RAID controllers, all completely transparent to the server and the OS.
Is the 1-million-USD equipment capable of delivering more random I/O operations per second (IOPS) from practically the same number of spindles (read: disks)?
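Back-of-the-envelope, the raw spindle math is the same whichever badge is on the shelf; a quick Python sketch with assumed 15k-RPM figures (illustrative numbers, not vendor specs):

    # Rough random-IOPS estimate for one 15k RPM spindle (assumed, typical figures)
    avg_seek_ms      = 3.5                          # average seek time
    rotational_ms    = 0.5 * (60_000 / 15_000)      # half a revolution = 2 ms
    per_spindle_iops = 1000 / (avg_seek_ms + rotational_ms)   # ~180 IOPS

    spindles = 20
    raw_iops = spindles * per_spindle_iops          # ~3,600 IOPS before any cache helps
    print(f"~{raw_iops:.0f} random IOPS from {spindles} spindles, caching not counted")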
Talking in terms of file I/O: all the cheap systems you specified just do I/O. None of them does caching or predictive data regeneration. Enterprise-class systems come with their own caching engine, which keeps the most frequently accessed data in cache. Apart from that, when the same access pattern is noticed, the predictive engine serves up the data without even touching the disks. That kind of caching far surpasses any amount of I/O that present-day disks can deliver. Moreover, both the examples you gave are examples of NAS and definitely not SAN. I would have agreed with you if the requirement were for NAS, but the requirement I see is primarily for SAN, with NAS providing secondary storage just for backup. I would definitely think of using the solutions you mention for NAS, but not as primary storage.
So the server's CPU cycles are completely dedicated to the application and need not be wasted on file I/O at all. The I/O is taken care of by the local iSCSI, FCoE or FC controller without putting any burden on the server's CPU.
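As a side note, the caching engine described above can be pictured, very roughly, as an LRU cache of hot blocks plus naive read-ahead; the Python below is a toy illustration only, nothing like a vendor's actual proprietary engine:

    from collections import OrderedDict

    class BlockCache:
        """Toy LRU cache of storage blocks with naive sequential read-ahead."""

        def __init__(self, capacity_blocks, backend_read, readahead=8):
            self.cache = OrderedDict()           # block number -> data, oldest first
            self.capacity = capacity_blocks
            self.backend_read = backend_read     # function: block number -> bytes (the slow disk path)
            self.readahead = readahead

        def read(self, blk):
            if blk in self.cache:                # hit: served from RAM, no disk access at all
                self.cache.move_to_end(blk)
                return self.cache[blk]
            data = self.backend_read(blk)        # miss: go to the disks once...
            for ahead in range(1, self.readahead):
                self._insert(blk + ahead, self.backend_read(blk + ahead))  # ...and prefetch neighbours
            self._insert(blk, data)
            return data

        def _insert(self, blk, data):
            self.cache[blk] = data
            self.cache.move_to_end(blk)
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict the least recently used block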
Does a storage array used as SAN/NAS have a higher mean time between failures when organized as RAID 10 plus spare disks, as opposed to the same setup configured in Linux on your own?
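For what it's worth, the mirroring math is identical whether a vendor controller or Linux md does it; here is a back-of-the-envelope mean-time-to-data-loss sketch, all numbers assumed for illustration:

    # Rough MTTDL for RAID 10; all figures are assumptions, not measurements
    drive_mtbf_h = 1_000_000          # assumed per-drive MTBF in hours
    pairs        = 10                 # e.g. 20 drives as 10 mirrored pairs
    mttr_h       = 24                 # assumed replace-and-rebuild window; hot spares mainly shrink this

    # Data is lost only if a drive's mirror partner dies inside the rebuild window
    mttdl_h = drive_mtbf_h ** 2 / (2 * pairs * mttr_h)
    print(f"MTTDL ~ {mttdl_h:,.0f} hours (~{mttdl_h / 8766:,.0f} years)")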
Do BBUs, multipathing over Fibre Channel storage networking (as opposed to, say, iSCSI on 10-gigabit Ethernet with NIC bonding, which can easily deliver 40 Gbps at several orders of magnitude lower cost), storage volume replication, and exporting of LUNs in hardware make the million-USD storage arrays better than removing all of these single points of failure on your own?
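The bonding arithmetic behind that aside is simple enough to sketch (link counts assumed, and with the usual per-flow caveat):

    # Aggregate bandwidth of bonded 10 GbE links vs. a single flow (illustrative)
    nics           = 4
    link_gbps      = 10
    aggregate_gbps = nics * link_gbps     # 40 Gbps across many iSCSI sessions
    # Caveat: LACP/bonding hashes each flow onto one link, so a single session still
    # tops out at ~10 Gbps; iSCSI multipathing across sessions is what spreads the load.
    print(aggregate_gbps, "Gbps aggregate;", link_gbps, "Gbps per individual flow")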
All I hear is regurgitated marketing talk from storage vendors paraded as universal truth as if written in a holy book.
In the above example there won't be a single point of failure, because the storage array is never a single standalone box. There are a minimum of two, each acting as a complete redundant copy of the other. Most of the time even the servers are doubled up to make the system completely redundant.
BTW, I am not the type who goes by vendor talk. In fact, all my vendors think I am the toughest nut to crack. All the solutions are good, your cheaper ones and my costly ones; it depends on what we actually need.
The conversation with you really has a surreal quality:
Me: Dude, look, here's how to build, with schematics, part numbers and costs, an 8K-USD device that has twice the storage of a million-USD device.
You: Duh! Such blogs are a dime a dozen.
Me: Look again, it's a blog belonging to a storage vendor that stores data, with utmost reliability, for a large number of consumers backing up their PCs; they used practically less than a million USD to build a business storing petabytes of data.
You: Yeah, but can they prove they are really more reliable than my million-USD storage 'array'? You didn't research the difference between consumer-grade stuff and 'enterprise' stuff.
Me: Duh! What 'enterprise features'? They can all be had at similar cost; here, see another blog link.
Rinse and repeat.
So, having spent a million USD on 20 TB of storage that was obsolete by the time it got shipped to your datacenter/NOC, you can spend the rest of your life arguing about how it is better. Go on, suit yourself.
I am not being surreal at all. You couldn't answer most of the questions I put forth, but you find it easy to put the blame on me. In fact, you are the one with a very rigid mind and probably the illusion that you know best, so wake up. If you think you know best, then prove it to me by answering my questions and convincing me. Trust me, I am not the egoistic type; I will be the first person to praise you if you can convince me, and yes, I will praise you in the open and even accept that my thinking was flawed.