Alfresco, NAS or SAN, that’s the question!

The main requirement for the shared storage is that it can be cross-mounted between the Alfresco servers. Whether this is done via a NAS or a SAN is partly a decision about which technology your organization’s IT department can best support. Faster storage has a positive impact on system performance; Alfresco recommends a throughput of at least 200 MB/sec.
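As a rough sanity check against that 200 MB/sec figure, you can measure sequential throughput on the mounted store with `dd`. This is a sketch meant to be run on a node with the store actually mounted; the `STORE` path is an assumption to adapt:

```shell
# Rough sequential write/read throughput check on the shared content store.
# STORE is an assumption -- point it at wherever your content store is mounted.
STORE=/opt/alfresco/alf_data/contentstore

# Write 1 GiB with direct I/O so the page cache does not inflate the numbers;
# dd reports the achieved throughput at the end (look for >= 200 MB/s).
dd if=/dev/zero of="$STORE/dd_test.bin" bs=1M count=1024 oflag=direct

# Read it back the same way, then clean up.
dd if="$STORE/dd_test.bin" of=/dev/null bs=1M iflag=direct
rm -f "$STORE/dd_test.bin"
```

Direct I/O (`oflag=direct`/`iflag=direct`) matters here: without it you would largely be measuring the Linux page cache, not the storage.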

A NAS allows us to mount the content store via NFS or CIFS on all Alfresco servers, so they can all read and write the same file system at the same time. The only real requirement is that the OS on which Alfresco is installed supports NFS (which in practice is any Linux box). NFS tends to be cheaper and easier, but it is not the fastest option. It is typically sufficient, though.
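As a sketch of the NFS approach, assuming a hypothetical NFS server `nas01` exporting `/export/alf_data` (server name, export path, and mount options are all assumptions to adapt), the mount on each Alfresco node would look something like this:

```shell
# Mount the shared content store on each Alfresco node.
# "nas01" and the paths are placeholders -- substitute your own.
mount -t nfs -o rw,hard,tcp,noatime nas01:/export/alf_data /opt/alfresco/alf_data

# Or persist it in /etc/fstab so it survives reboots:
#   nas01:/export/alf_data  /opt/alfresco/alf_data  nfs  rw,hard,tcp,noatime  0  0
```

The `hard` option is the usual choice for a content store: if the NFS server becomes unreachable, I/O blocks and retries rather than returning errors to Alfresco.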

A SAN is typically faster and more reliable, but obviously more expensive and complex (dedicated hardware and configuration requirements). In order to read and write from all Alfresco servers to the SAN, a cluster file system is necessary. On Red Hat we use GFS2; other Linux flavors use OCFS2 or one of many alternatives.
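To make the GFS2 route concrete, here is a minimal sketch, assuming a three-node Red Hat cluster named `alfcluster` with the shared LUN visible as `/dev/mapper/alf_lun` (both names are placeholders), and with the corosync/DLM cluster stack already configured, since GFS2 relies on it for locking:

```shell
# Create the file system once, from any one node:
#   -p lock_dlm          use the cluster's distributed lock manager
#   -t alfcluster:alf_data   cluster name and file system name
#   -j 3                 one journal per node that will mount it
mkfs.gfs2 -p lock_dlm -t alfcluster:alf_data -j 3 /dev/mapper/alf_lun

# Then mount it on every Alfresco node:
mount -t gfs2 /dev/mapper/alf_lun /opt/alfresco/alf_data
```

The journal count is worth getting right up front: each node that mounts the file system concurrently needs its own journal.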

You may be wondering whether having multiple Alfresco servers write to the same LUN could result in corruption (especially in file headers), which would make NAS (NFS/CIFS) sound like the safer choice. From Alfresco’s standpoint, however, you don’t have to worry about this with either the SAN or the NAS approach, because Alfresco manages its I/O such that no collisions or corruption occur.

Note: If using a SAN, ensure the file system is managed properly to allow for read/write from multiple servers.

I also wanted to share a presentation I gave internally some time ago, as I think it will be useful.

7 thoughts on “Alfresco, NAS or SAN, that’s the question!”

  1. Great post. I’d like to add that we have used an OCFS2 approach on SUSE Linux Enterprise Server in an Active-Active cluster with 3 nodes.

    OCFS2 is okay speed-wise, but a cluster stack always adds complexity and sensitivity to failure, due to the locking daemon and STONITH fencing. It takes some time, investigation, and testing to get it right.

    OCFS2 is not widely supported by vendors; AFAIK only SUSE supports it. In virtualized environments, disk sharing can be even more complex: Hyper-V, for example, did not allow us more than 2 machines using the same disk.

    NFS is a lot simpler. NFS handles locking for you and it’s not complex at all to set up, but of course you need decent network transfer speed and fast disks too. Disk throughput is always a bottleneck in any implementation.

    Adding more Alfresco client nodes to NFS is easier too. For high availability I would suggest using DRBD for the NFS disk in an Active-Passive cluster, which is simpler and less error-prone.

    If you want to improve read speed, tuning NFS is easy, and for maximum performance one may consider cachefilesd, which is supported by the Linux kernel.
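    The cachefilesd idea above can be sketched as follows; the server name, paths, and service invocation are assumptions that vary by distribution:

    ```shell
    # Hypothetical sketch: client-side read caching for the NFS content store
    # via FS-Cache / cachefilesd.

    # /etc/cachefilesd.conf -- local disk directory used as the cache, e.g.:
    #   dir /var/cache/fscache
    #   tag alfresco

    # Start the cache daemon:
    systemctl enable --now cachefilesd

    # Remount the content store with the "fsc" option so NFS uses FS-Cache:
    mount -t nfs -o rw,hard,tcp,fsc nas01:/export/alf_data /opt/alfresco/alf_data
    ```

    Note that FS-Cache only helps repeated reads; writes still go straight to the NFS server.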

  2. Thanks for your comment Jonathan, but why use DRBD if Alfresco takes care of the conflicts? I think it adds more complexity to the equation. Besides, an Active-Passive cluster with Alfresco doesn’t make sense to me because you are losing capacity; Active-Active is how Alfresco actually works.

  3. My experience is that in most cases it’s not worth the discussion because
    – I’ve never seen a system where the contentstore was the real bottleneck
    – requirements for the DB, index, and contentstore shouldn’t be mismatched
    – most IT departments make wrong decisions when buying and/or setting up SAN hardware (not enough controllers, too few and too big disks, wrong disk layout) in order to save money
    – don’t even think about a performant SAN if you don’t get your own controllers and disks; Alfresco can be very hungry for I/O
    – cheap, fast NAS hardware will be fast enough for the contentstore, but you need the fastest you can get for the DB and Solr, using a separate disk layout
    – having the contentstore on an NFS device has many more benefits (snapshots, easy access from several nodes, easy setup, easy backup/restore, price for performance) and may be equal in performance if you can’t expect exclusive access to your SAN hardware (which you can expect for NAS hardware, because it’s much cheaper)
    – switch the question to the DB and SOLR requirements and the discussion may change ;-)

  4. Hey Toni. In the example case I described, DRBD is for replication only, backing an NFS server in an Active-Passive cluster, not for locking management. This way you can have N Alfresco servers acting as NFS clients, with no dual-primary or 3x-active cluster scenarios, and without OCFS2 or any shared-disk cluster at all.
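    As an illustration of that Active-Passive replication setup, a DRBD resource backing the NFS export might look something like this (classic DRBD 8 style; hostnames, devices, and addresses are placeholders):

    ```
    # /etc/drbd.d/nfsdata.res -- hypothetical resource replicating the NFS disk
    resource nfsdata {
      protocol C;                  # synchronous replication between the two NFS nodes
      device    /dev/drbd0;        # replicated block device exposed to the NFS server
      disk      /dev/sdb1;         # local backing disk on each node
      meta-disk internal;
      on nfs1 { address 10.0.0.1:7789; }
      on nfs2 { address 10.0.0.2:7789; }
    }
    ```

    Only the active node mounts `/dev/drbd0` and runs the NFS server; on failover, the passive node is promoted and takes over the export, while the Alfresco nodes remain plain NFS clients throughout.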
