Performance impact of running different filesystems on a single Linux server


13 votes















The book "HBase: The definitive guide" states that




Installing different filesystems on a single server is not recommended.
This can have adverse effects on performance as the kernel may have to
split buffer caches to support the different filesystems. It has been reported that, for certain operating systems, this can have a devastating
performance impact.




Does this really apply to Linux? I have never seen the buffer cache grow beyond 300 MB, and most modern servers have gigabytes of RAM, so splitting the buffer cache between different filesystems should not be an issue. Am I missing something else?
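For reference, this is how I check the current buffer and page cache usage (a quick sketch; exact fields vary slightly between kernels):

# Kernel buffer and page-cache usage, in kB:
grep -E '^(Buffers|Cached)' /proc/meminfo

# Or the summary from free (buffers and cached columns):
free -m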










Tags: linux performance filesystems ext4 xfs






asked Jan 12 '13 at 2:01 by Alex, edited by ewwhite

  • Maybe try emailing/tweeting the author; let's get his or her input! – Dolan Antenucci, Jan 17 '13 at 21:00
























2 Answers


















14 votes














Splitting the buffer cache is detrimental, but the effect is minimal. I'd guess it's so small that it is basically impossible to measure.



You also have to remember that cached data is not shared between different mount points in any case.



While different file systems use different allocation buffers, it's not as if the memory is allocated just to sit there and look pretty. Here is data from slabtop for a system running three different file systems (XFS, ext4, btrfs):




  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
 42882  42460  99%    0.70K   1866       23     29856K shmem_inode_cache
 14483  13872  95%    0.90K    855       17     13680K ext4_inode_cache
  4096   4096 100%    0.02K     16      256        64K jbd2_revoke_table_s
  2826   1136  40%    0.94K    167       17      2672K xfs_inode
  1664   1664 100%    0.03K     13      128        52K jbd2_revoke_record_
  1333    886  66%    1.01K     43       31      1376K btrfs_inode_cache
 (many other objects)


As you can see, every really sizeable cache has a utilisation level above 90%. So if you're using multiple file systems in parallel, the cost is roughly equivalent to losing 5% of system memory, less if the machine is not a dedicated file server.
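If you want to put your own number on that overhead, you can sum the filesystem-specific slab caches straight from /proc/slabinfo. A minimal sketch, assuming the cache names shown above (they vary with kernel version and the filesystems mounted):

# Sum memory held by the ext4/XFS/btrfs inode caches, in MB.
# In /proc/slabinfo, column 3 is <num_objs> and column 4 is <objsize> in bytes.
sudo awk '/^(ext4_inode_cache|xfs_inode|btrfs_inode_cache) / {
    total += $3 * $4
} END { printf "%.1f MB\n", total / (1024 * 1024) }' /proc/slabinfo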






answered Jan 12 '13 at 3:44 by Hubert Kario, edited Jan 14 '13 at 0:26


























  • +1 for informing me about the slabtop command! – Scott, Jan 17 '13 at 19:43











  • I'd say that since those caches are mutually exclusive it doesn't really matter (though it can still have an impact on resource-constrained systems). – poige, Mar 10 '13 at 0:15



















5 votes














I don't think there's a negative impact. I often have ext3/ext4 mixed with XFS (and even ZFS) in the same server setup, and I would not describe the performance as anything less than expected, given the hardware I'm running on.



[root@Lancaster ~]# mount
/dev/cciss/c0d0p2 on / type ext4 (rw)
/dev/cciss/c0d0p7 on /tmp type ext4 (rw,nobarrier)
/dev/cciss/c0d0p3 on /usr type ext4 (rw,nobarrier)
/dev/cciss/c0d0p6 on /var type ext4 (rw,nobarrier)
vol2/images on /images type zfs (rw,xattr)
vol1/ppro on /ppro type zfs (rw,noatime,xattr)
vol3/Lancaster_Test on /srv/Lancaster_Test type zfs (rw,noatime,xattr)


Are you concerned about a specific scenario? What filesystems would be in play? What distribution are you on?
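If you'd rather test than take my word for it, a crude per-mount sequential-write comparison is quick to run. A minimal sketch with dd (paths are from the listing above; adjust for your layout, and note that O_DIRECT is not supported on every ZFS build):

# Crude sequential-write test on an ext4 and a ZFS mount point.
# oflag=direct bypasses the page cache on ext4, so the result
# reflects the filesystem and disks rather than RAM.
dd if=/dev/zero of=/var/ddtest bs=1M count=1024 oflag=direct
dd if=/dev/zero of=/images/ddtest bs=1M count=1024
rm -f /var/ddtest /images/ddtest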






answered Jan 12 '13 at 2:35 by ewwhite, edited Jan 12 '13 at 3:29

























