Server load high, CPU idle. NFS the cause?


I am running into a scenario where I'm seeing a high server load (sometimes upwards of 20 or 30) and very low CPU usage (98% idle). I'm wondering if these wait states are coming from an NFS filesystem connection. Here is what I see in vmstat:



procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
2 1 0 1298784 0 0 0 0 16 5 0 9 1 1 97 2 0
0 1 0 1308016 0 0 0 0 0 0 0 3882 4 3 80 13 0
0 1 0 1307960 0 0 0 0 120 0 0 2960 0 0 88 12 0
0 1 0 1295868 0 0 0 0 4 0 0 4235 1 2 84 13 0
6 0 0 1292740 0 0 0 0 0 0 0 5003 1 1 98 0 0
4 0 0 1300860 0 0 0 0 0 120 0 11194 4 3 93 0 0
4 1 0 1304576 0 0 0 0 240 0 0 11259 4 3 88 6 0
3 1 0 1298952 0 0 0 0 0 0 0 9268 7 5 70 19 0
3 1 0 1303740 0 0 0 0 88 8 0 8088 4 3 81 13 0
5 0 0 1304052 0 0 0 0 0 0 0 6348 4 4 93 0 0
0 0 0 1307952 0 0 0 0 0 0 0 7366 5 4 91 0 0
0 0 0 1307744 0 0 0 0 0 0 0 3201 0 0 100 0 0
4 0 0 1294644 0 0 0 0 0 0 0 5514 1 2 97 0 0
3 0 0 1301272 0 0 0 0 0 0 0 11508 4 3 93 0 0
3 0 0 1307788 0 0 0 0 0 0 0 11822 5 3 92 0 0


From what I can tell, when the I/O goes up, the waits go up. Could NFS be the cause here, or should I be worried about something else? This is a VPS box on a Fibre Channel SAN, so I wouldn't expect the SAN to be the bottleneck. Comments?
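On Linux, the load average counts processes in uninterruptible sleep (the D state) as well as runnable ones, so tasks blocked on NFS or disk I/O push the load up even while the CPU sits idle. As a minimal check (assuming a procps-style ps), the blocked processes and the kernel function they are waiting in can be listed with:

    # ps -eo state,pid,comm,wchan:32 | awk '$1 == "D"'

If the wchan column points at NFS-related functions, the waits are coming from the NFS mount rather than the local SAN-backed disks.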










Tags: linux centos vps vmstat nfs






asked Mar 9 '10 at 1:35 by Mech Software









2 Answers




















You can try using iostat to pin down which device is generating the I/O wait:



          # iostat -k -h -n 5


See the iostat man page for further details. NFS is often part of the problem, especially if you serve a large number of small files or perform a particularly large number of file operations. You can tune NFS access with the usual mount options, such as rsize=32768,wsize=32768. There's a good whitepaper by NetApp covering this topic: http://media.netapp.com/documents/tr-3183.pdf
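For illustration only (the server name, export path, and mount point below are placeholders, not taken from the question), those options can be passed on the mount command line or set in /etc/fstab:

    # mount -o rsize=32768,wsize=32768,hard nfsserver:/export /mnt/nfs

    nfsserver:/export  /mnt/nfs  nfs  rsize=32768,wsize=32768,hard  0  0

Larger rsize/wsize values mean fewer round trips for the same amount of data, which is usually the point of this tuning.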



Also make sure there are no drops on the network interface.
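As a quick check (eth0 here is just a placeholder for whichever interface carries the NFS traffic), the drop and error counters are visible with:

    # ip -s link show eth0

or with netstat -i / ifconfig on older systems; non-zero "dropped" or "overruns" counters point to a network-level problem rather than NFS itself.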



Hope this helps,

Frank

answered Mar 9 '10 at 19:21 by fen
























• Freaking awesome! That was just it. It shows NFS as the device, which is what I suspected (or hoped). I'm not terribly worried about the NFS, since it's a backup device for offsite backups, so if that's waiting I'm fine with that. Thanks again for the tip; that was exactly the kind of information I was searching for.
  – Mech Software, Mar 10 '10 at 13:46

































Adding the async option to /etc/exports helped me bring the load average back to normal:



          /mnt/dir      *(rw,async,pnfs,no_root_squash,no_subtree_check)
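As context for why this helps: async lets the server acknowledge writes before they reach stable storage, so it trades crash safety for lower write latency on the clients. After editing /etc/exports, the change can be applied without restarting the NFS server:

    # exportfs -ra

which re-exports all entries with the updated options.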





answered 12 mins ago by user395869







