Bayes factor vs P value





I am trying to understand Bayes factors (BF). I believe they are like a likelihood ratio of two hypotheses: if the BF is 5, it means H1 is 5 times more likely than H0. A value of 3-10 indicates moderate evidence, while >10 indicates strong evidence.



However, for p-values, 0.05 is traditionally taken as the cut-off. At this p-value, the H1/H0 likelihood ratio should be 95/5, or 19.



So why is a cut-off of >3 used for the BF, while a cut-off of >19 is effectively used for p-values? These values are not anywhere close to each other.



I may be missing something very basic since I am a beginner in this area.










hypothesis-testing bayesian p-value

asked 3 hours ago by rnso, edited 2 hours ago
2 Answers
A few things:

1. The BF gives you evidence in favor of a hypothesis, while a frequentist hypothesis test gives you evidence against a (null) hypothesis. So it's kind of "apples to oranges."

2. These two procedures, despite the difference in interpretations, may lead to different decisions. For example, a BF might reject while a frequentist hypothesis test doesn't, or vice versa. This problem is often referred to as the Jeffreys-Lindley paradox. There have been many posts on this site about this; see e.g. here and here.

3. "At this P value, H1/H0 likelihood should be 95/5 or 19." No, this isn't true, because, roughly, $p(y \mid H_1) \neq 1 - p(y \mid H_0)$. Computing a p-value and performing a frequentist test, at a minimum, does not require you to have any idea about $p(y \mid H_1)$. Also, p-values are often integrals/sums of densities/pmfs, while a BF doesn't integrate over the data sample space.






answered 2 hours ago by Taylor, edited 25 mins ago by Xi'an
• Thanks for your insight. However, if evidence in favor of a hypothesis is an apple, I think evidence for the alternative hypothesis can be an inverted apple, but not an orange! Also, what would you say is the approximate Bayes factor corresponding to P = 0.05? – rnso, 1 hour ago
The Bayes factor $B_{01}$ can be turned into a probability under equal weights as
$$P_{01}=\frac{1}{1+\frac{1}{B_{01}}}$$
but this does not make it comparable with a $p$-value, since

1. $P_{01}$ is a probability in the parameter space, not in the sampling space;

2. its value and range depend on the choice of the prior measure, so they are relative rather than absolute;

3. both $B_{01}$ and $P_{01}$ contain a penalty for complexity (Occam's razor) by integrating out over the parameter space.
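
As a quick numerical check of this conversion (a sketch under the equal-weights assumption; the example values are chosen to echo the cut-offs in the question, not taken from the answer):

```python
# P01 = 1 / (1 + 1/B01): posterior probability of H0 under equal prior
# weights. B10 = 1/B01 is the evidence for H1 over H0, as in the question.
def bf_to_prob(B01):
    return 1 / (1 + 1 / B01)

for B10 in (3, 10, 19):
    P0 = bf_to_prob(1 / B10)
    print(f"B10 = {B10:>2}: P(H0 | data) = {P0:.3f}, P(H1 | data) = {1 - P0:.3f}")
# B10 = 3 gives P(H1 | data) = 0.75, while the naive '95/5' reading of
# p = 0.05 would require B10 = 19 -- which is why the cut-offs differ so much.
```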


If you wish to consider a Bayesian equivalent to the $p$-value, the posterior predictive $p$-value (Meng, 1994) should be investigated:
$$Q_{01}=\mathbb{P}(B_{01}(X)\le B_{01}(x^\text{obs}))$$
where $x^\text{obs}$ denotes the observation and $X$ is distributed from the posterior predictive
$$X\sim \int_\Theta f(x\mid\theta)\,\pi(\theta\mid x^\text{obs})\,\text{d}\theta$$
but this does not imply that the same "default" criteria for rejection and significance should apply to this object.
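
A Monte Carlo sketch of $Q_{01}$ in an assumed normal-normal setup (the posterior is taken here under the $N(0,\tau^2)$ prior, which is one possible reading of $\pi(\theta\mid x^\text{obs})$; the numbers are illustrative, not from the answer):

```python
# Monte Carlo estimate of Meng's posterior predictive p-value
# Q01 = P(B01(X) <= B01(x_obs)), assuming xbar | theta ~ N(theta, s^2)
# and prior theta ~ N(0, tau^2) under H1.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
s, tau = 0.1, 1.0          # sd of the sample mean, prior sd
xbar_obs = 0.2             # observed sample mean

def B01(x):
    """Bayes factor of H0: theta = 0 against H1: theta ~ N(0, tau^2)."""
    return stats.norm.pdf(x, 0, s) / stats.norm.pdf(x, 0, np.sqrt(tau**2 + s**2))

# Conjugate posterior of theta given xbar_obs
v = 1 / (1 / tau**2 + 1 / s**2)   # posterior variance
m = v * xbar_obs / s**2           # posterior mean

# Posterior predictive draws: theta from the posterior, then X | theta ~ N(theta, s^2)
theta = rng.normal(m, np.sqrt(v), size=100_000)
X = rng.normal(theta, s)

Q01 = np.mean(B01(X) <= B01(xbar_obs))
print(f"B01(x_obs) = {B01(xbar_obs):.3f}, Q01 = {Q01:.3f}")
```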






answered 14 mins ago by Xi'an