{ "cells": [ { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "**************************\n", "Fitting Probability Models\n", "**************************" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "Exercise 4.1\n", "============\n", "\n", ".. math::\n", "\n", " \\frac{\\partial L}{\\partial \\sigma}\n", " &= \\frac{\\partial}{\\partial \\sigma} \\left[\n", " -0.5I \\log[2 \\pi] - 0.5I \\log \\sigma^2 -\n", " 0.5 \\sum_{i = 1}^I \\frac{(x_i - \\mu)^2}{\\sigma^2}\n", " \\right]\\\\\n", " 0 &= -\\frac{I}{\\sigma} + \\sum_{i = 1}^I \\frac{(x_i - \\mu)^2}{\\sigma^3}\\\\\n", " \\sigma^2 &= \\sum_{i = 1}^I \\frac{(x_i - \\mu)^2}{I}" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "Exercise 4.2\n", "============\n", "\n", "The log-likelihood is\n", "\n", ".. math::\n", "\n", " \\DeclareMathOperator{\\NormDist}{Norm}\n", " \\DeclareMathOperator{\\NormInvGamDist}{NormInvGam}\n", " L &= \\sum_{i = 1}^I\n", " \\log\\left(\n", " \\NormDist_{x_i}[\\mu, \\sigma^2]\n", " \\right) +\n", " \\log\\left(\n", " \\NormInvGamDist_{\\mu, \\sigma^2}[\\alpha, \\beta, \\gamma, \\delta]\n", " \\right)\\\\\n", " &= -0.5 (I + 1) \\log(2 \\pi) -\n", " 0.5 (I + 1) \\log \\sigma^2 -\n", " 0.5 \\sum_{i = 1}^I \\frac{(x_i - \\mu)^2}{\\sigma^2} +\n", " 0.5 \\log \\gamma +\n", " \\alpha \\log \\beta - \\log \\Gamma[\\alpha] -\n", " (\\alpha + 1) \\log \\sigma^2 -\n", " \\frac{2 \\beta + \\gamma (\\delta - \\mu)^2}{2 \\sigma^2}.\n", "\n", "The mean is derived as\n", "\n", ".. math::\n", "\n", " 0 = \\frac{\\partial L}{\\partial \\mu}\n", " &= \\sum_{i = 1}^I\n", " \\frac{x_i - \\mu}{\\sigma^2} +\n", " \\frac{\\gamma (\\delta - \\mu)}{\\sigma^2}\\\\\n", " \\mu (I + \\gamma) &= \\sum_{i = 1}^I x_i + \\gamma \\delta\\\\\n", " \\mu &= \\frac{\\sum_{i = 1}^I x_i + \\gamma \\delta}{I + \\gamma}.\n", "\n", "The variance is derived as\n", "\n", ".. 
math::\n", "\n", " 0 = \\frac{\\partial L}{\\partial \\sigma}\n", " &= -\\frac{I + 1}{\\sigma} +\n", " \\sum_{i = 1}^I\n", " \\frac{(x_i - \\mu)^2}{\\sigma^3} -\n", " \\frac{2 (\\alpha + 1)}{\\sigma} +\n", " \\frac{2 \\beta + \\gamma (\\delta - \\mu)^2}{\\sigma^3}\\\\\n", " \\sigma^2 \\left( I + 1 + 2 \\alpha + 2 \\right)\n", " &= \\sum_{i = 1}^I (x_i - \\mu)^2 + 2 \\beta + \\gamma (\\delta - \\mu)^2\\\\\n", " \\sigma^2\n", " &= \\frac{\n", " \\sum_{i = 1}^I (x_i - \\mu)^2 + 2 \\beta + \\gamma (\\delta - \\mu)^2\n", " }{\n", " I + 2 \\alpha + 3\n", " }." ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "Exercise 4.3\n", "============\n", "\n", ".. math::\n", "\n", " \\frac{\\partial L}{\\partial \\lambda_k}\n", " &= \\frac{\\partial}{\\partial \\lambda_k} \\left[\n", " \\sum_k N_k \\log \\lambda_k + \\nu \\left( \\sum_k \\lambda_k - 1 \\right)\n", " \\right]\\\\\n", " 0 &= \\frac{N_k}{\\lambda_k} + \\nu\\\\\n", " \\lambda_k &= \\frac{N_k}{\\nu}\n", " & \\quad & \\text{the negative factor is absorbed into } \\nu\n", "\n", "From the constraints on the categorical distribution,\n", "\n", ".. math::\n", "\n", " 0 = \\frac{\\partial L}{\\partial \\nu} &= \\sum_k \\lambda_k - 1\\\\\n", " 1 &= \\sum_k \\lambda_k\\\\\n", " 1 &= \\sum_k \\frac{N_k}{\\nu}\n", " & \\quad & \\text{substitute in optimal parameter for } \\lambda_k\\\\\n", " \\nu &= \\sum_k N_k.\n", "\n", "Thus :math:`\\lambda_k = \\frac{N_k}{\\sum_m N_m}`." ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "Exercise 4.4\n", "============\n", "\n", "The log-likelihood is\n", "\n", ".. math::\n", "\n", " L =\n", " \\sum_k (N_k + \\alpha_k - 1) \\log \\lambda_k +\n", " \\nu \\left( \\sum_k \\lambda_k - 1 \\right).\n", "\n", "Solving for the critical points of :math:`L` gives\n", "\n", ".. 
math::\n", "\n", " 0 = \\frac{\\partial L}{\\partial \\lambda_k}\n", " &= \\frac{N_k + \\alpha_k - 1}{\\lambda_k} + \\nu\\\\\n", " \\lambda_k &= \\frac{N_k + \\alpha_k - 1}{\\nu}\n", " & \\quad & \\text{the negative factor is absorbed into } \\nu\n", "\n", "and\n", "\n", ".. math::\n", "\n", " 0 = \\frac{\\partial L}{\\partial \\nu} &= \\sum_k \\lambda_k - 1\\\\\n", " 1 &= \\sum_k \\lambda_k\\\\\n", " 1 &= \\sum_k \\frac{N_k + \\alpha_k - 1}{\\nu}\n", " & \\quad & \\text{substitute in optimal parameter for } \\lambda_k\\\\\n", " \\nu &= \\sum_k \\left( N_k + \\alpha_k - 1 \\right).\n", "\n", "Thus :math:`\\lambda_k = \\frac{N_k + \\alpha_k - 1}{\\sum_m \\left( N_m + \\alpha_m - 1 \\right)}`." ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ ".. _prince2012computer-ex-4.5:\n", "\n", "Exercise 4.5\n", "============\n", "\n", ".. math::\n", "\n", " Pr(x_{1 \\ldots I})\n", " &= \\int Pr(x_{1 \\ldots I}, \\theta) d\\theta\n", " & \\quad & \\text{marginalization}\\\\\n", " &= \\int \\prod_i Pr(x_i \\mid \\theta) Pr(\\theta) d\\theta\n", " & \\quad & \\text{samples are independent and identically distributed}\n", "\n", "(i)\n", "---\n", "\n", "Assume samples come from a univariate normal distribution with the\n", "corresponding conjugate prior.\n", "\n", ".. 
math::\n", "\n", " \\int \\prod_i Pr(x_i \\mid \\theta) Pr(\\theta) d\\theta\n", " &= \\int \\prod_i \\NormDist_{x_i}\\left[ \\mu, \\sigma^2 \\right] \\cdot\n", " \\NormInvGamDist_{\\mu, \\sigma^2}[\\alpha, \\beta, \\gamma, \\delta] d\\theta\\\\\n", " &= \\int \\kappa \\NormInvGamDist_{\\mu, \\sigma^2}\\left[\n", " \\tilde{\\alpha}, \\tilde{\\beta}, \\tilde{\\gamma}, \\tilde{\\delta}\n", " \\right] d\\theta\n", " & \\quad & \\text{Exercise 3.11}\\\\\n", " &= \\kappa\n", " & \\quad & \\text{the conjugate prior is a valid probability distribution}\n", "\n", "See :ref:`Exercise 3.11 <prince2012computer-ex-3.11>` for more details.\n", "\n", "(ii)\n", "----\n", "\n", "Assume samples come from a categorical distribution with the corresponding\n", "conjugate prior.\n", "\n", ".. math::\n", "\n", " \\DeclareMathOperator{\\CatDist}{Cat}\n", " \\DeclareMathOperator{\\DirDist}{Dir}\n", " \\int \\prod_i Pr(x_i \\mid \\theta) Pr(\\theta) d\\theta\n", " &= \\int \\prod_i \\CatDist_{\\mathbf{x}_i}[\\lambda_{1 \\ldots K}] \\cdot\n", " \\DirDist_{\\lambda_{1 \\ldots K}}[\\alpha_{1 \\ldots K}] d\\theta\\\\\n", " &= \\int \\kappa \\DirDist_{\\lambda_{1 \\ldots K}}\\left[\n", " \\tilde{\\alpha}_{1 \\ldots K}\n", " \\right] d\\theta\n", " & \\quad & \\text{Exercise 3.10}\\\\\n", " &= \\kappa\n", " & \\quad & \\text{the conjugate prior is a valid probability distribution}\n", "\n", "See :ref:`Exercise 3.10 <prince2012computer-ex-3.10>` for more details." ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "Exercise 4.6\n", "============\n", "\n", "From :ref:`Exercise 4.5 <prince2012computer-ex-4.5>`,\n", "\n", ".. math::\n", "\n", " Pr(x_{1 \\ldots I}) = \\kappa =\n", " \\frac{1}{(2 \\pi)^{I / 2}}\n", " \\frac{\n", " \\sqrt{\\gamma} \\beta^\\alpha\n", " }{\n", " \\sqrt{\\tilde{\\gamma}} \\tilde{\\beta}^{\\tilde{\\alpha}}\n", " }\n", "\n", "where\n", "\n", ".. 
math::\n", "\n", " \\tilde{\\alpha} &= \\alpha + 0.5 I\\\\\n", " \\tilde{\\beta}\n", " &= 0.5 \\sum_i x_i^2 +\n", " \\beta + 0.5 \\gamma \\delta^2 -\n", " 0.5 \\frac{(\\gamma \\delta + \\sum_i x_i)^2}{\\gamma + I}\\\\\n", " \\tilde{\\gamma} &= \\gamma + I.\n", "\n", "The following solutions use the conjugate prior with parameters\n", ":math:`\\alpha = 1, \\beta = 1, \\gamma = 1, \\delta = 0`.\n", "\n", ":math:`Pr(\\mathcal{S}_1 = \\{ 0.1, -0.5, 0.2, 0.7 \\})`\n", "-----------------------------------------------------\n", "\n", ".. math::\n", "\n", " \\begin{gather*}\n", " \\tilde{\\alpha} = 3\\\\\n", " \\tilde{\\beta} = 1.37\\\\\n", " \\tilde{\\gamma} = 5\\\\\n", " Pr(\\mathcal{S}_1) = \\kappa = 4.4 \\times 10^{-3}\n", " \\end{gather*}\n", "\n", ":math:`Pr(\\mathcal{S}_2 = \\{ 1.1, 2.0, 1.4, 2.3 \\})`\n", "----------------------------------------------------\n", "\n", ".. math::\n", "\n", " \\begin{gather*}\n", " \\tilde{\\alpha} = 3\\\\\n", " \\tilde{\\beta} = 2.606\\\\\n", " \\tilde{\\gamma} = 5\\\\\n", " Pr(\\mathcal{S}_2) = \\kappa = 6.4 \\times 10^{-4}\n", " \\end{gather*}\n", "\n", ":math:`Pr(\\mathcal{S}_1 \\cup \\mathcal{S}_2 \\mid M_1)`\n", "-----------------------------------------------------\n", "\n", ".. math::\n", "\n", " \\begin{gather*}\n", " \\tilde{\\alpha} = 5\\\\\n", " \\tilde{\\beta} = 4.664\\\\\n", " \\tilde{\\gamma} = 9\\\\\n", " Pr(\\mathcal{S}_1 \\cup \\mathcal{S}_2 \\mid M_1) = \\kappa = 9.7 \\times 10^{-8}\n", " \\end{gather*}\n", "\n", ":math:`Pr(\\mathcal{S}_1 \\cup \\mathcal{S}_2 \\mid M_2)`\n", "-----------------------------------------------------\n", "\n", ".. math::\n", "\n", " Pr(\\mathcal{S}_1 \\cup \\mathcal{S}_2 \\mid M_2) =\n", " Pr(\\mathcal{S}_1) Pr(\\mathcal{S}_2) = 2.8 \\times 10^{-6}\n", "\n", "Priors on :math:`M_1` and :math:`M_2`\n", "-------------------------------------\n", "\n", "Suppose the priors on :math:`M_1` and :math:`M_2` are uniform. The posterior\n", "probability simplifies to\n", "\n", ".. 
math::\n", "\n", " Pr(M_1 \\mid \\mathcal{S}_1 \\cup \\mathcal{S}_2) =\n", " \\frac{\n", " Pr(\\mathcal{S}_1 \\cup \\mathcal{S}_2 \\mid M_1) Pr(M_1)\n", " }{\n", " \\sum_i Pr(\\mathcal{S}_1 \\cup \\mathcal{S}_2 \\mid M_i) Pr(M_i)\n", " } = 0.0335\n", "\n", "and\n", "\n", ".. math::\n", "\n", " Pr(M_2 \\mid \\mathcal{S}_1 \\cup \\mathcal{S}_2) =\n", " \\frac{\n", " Pr(\\mathcal{S}_1 \\cup \\mathcal{S}_2 \\mid M_2) Pr(M_2)\n", " }{\n", " \\sum_i Pr(\\mathcal{S}_1 \\cup \\mathcal{S}_2 \\mid M_i) Pr(M_i)\n", " } = 0.967." ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "Exercise 4.7\n", "============\n", "\n", "Suppose samples are independent and identically distributed. The likelihood\n", "function\n", "\n", ".. math::\n", "\n", " \\DeclareMathOperator*{\\argmax}{arg\\,max}\n", " \\argmax_\\lambda Pr(x_{1 \\ldots I} \\mid \\lambda) =\n", " \\argmax_\\lambda \\prod_{i = 1}^I \\lambda^{x_i} (1 - \\lambda)^{1 - x_i}\n", "\n", "is difficult to optimize analytically due to the product rule. Since\n", ":math:`\\log` is a monotonic function, applying it pointwise gives a\n", "log-likelihood whose optimization reduces to simple chain rule applications:\n", "\n", ".. math::\n", "\n", " L = \\log Pr(x_{1 \\ldots I} \\mid \\lambda)\n", " &= \\sum_{i = 1}^I x_i \\log \\lambda + (1 - x_i) \\log(1 - \\lambda)\\\\\n", " &= I \\log(1 - \\lambda) +\n", " \\sum_{i = 1}^I x_i \\log \\lambda - x_i \\log (1 - \\lambda).\n", "\n", "An optimum of :math:`L` can be found by applying Fermat's theorem:\n", "\n", ".. math::\n", "\n", " 0 = \\frac{\\partial L}{\\partial \\lambda}\n", " &= -\\frac{I}{1 - \\lambda} +\n", " \\sum_{i = 1}^I\n", " x_i \\left( \\frac{1}{\\lambda} + \\frac{1}{1 - \\lambda} \\right)\\\\\n", " I &= \\sum_{i = 1}^I x_i \\left( \\frac{1 - \\lambda}{\\lambda} + 1 \\right)\\\\\n", " \\lambda &= I^{-1} \\sum_{i = 1}^I x_i."
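] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# A numerical sanity check (not part of the original text): the closed form\n", "# above says the ML estimate of a Bernoulli parameter is the sample mean.\n", "# Compare it against a brute-force grid search over candidate lambdas; the\n", "# data below is a hypothetical set of coin flips.\n", "import numpy\n", "\n", "data = numpy.asarray([1, 0, 1, 1, 0, 1])\n", "ml_lambda = numpy.sum(data) / len(data)\n", "\n", "# evaluate the log-likelihood over a grid of candidate parameters\n", "grid = numpy.linspace(0.001, 0.999, 999)\n", "log_lik = (numpy.sum(data) * numpy.log(grid) +\n", "           numpy.sum(1 - data) * numpy.log(1 - grid))\n", "print('closed form: {0}'.format(ml_lambda))\n", "print('grid search: {0}'.format(grid[numpy.argmax(log_lik)]))"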
] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "Exercise 4.8\n", "============\n", "\n", "The posterior function\n", "\n", ".. math::\n", "\n", " \\DeclareMathOperator{\\BetaDist}{Beta}\n", " \\DeclareMathOperator{\\BernDist}{Bern}\n", " \\argmax_\\lambda Pr(\\lambda \\mid x_{1 \\ldots I})\n", " &= \\argmax_\\lambda\n", " \\frac{\n", " Pr(x_{1 \\ldots I} \\mid \\lambda) Pr(\\lambda)\n", " }{\n", " Pr(x_{1 \\ldots I})\n", " }\\\\\n", " &= \\argmax_\\lambda\n", " \\frac{\n", " \\prod_{i = 1}^I Pr(x_i \\mid \\lambda) \\cdot Pr(\\lambda)\n", " }{\n", " Pr(x_{1 \\ldots I})\n", " }\n", " & \\quad & \\text{assuming samples are I.I.D.}\\\\\n", " &= \\argmax_\\lambda \\prod_{i = 1}^I Pr(x_i \\mid \\lambda) \\cdot Pr(\\lambda)\n", " & \\quad & Pr(x_{1 \\ldots I}) \\text{ is a constant}\\\\\n", " &= \\argmax_\\lambda \\prod_{i = 1}^I \\BernDist_{x_i}[\\lambda] \\cdot\n", " \\BetaDist_\\lambda[\\alpha, \\beta]\n", "\n", "is difficult to optimize analytically due to the product rule. Since\n", ":math:`\\log` is a monotonic function, applying it pointwise yields a\n", "log-likelihood whose optimization reduces to simple chain rule applications:\n", "\n", ".. math::\n", "\n", " L &= \\sum_{i = 1}^I\n", " \\log \\BernDist_{x_i}[\\lambda] +\n", " \\log \\BetaDist_\\lambda[\\alpha, \\beta]\\\\\n", " &= \\left[\n", " \\sum_{i = 1}^I x_i \\log \\lambda + (1 - x_i) \\log(1 - \\lambda)\n", " \\right] +\n", " (\\alpha - 1) \\log \\lambda + (\\beta - 1) \\log (1 - \\lambda)\\\\\n", " &= I \\log(1 - \\lambda) +\n", " \\sum_{i = 1}^I\n", " x_i \\left[ \\log \\lambda - \\log(1 - \\lambda) \\right] +\n", " (\\alpha - 1) \\log \\lambda + (\\beta - 1) \\log (1 - \\lambda).\n", "\n", "The optimum of :math:`L` can be found by applying Fermat's theorem:\n", "\n", ".. 
math::\n", "\n", " 0 = \\frac{\\partial L}{\\partial \\lambda}\n", " &= -\\frac{I}{1 - \\lambda} +\n", " \\sum_{i = 1}^I\n", " x_i \\left( \\frac{1}{\\lambda} + \\frac{1}{1 - \\lambda} \\right) +\n", " \\frac{\\alpha - 1}{\\lambda} - \\frac{\\beta - 1}{1 - \\lambda}\\\\\n", " I + \\beta - 1\n", " &= \\sum_{i = 1}^I x_i \\left( \\frac{1 - \\lambda}{\\lambda} + 1 \\right) +\n", " \\frac{\\alpha - 1}{\\lambda} (1 - \\lambda)\\\\\n", " I + \\alpha + \\beta - 2\n", " &= \\sum_{i = 1}^I x_i \\frac{1}{\\lambda} +\n", " \\frac{\\alpha - 1}{\\lambda}\\\\\n", " \\lambda &= \\frac{\\alpha - 1 + \\sum_{i = 1}^I x_i}{I + \\alpha + \\beta - 2}." ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "Exercise 4.9\n", "============\n", "\n", "(i)\n", "---\n", "\n", ".. math::\n", "\n", " Pr(\\lambda \\mid x_{1 \\ldots I})\n", " &= \\frac{Pr(x_{1 \\ldots I} \\mid \\lambda) Pr(\\lambda)}{Pr(x_{1 \\ldots I})}\\\\\n", " &= \\frac{\n", " \\prod_{i = 1}^I Pr(x_i \\mid \\lambda) \\cdot Pr(\\lambda)\n", " }{\n", " Pr(x_{1 \\ldots I})\n", " }\n", " & \\quad & \\text{assuming samples are I.I.D.}\\\\\n", " &= \\frac{\n", " \\prod_{i = 1}^I \\BernDist_{x_i}[\\lambda] \\cdot\n", " \\BetaDist_\\lambda[\\alpha, \\beta]\n", " }{\n", " Pr(x_{1 \\ldots I})\n", " }\\\\\n", " &= \\BetaDist_\\lambda\\left[ \\tilde{\\alpha}, \\tilde{\\beta} \\right]\n", "\n", "See :ref:`Exercise 3.9 <prince2012computer-ex-3.9>` and\n", ":ref:`Exercise 4.5 <prince2012computer-ex-4.5>` for more details.\n", "\n", "(ii)\n", "----\n", "\n", ".. 
math::\n", "\n", " Pr(x^* \\mid x_{1 \\ldots I})\n", " &= \\frac{Pr(x^*, x_{1 \\ldots I})}{Pr(x_{1 \\ldots I})}\\\\\n", " &= \\int \\frac{Pr(x^*, x_{1 \\ldots I}, \\theta)}{Pr(x_{1 \\ldots I})} d\\theta\n", " & \\quad & \\text{marginalization}\\\\\n", " &= \\int\n", " \\frac{\n", " Pr(x^* \\mid \\theta) Pr(x_{1 \\ldots I} \\mid \\theta) Pr(\\theta)\n", " }{\n", " Pr(x_{1 \\ldots I})\n", " } d\\theta\\\\\n", " &= \\int Pr(x^* \\mid \\theta) Pr(\\lambda \\mid x_{1 \\ldots I}) d\\theta\n", " & \\quad & \\text{see (i)}\\\\\n", " &= \\int\n", " \\BernDist_{x^*}[\\lambda]\n", " \\BetaDist_\\lambda[\\tilde{\\alpha}, \\tilde{\\beta}] d\\theta\\\\\n", " &= \\kappa(x^*, \\hat{\\alpha}, \\hat{\\beta})\n", "\n", "where\n", "\n", ".. math::\n", "\n", " \\begin{gather*}\n", " \\kappa =\n", " \\frac{\n", " \\Gamma[\\tilde{\\alpha} + \\tilde{\\beta}]\n", " \\Gamma[\\hat{\\alpha}] \\Gamma[\\hat{\\beta}]\n", " }{\n", " \\Gamma[\\tilde{\\alpha}] \\Gamma[\\tilde{\\beta}]\n", " \\Gamma[\\tilde{\\alpha} + \\tilde{\\beta} + 1]\n", " }\\\\\n", " \\hat{\\alpha} = \\tilde{\\alpha} + x^*\\\\\n", " \\hat{\\beta} = \\tilde{\\beta} + 1 - x^*.\n", " \\end{gather*}" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "Exercise 4.10\n", "=============\n", "\n", "ML confidently predicted that future data will have only :math:`x = 0`.\n", "\n", "Using a uniform beta prior :math:`(\\alpha = 1, \\beta = 1)` reduced MAP to ML.\n", "\n", "Only the Bayesian approach gave a proper weighting to :math:`x = 0` and\n", ":math:`x = 1`." 
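] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# A quick consistency check (not part of the original text): the Bayesian\n", "# predictive distribution of Exercise 4.9 should sum to one over x in {0, 1},\n", "# and kappa(x* = 1) should reduce to t_alpha / (t_alpha + t_beta). The\n", "# posterior parameters below correspond to observing x = {0, 0, 0, 0} under\n", "# a uniform Beta(1, 1) prior, matching the code cell that follows.\n", "import scipy.special\n", "\n", "G = scipy.special.gamma\n", "t_alpha, t_beta = 1 + 0, 1 + 4  # alpha + sum(x), beta + sum(1 - x)\n", "kappa = {}\n", "for x in [0, 1]:\n", "    h_alpha = t_alpha + x\n", "    h_beta = t_beta + 1 - x\n", "    kappa[x] = (G(t_alpha + t_beta) * G(h_alpha) * G(h_beta) /\n", "                (G(t_alpha) * G(t_beta) * G(t_alpha + t_beta + 1)))\n", "print('kappa(0) + kappa(1) = {0}'.format(kappa[0] + kappa[1]))\n", "print('kappa(1) = {0}, t_alpha / (t_alpha + t_beta) = {1}'.format(\n", "    kappa[1], t_alpha / (t_alpha + t_beta)))"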
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy\n", "import scipy.special\n", "\n", "alpha = 1\n", "beta = 1\n", "data = numpy.asarray([0, 0, 0, 0])\n", "I = len(data)\n", "bern = lambda x, theta: theta**x * (1 - theta)**(1 - x)\n", "\n", "# predictive distribution for maximum likelihood\n", "ml_lambda = numpy.sum(data) / I\n", "print('ML: lambda = {0}'.format(ml_lambda))\n", "for x in [0, 1]:\n", " print('x = {0}: {1}'.format(x, bern(x, ml_lambda)))\n", "\n", "# predictive distribution for maximum a posteriori\n", "map_lambda = (alpha - 1 + numpy.sum(data)) / (I + alpha + beta - 2)\n", "_ = '\\nMAP: lambda = {0}, alpha = {1}, beta = {2}'\n", "print(_.format(map_lambda, alpha, beta))\n", "for x in [0, 1]:\n", " print('x = {0}: {1}'.format(x, bern(x, map_lambda)))\n", "\n", "# predictive distribution for Bayesian\n", "t_alpha = alpha + numpy.sum(data)\n", "t_beta = beta + numpy.sum(1 - data)\n", "G = lambda z: scipy.special.gamma(z)\n", "\n", "print('\\nBayesian')\n", "for x in [0, 1]:\n", " h_alpha = t_alpha + x\n", " h_beta = t_beta + 1 - x\n", " _ = G(t_alpha + t_beta) * G(h_alpha) * G(h_beta)\n", " kappa = _ / (G(t_alpha) * G(t_beta) * G(t_alpha + t_beta + 1))\n", " _ = '(alpha = {0}, beta = {1}) x = {2}: {3}'\n", " print(_.format(h_alpha, h_beta, x, kappa))" ] } ], "metadata": { "anaconda-cloud": {}, "celltoolbar": "Raw Cell Format", "kernelspec": { "display_name": "Python [default]", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.2" } }, "nbformat": 4, "nbformat_minor": 0 }