
Sections 5.1 and 5.2

Let's take a look at a joint continuous distribution for two random variables, problem 5.5 on page 233 of your text.  The region of the plane $0\leq y\leq x\leq 1$ looks like

# The region 0 <= y <= x <= 1 in the plane
polygon([(0,0),(1,0),(1,1)]).show()
# A 3D version of the picture (the vertical edges rise to z = 3)
polygon([(0,0,0),(1,0,0),(1,0,3),(1,1,3),(1,1,0),(1,0,0),(0,0,0)]).show()

The probability we wish to compute can be found using symbolic calculus techniques (antidifferentiation).

y=var('y')
# P(X <= 1/2, Y <= 1/3) for the density f(x,y) = 3x on 0 <= y <= x <= 1,
# split at x = 1/3, where the upper limit of integration for y changes
integrate(integrate(3*x,(y,0,x)),(x,0,1/3))+integrate(integrate(3*x,(y,0,1/3)),(x,1/3,1/2))
23/216
float(23/216)
0.10648148148148147

We can also compute this integral using the Monte Carlo method.  Here is the idea.  The integral we wish to compute is the volume of a solid in 3-space.  Enclose that solid in a rectangular box whose sides are parallel to the coordinate planes, and compute the volume of that box (I used the box $[0,1]\times[0,1]\times[0,3]$, which has volume 3).

Next, pick a bunch of points, say $N$ of them, at random in the box.  Count the number of them that lie inside the volume you wish to compute; call this number $S$ (for "success").  Then, the volume we wish to compute is approximately the proportion of randomly chosen points falling in the volume, times the volume of the box.  (That is, the volume we wish to compute is a fraction of the volume of the box.  How big a fraction?  The proportion of randomly chosen points inside the volume of interest should tell you!)
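In symbols: if $S$ of the $N$ random points land inside the solid, then, since the box has volume $3$, the estimate is $\text{Volume}\approx \frac{S}{N}\cdot 3$.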

from random import random

# One random point, uniformly distributed in the box [0,1] x [0,1] x [0,3]
def r(): return (random(),random(),3*random())

# True exactly when the point lies in the solid whose volume we want:
# under the density surface z = 3x, over the part of the region y <= x
# with x <= 1/2 and y <= 1/3
def InRegion(p):
    if p[0]<=1/2 and p[1]<=1/3 and p[1]<=p[0] and p[2]<=3*p[0]:
        return True
    else:
        return False

N=1000; number_results=100

# One estimate: (fraction of the N sample points in the solid) * (box volume 3)
def MonteCarlo():
    sample=[r() for j in range(N)]
    a=[s for s in sample if InRegion(s)]
    return 3*len(a)/N

# Average 100 independent estimates
results=[MonteCarlo() for j in range(number_results)]
float(sum(results)/number_results)
0.10640999999999999
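Since each call to MonteCarlo() is itself random, it is worth checking how much the 100 estimates spread out.  Here is a small added sketch (not part of the original worksheet; it assumes the cell above has just been run, so results is defined):

# Mean and sample standard deviation of the 100 Monte Carlo estimates above
# (assumes the list `results` from the previous cell is still defined)
mean_est=sum(results)/len(results)
sd_est=float(sum((est-mean_est)^2 for est in results)/(len(results)-1))^0.5
print(float(mean_est), sd_est)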

Let's try out the Monte Carlo method on a known, and simple, integral, for comparison purposes: $\int_0^1 x^2\, dx$

# One random point, uniformly distributed in the unit square
def r(): return (random(),random())

# True exactly when the point lies under the curve y = x^2
def InRegion(p):
    x,y=p
    if x<=1 and y<=x^2:
        return True
    else:
        return False

N=100000
sample=[r() for j in range(N)]
a=[s for s in sample if InRegion(s)]
# (fraction of points under the curve) * (area of the unit square, which is 1)
float(1*len(a)/N)
0.33448999999999995

The actual value is 1/3, so this was not too bad!

What is the benefit of using Monte Carlo methods?  First, they work very well when antiderivatives are difficult to compute.  Generally, logical conditions describing a region are much easier to write down than a string of computations evaluating an integral.  Second, Monte Carlo methods for computing integrals can be used as a sort of "benchmark test" for computing.  They are BDAs (Big Dumb Algorithms) and generally slow (my 100,000-point sample did not even get the third decimal place correct here, so we might have to choose a much larger sample), but... it's really, really easy to code Monte Carlo.  So, MC is often used as a reality check against more sophisticated techniques (or used when we don't have a more sophisticated technique).
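As a rough illustration of that slow convergence, here is an added sketch (not part of the original worksheet) that reruns the $\int_0^1 x^2\,dx$ experiment at several sample sizes; the error should shrink roughly like $1/\sqrt{N}$.

# Added sketch: hit-or-miss estimate of integral_0^1 x^2 dx for growing N,
# printed alongside the error |estimate - 1/3|
from random import random

def estimate(N):
    S=0
    for j in range(N):
        x,y=random(),random()
        if y<=x^2:
            S+=1
    return S/N

for N in [100, 1000, 10000, 100000]:
    est=estimate(N)
    print(N, float(est), float(abs(est-1/3)))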

Section 5.7 -- Covariance

First, a plot of the function $z=f(x,y)=xy$

x,y,t=var('x,y,t')
# The surface z = xy, together with the x- and y-axes drawn in the plane z = 0
p1=plot3d(x*y,(x,-1,1),(y,-1,1))
p2=parametric_plot([t,0,0],(t,-1,1),color='red',thickness=3)
p3=parametric_plot([0,t,0],(t,-1,1),color='yellow',thickness=3)
show(p1+p2+p3)
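Since this section is about covariance, here is an added sketch (not part of the original worksheet) that computes $\text{Cov}(X,Y)=E[XY]-E[X]\,E[Y]$ symbolically, assuming we stay with the density $f(x,y)=3x$ on $0\leq y\leq x\leq 1$ from problem 5.5 above.

# Added sketch: Cov(X,Y) for the joint density f(x,y) = 3x on 0 <= y <= x <= 1
# (the density from problem 5.5); the answer should work out to 3/160
x,y=var('x,y')
f=3*x
E_XY=integrate(integrate(x*y*f,(y,0,x)),(x,0,1))
E_X=integrate(integrate(x*f,(y,0,x)),(x,0,1))
E_Y=integrate(integrate(y*f,(y,0,x)),(x,0,1))
E_XY-E_X*E_Y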