Book a Demo!
CoCalc Logo Icon
StoreFeaturesDocsShareSupportNewsAboutPoliciesSign UpSign In
robertopucp
GitHub Repository: robertopucp/1eco35_2022_2
Path: blob/main/Trabajo_grupal/WG1/Grupo_7_jl.ipynb
2714 views
Kernel: Julia 1.6.7
##Primero cargamos las librerías correspondientes # import Pkg # Pkg.add("LinearAlgebra") # Pkg.add("Random") # Pkg.add("Distributions") # Pkg.add("Statistics") # Pkg.add("DataFrames") using Random using Statistics using Distributions, LinearAlgebra using DataFrames
x1 = rand(500) # distribución uniforme x2 = rand(500) # distribución uniforme x3 = rand(500) # distribución uniforme x4 = rand(500) # distribución uniforme z = randn(500) e = randn(500) # distribución normal; mean = 0 y sd = 1 # DGP Y = ones(500) + 0.8.*x1 + 1.2.*x2 + 0.5.*x3 + 1.5.*x4 + e
500-element Vector{Float64}: 1.732376881771378 2.409978687041122 4.150753116878622 2.793717008885756 5.0863608462978505 3.4483497238839727 2.6495342654093355 2.98844380749602 3.4908273860413495 3.1712853692231864 1.8676466375279612 4.019376920268693 4.876522242403515 ⋮ 1.3419909808267683 3.4604431589567786 2.0691561148564275 4.262429421943498 2.5748543033922187 4.188473021936753 1.5412229221891398 3.086776312001024 5.426574663429239 2.7987707979958913 3.502149956514354 3.284028382055414
# joint vectors to matrix X = hcat(ones(500),x1,x2,x3,x4)
500×5 Matrix{Float64}: 1.0 0.822616 0.265703 0.19344 0.25784 1.0 0.4063 0.0468582 0.152245 0.226299 1.0 0.928465 0.929036 0.12129 0.0154756 1.0 0.68993 0.858346 0.346843 0.833202 1.0 0.592235 0.851619 0.203868 0.779895 1.0 0.675279 0.602735 0.245972 0.608907 1.0 0.247588 0.185229 0.803019 0.249005 1.0 0.751151 0.326369 0.148468 0.778784 1.0 0.0720629 0.765281 0.93624 0.904085 1.0 0.50906 0.497169 0.0215682 0.908182 1.0 0.666871 0.399246 0.957679 0.625106 1.0 0.513633 0.0512418 0.713356 0.733343 1.0 0.868238 0.447019 0.793059 0.607345 ⋮ 1.0 0.0639895 0.0632196 0.128167 0.102697 1.0 0.945388 0.485486 0.956185 0.67077 1.0 0.195108 0.192561 0.372833 0.797801 1.0 0.726164 0.457337 0.806353 0.0627122 1.0 0.913781 0.428816 0.31562 0.0350651 1.0 0.275156 0.205836 0.582011 0.903211 1.0 0.289271 0.157083 0.80139 0.0578881 1.0 0.942083 0.379045 0.109796 0.562426 1.0 0.615894 0.865984 0.997874 0.861874 1.0 0.817006 0.812282 0.611677 0.0378364 1.0 0.434898 0.777112 0.229168 0.290471 1.0 0.267566 0.983208 0.637553 0.904835
function ols(M::Matrix ,Y, est::Bool = true , Pvalue = true , instrumento = nothing , index = nothing) if est && Pvalue && isnothing(instrumento) && isnothing(index) beta = inv(transpose(X) * X) * (transpose(X) * Y) ## estimación de beta y_est = X*beta ## Y estimado n = size(X,1) k = size(X,2) - 1 nk= n - k dist = TDist(4) ## 4 grados de libertad m2= Y - y_est sigma2= (transpose(m2) * m2) ./ nk ##sigma cuadrado Var = sigma2* inv(transpose(X) * X) ##hallamos varianza sd = Var[ diagind(Var) ] ## raíz cuadrado a los datos de la diagonal principal de Var test = 3.5 Pvalue = 2*(1 - cdf(dist, test)) df= DataFrame(OLS = beta, standar_error= sd, Pvalue= Pvalue ) return df elseif !isnothing(instrumento) && !isnothing(index) beta = inv(transpose(X) * X) * (transpose(X) * Y) index = index - 1 Z = X Z[:,index] = z ## reemplazamos la variable endógena por el instrumento en la matrix de covariables beta_x = inv(transpose(Z) * X) * (transpose(Z) * X) x_est = Z*beta_x X[:,index] = x_est ## se reemplaza la variable x endógena por su estimado beta_iv = inv(transpose(X) * X) * (transpose(X) * Y) df = DataFrame( OLS=beta , OLS_IV= beta_iv) return df end end ols(X,Y)
5×3 DataFrame Row OLS standar_error Pvalue Float64 Float64 Float64 ─────┼──────────────────────────────────── 1 │ 1.11067 0.025959 0.0248962 2 │ 0.747655 0.0233189 0.0248962 3 │ 1.17503 0.0239942 0.0248962 4 │ 0.527027 0.0213323 0.0248962 5 │ 1.35568 0.0219997 0.0248962