I have run into a problem fitting a binomial logistic regression: the results differ between languages. Having spent an extended period investigating this and searching for suggestions online (and trying all data variations just in case), I believe it comes down to the fitting procedure MATLAB uses for `glmfit`.

(I have a sneaking suspicion it's a maximum likelihood estimator, whereas Python and R use IRLS/IWLS.)

I first ran my problem in MATLAB using:

[b_lr,dev,stats] = glmfit(x',y','binomial','link','logit');

where `x'` is a multi-column array of predictors with one row per element of `y`, and `y` is a response vector with a binary result based on the criterion.

Since that calculation I've moved to Python/rpy2. I tried the same procedure in both Python and R, fitting a logit-linked binomial using the statsmodels equivalent of `glmfit`, and got a different set of coefficients for the regression (note that the position of the response vector differs between the two):

glm_logit = sm.GLM(yvec.T, Xmat, family=sm.families.Binomial()).fit()

and using rpy2:

%R glm.out = glm(Data ~ ONI + Percentiles, family=binomial(logit), data=df)

I'd appreciate it if someone could clarify what MATLAB uses, and any suggestions for replicating the MATLAB result in Python or R.

Since this is a very general question without any details, here is a partial answer that is also very general, based on my comparison of R, Stata, and statsmodels; I don't have MATLAB.

GLM is a maximum likelihood (or quasi-maximum likelihood) model. The parameter estimates should be independent of the optimizer, whether it's IRLS or something else. Differences can come from numerical precision problems, different convergence criteria, or different handling of ill-defined problems.

First, you need to check that they are actually estimating the same model by comparing the design matrix across packages. The two main sources are whether a constant is included by default or not, and how categorical variables are encoded.

Second, check that the data allows for a well-defined model. The main differences across packages are in the treatment of singular or almost-singular cases, and in how they treat perfect separation in the case of Logit and similar models.
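For a single predictor, a simple pre-fit check for perfect separation (a heuristic sketch, not what any of these packages do internally) is whether the predictor values for one class all lie beyond those of the other class:

```python
import numpy as np

# Perfectly separated data: x > 0 always gives y = 1, x < 0 always y = 0.
x = np.linspace(-2, 2, 40)
y = (x > 0).astype(int)

# Separation along x: the smallest x among y=1 exceeds the largest among y=0.
separated = x[y == 1].min() > x[y == 0].max()
print(separated)  # True
```

On separated data the MLE does not exist (coefficients diverge), and packages differ in whether they raise an error, warn, or silently return huge coefficients — a classic source of cross-package disagreement.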

Third, maybe it's a coding mistake. Since you don't provide a replicable example, that's impossible to tell.
