# ordinal logistic regression output

Hi guys, I could use some help interpreting my RStudio output. I am running an ordinal logistic regression and am struggling to interpret the output.

```r
> model_fit <- polr(BerufKategorie ~ EA + EK + Alter + Geschlecht + Motivation*EK + Motivation*EA, data = BAw, Hess = T)
> summary(model_fit)
Call:
polr(formula = BerufKategorie ~ EA + EK + Alter + Geschlecht +
Motivation * EK + Motivation * EA, data = BAw, Hess = T)

Coefficients:
                 Value Std. Error t value
EA            -6.10712    4.17810 -1.4617
EK            -1.98408    3.34476 -0.5932
Alter          0.01648    0.01065  1.5479
Geschlecht     0.25441    0.27454  0.9267
Motivation    -1.68006    0.87583 -1.9182
EK:Motivation  0.56140    0.99528  0.5641
EA:Motivation  1.71133    1.21433  1.4093

Intercepts:
      Value   Std. Error t value
1|2  -5.8061  3.1396     -1.8493
2|3  -3.6051  3.1219     -1.1548

Residual Deviance: 411.282
AIC: 429.282
```

Specific questions I have:

- Is Value the b or the beta?
- What exactly are the intercepts? 1, 2, and 3 are the categories of my dependent variable, but I thought the intercept was the value when everything else is 0, so this doesn't make sense to me.
- How do I get the confidence intervals and the R-squared from here?

I am an absolute beginner at this so I would appreciate someone explaining.

Here are two links that are really helpful for understanding ordinal logistic regression; they also show how to extract confidence intervals.

https://stats.idre.ucla.edu/r/dae/ordinal-logistic-regression/

https://stats.idre.ucla.edu/r/faq/ologit-coefficients/
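On the R-squared part of your question: `polr` does not report one, because there is no true R-squared for ordinal models. One common substitute is McFadden's pseudo R-squared, computed by comparing the fitted model against an intercept-only model. A minimal sketch, assuming the `model_fit` and `BAw` data from the question:

```r
library(MASS)  # provides polr()

# Intercept-only (null) model fit to the same data
null_fit <- polr(BerufKategorie ~ 1, data = BAw, Hess = TRUE)

# McFadden's pseudo R-squared: 1 - logLik(full) / logLik(null)
mcfadden <- 1 - as.numeric(logLik(model_fit)) / as.numeric(logLik(null_fit))
mcfadden
```

Values are typically much smaller than an OLS R-squared, so don't interpret them on the same scale.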

The first part of the output shows the coefficients of the variables you specified in the model.
Value is the coefficient estimate itself (the unstandardized b, on the log-odds scale), not a standardized beta. Its sign tells you whether each variable's relationship with the outcome is positive or negative.
The Std. Error is the standard deviation of the coefficient's sampling distribution, or rather an estimate of it.
The t value is the ratio of the estimate to its standard error, i.e. a measure of how large the estimated coefficient is relative to its uncertainty.
Regarding the intercepts, I will give you only a hint: remember the simple linear regression model, Y = a + bX + error; when the slope b is equal to zero, the intercept is a. Try to think about how this applies to your model, keeping in mind that `polr` labels them 1|2 and 2|3 because they are cutpoints between adjacent outcome categories.
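Note that `summary()` does not print p-values for `polr`, but you can compute approximate ones from the t values by treating them as standard normal (the approach shown on the UCLA pages linked above). A sketch, assuming `model_fit` from the question:

```r
# Coefficient table: Value, Std. Error, t value (includes the intercepts)
ctable <- coef(summary(model_fit))

# Two-sided p-values from the t values, using the normal approximation
p <- pnorm(abs(ctable[, "t value"]), lower.tail = FALSE) * 2
cbind(ctable, "p value" = p)
```

The normal approximation is reasonable for larger samples; with small samples these p-values are only rough guides.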

Here is an example:
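A minimal sketch of how to get confidence intervals and odds ratios from the fit (assuming the `model_fit` object from the question; this mirrors the UCLA page linked above):

```r
# Profile-likelihood confidence intervals on the log-odds scale (can be slow)
ci <- confint(model_fit)

# Faster Wald (normal-approximation) alternative:
# ci <- confint.default(model_fit)

# Exponentiate to get odds ratios with their confidence intervals
exp(cbind(OR = coef(model_fit), ci))
```

An interval for the odds ratio that excludes 1 corresponds to a coefficient interval that excludes 0.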

However, I sincerely encourage you to begin with the OLS model or the simple logistic regression model before trying any variation like ordinal logistic regression, multinomial logit, and so on.

Hope this helps

About the intercepts: you said they were the value when everything is zero, but is there ever a case where one of your observations doesn't belong to any of the three groups? So I don't think an all-zero situation is possible here.