library(EdSurvey)
#> Loading required package: car
#> Loading required package: carData
#> Loading required package: lfactors
#> lfactors v1.0.4
#> Loading required package: Dire
#> Dire v2.1.0
#> EdSurvey v3.0.1
#>
#> Attaching package: 'EdSurvey'
#> The following objects are masked from 'package:base':
#>
#> cbind, rbind
# read in the example data (generated, not real student data)
sdf <- readNAEP(system.file("extdata/data", "M36NT2PM.dat", package = "NAEPprimer"))
# by default uses the jackknife variance method using replicate weights
table(sdf$b013801)
#>
#> 0-10 11-25 26-100 >100 Omitted Multiple
#> 2024 3370 5850 5115 548 8
logit1 <- logit.sdf(I(b013801 %in% c("26-100", ">100")) ~ dsex + b017451, data=sdf)
# use summary to get detailed results
output <- summary(logit1)
class(output)
#> [1] "summary.edsurveyGlm"
# str(output) NOT run--returns very long listing
output
#>
#> Formula: b013801 ~ dsex + b017451
#> Family: binomial (logit)
#>
#> Weight variable: 'origwt'
#> Variance method: jackknife
#> JK replicates: 62
#> full data n: 17606
#> n used: 16302
#>
#> Coefficients:
#> coef se t dof Pr(>|t|)
#> (Intercept) 0.024191 0.060532 0.399647 33.468 0.691956
#> dsexFemale 0.159483 0.051836 3.076653 59.414 0.003164 **
#> b017451Once every few weeks 0.290219 0.065682 4.418577 62.023 4.065e-05 ***
#> b017451About once a week 0.655866 0.069746 9.403688 61.642 1.605e-13 ***
#> b0174512 or 3 times a week 0.973992 0.076716 12.696001 41.514 6.661e-16 ***
#> b017451Every day 1.012180 0.073855 13.704916 46.027 < 2.2e-16 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
None of my usual tools worked. In theory, it is possible to pull the pieces out of the ugly summary.edsurveyGlm object the output is trapped in, but I don't think that is feasible even for output as simple as this example: just too much manual work and too much opportunity for error.
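That said, since `str(output)` shows the summary is a plain list, the coefficient table may be only one `$` away. The sketch below uses a stand-in list with a made-up shape; the element name `coefmat` is an assumption about EdSurvey's internals, so check `str(output)` for the actual names before relying on this.

```r
# A stand-in mimicking the list shape str(output) suggests; the
# element name `coefmat` is an ASSUMPTION about EdSurvey internals --
# inspect str(output) for the real structure before using this.
output <- list(coefmat = data.frame(
  coef = c(0.024191, 0.159483),
  se   = c(0.060532, 0.051836),
  row.names = c("(Intercept)", "dsexFemale")
))

tab <- output$coefmat                      # pull the table out directly
csv_path <- file.path(tempdir(), "logit1_coefs.csv")
write.csv(tab, csv_path)                   # now ready for gt, Word, or LaTeX
```

If an element like that exists, no parsing is needed at all.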
If I had to do this once, I would cut and paste the output, take the pieces needed, convert to CSV, read it back in, and use gt or another tool for formatting; or just cut and paste into Word and do the work there, if I were so unfortunate as to be restricted to that and couldn't do a proper job in LaTeX.
If I had to do this on a regular basis, I would write a parser script geared specifically to the output produced and the target format wanted. For example, I'd use sed to delete the first 11 and the last two lines and comma-delimit the remainder, then use cut to pull out the first column of the second chunk and work out how to attach the remainder as a new column in the first chunk.
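The same parsing idea can be sketched in R instead of sed/cut. This assumes each coefficient row is a term name followed by five fields (coef, se, t, dof, p), optionally ending in significance stars; the sample lines are copied from the printed summary above, and in real use you would get them from `capture.output(output)` after dropping the header and footer lines.

```r
# Sample rows copied from the printed summary (stars and all)
coef_lines <- c(
  "(Intercept)                  0.024191 0.060532  0.399647 33.468 0.691956",
  "dsexFemale                   0.159483 0.051836  3.076653 59.414 0.003164 **",
  "b017451Every day             1.012180 0.073855 13.704916 46.027 < 2.2e-16 ***"
)

parse_coef_lines <- function(lines) {
  lines <- sub("\\s*\\**\\s*$", "", lines)   # drop trailing signif. stars
  # term name = everything before the first of five numeric fields;
  # the p field is kept as text because it can be "< 2.2e-16"
  pat <- paste0("^(.*?)\\s+(-?[0-9.]+)\\s+([0-9.]+)\\s+",
                "(-?[0-9.]+)\\s+([0-9.]+)\\s+(<?\\s?[0-9.e-]+)$")
  parts <- regmatches(lines, regexec(pat, lines, perl = TRUE))
  data.frame(
    term = vapply(parts, `[`, "", 2),
    coef = as.numeric(vapply(parts, `[`, "", 3)),
    se   = as.numeric(vapply(parts, `[`, "", 4)),
    t    = as.numeric(vapply(parts, `[`, "", 5)),
    dof  = as.numeric(vapply(parts, `[`, "", 6)),
    p    = vapply(parts, `[`, "", 7),
    stringsAsFactors = FALSE
  )
}

tab <- parse_coef_lines(coef_lines)
```

The resulting data frame feeds straight into gt or `write.csv()`, so the cut-and-paste step disappears.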
Thank you @technocrat for the reprex and for taking the time to try out your tools! Since I am still fairly new to R and to programming in general, I unfortunately do not know what a parser is or how to write such a script, so I guess I will have to make do with your first suggestion (cut and paste into Word).
A parser is a program that reads a file, extracts the parts you need, and reassembles them into the form you want. It can be written in R or with system tools, but it requires some experience that most R users will not have.