This is probably more a statistical question than an R question, but I want to know how this lm() analysis comes out with a significant p-value (p = 0.008) when the standard error on the change in IGF2 for every kg increase in weight (-0.04 ng/ml) is huge (0.45 ng/ml). The confidence interval for the effect size is therefore massive (-0.9 to 0.8).

The p-value (0.008154) in the bottom row of the summary table is the p-value for the F-statistic (2.978). The F-statistic is a ratio of the variance explained by the regression model relative to a model with just the intercept and no other variables. The p-value is the probability of obtaining an F-statistic at least that large under the null hypothesis that your regression model is no better than a model with just the intercept.

In this case, the result means that even though there are no variables in the model that are individually statistically significant, the model overall provides a statistically significantly better fit to the data than a model with just the intercept.

You can also perform an F test to compare several models to see if adding one or more additional variables results in a statistically significantly improved fit. For example, in your case you could do*:
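The original snippet didn't carry over, but the comparison could look something like the sketch below, using anova() on two nested models. The variable names (Weight, Sex, ETH, IGF2) are borrowed from the question, and the data are simulated stand-ins, not the questioner's Cohort1:

```r
set.seed(1)
# Simulated stand-in for the questioner's Cohort1 data frame
Cohort1 <- data.frame(
  Weight = rnorm(32, mean = 70, sd = 10),
  Sex    = factor(rep(c("M", "F"), 16)),
  ETH    = factor(rep(c("A", "B", "C"), length.out = 32)),
  IGF2   = rnorm(32, mean = 500, sd = 50)
)

m1 <- lm(IGF2 ~ Weight + Sex + ETH, data = Cohort1)  # full model
m2 <- lm(IGF2 ~ Weight, data = Cohort1)              # reduced model

# F test: does adding Sex and ETH give a significantly better fit?
anova(m2, m1)
```

anova() reports the F-statistic and p-value for the null hypothesis that the extra coefficients (here the Sex and ETH dummies) are all zero.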

Here is a description of how to calculate the F statistic. Below is an example of calculating the F-statistic in R. Note that the value calculated is the same as the value returned by summary(m1). The values 29 and 31 are the residual degrees of freedom (df) for models m1 and m2, respectively. The residual df is the number of observations minus the number of parameters (regression coefficients) estimated by the model.
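As a sketch of that calculation (on simulated data rather than the question's, so the variable names and numbers are invented stand-ins), with n = 32 observations so the residual df work out to 29 for m1 and 31 for the intercept-only m2:

```r
set.seed(1)
n <- 32
d <- data.frame(x1 = rnorm(n), x2 = rnorm(n), y = rnorm(n))

m1 <- lm(y ~ x1 + x2, data = d)  # 3 parameters, residual df = 32 - 3 = 29
m2 <- lm(y ~ 1, data = d)        # intercept only, residual df = 32 - 1 = 31

rss1 <- sum(residuals(m1)^2)  # residual sum of squares, full model
rss2 <- sum(residuals(m2)^2)  # residual sum of squares, intercept-only model
df1  <- df.residual(m1)       # 29
df2  <- df.residual(m2)       # 31

# F = (drop in RSS per extra parameter) / (residual variance of full model)
Fstat <- ((rss2 - rss1) / (df2 - df1)) / (rss1 / df1)

# Same value as the F-statistic reported by summary(m1)
all.equal(Fstat, unname(summary(m1)$fstatistic[["value"]]))
```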

* Note that the model formulas include only the variable names, while the data frame is supplied via the data argument. The model should be specified this way, rather than by prefixing each variable in the formula with the data frame name (e.g. Cohort1$Sex).

Along with what Joel has said, I'd like to add a comment. Have you considered multicollinearity?

In your example, most of the regression coefficients are insignificant (only one null hypothesis is rejected at the 5% level of significance, and only marginally), yet the model still rejects the joint null hypothesis that all regression coefficients are zero. This can occur as a result of multicollinearity. Here's a relevant passage from Wikipedia:

Insignificant regression coefficients for the affected variables in the multiple regression, but a rejection of the joint hypothesis that those coefficients are all zero (using an F-test)

I think that this model may be affected by multicollinearity because, as far as I understand, this model uses dummy variables for both Cohort1$Sex and Cohort1$ETH, and sets of dummy variables can be correlated with each other and with the other predictors.
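One common way to check is to compute variance inflation factors (VIFs). Here is a generic base-R sketch on simulated data (not the questioner's Cohort1): each predictor's VIF is 1 / (1 - R²) from regressing that predictor on the remaining ones, and values above roughly 10 are often taken as a warning sign.

```r
set.seed(1)
n  <- 100
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.1)  # deliberately almost collinear with x1
y  <- x1 + rnorm(n)
d  <- data.frame(y, x1, x2)

# VIF for each predictor: 1 / (1 - R^2) from regressing it on the others
vif <- function(data, predictors) {
  sapply(predictors, function(p) {
    others <- setdiff(predictors, p)
    r2 <- summary(lm(reformulate(others, response = p), data = data))$r.squared
    1 / (1 - r2)
  })
}

vif(d, c("x1", "x2"))  # both VIFs are large, flagging multicollinearity
```

The car package's vif() function does the same job on a fitted lm object, if installing a package is an option.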

I'm not saying that this is the case, but it seems quite likely (IMO). I've forgotten much about these things, so I'm not very sure. Maybe Joel or someone else can provide some more helpful input?