(I only meant that, *in general*, such questions are more difficult.)

The code returns the values it is supposed to return. Whether those values are correct relative to what the data is supposed to represent depends on the quality of the data and, in many cases, the quantity.

If you are concerned with the results of the test, assuming data validity, a `reprex` would be helpful. See the FAQ: How to do a minimal reproducible example (`reprex`) for beginners. For now we can just focus on the results at hand.

Q1. The null hypothesis is that the difference between the means does **not** differ from zero at the 0.05 significance (95% confidence) level. If the p-value is <= 0.05, we reject the null and accept the alternative that the difference *does* differ from zero. (I originally stated this backwards, which illustrates why it's always worth working from a checklist.)
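A minimal sketch of that decision rule, using `t.test()` on simulated data (we don't have the original dataset, so these numbers are hypothetical):

```r
# Hypothetical data: two groups whose true means differ
set.seed(42)
a <- rnorm(30, mean = 0)  # group with true mean 0
b <- rnorm(30, mean = 3)  # group with true mean 3

result <- t.test(a, b)    # Welch two-sample t-test
result$p.value            # small here, so...

# ...at the 0.05 level we reject the null and accept the
# alternative that the difference between the means is non-zero
result$p.value <= 0.05
```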

Q2. Here the `p-value` is very small; in fact it is below the floating-point precision of the computer's hardware. In this case we *reject* the null, meaning we accept the alternative that the difference is non-zero.
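To make "smaller than the hardware's floating-point precision" concrete, here is a sketch with hypothetical data; R's `.Machine$double.eps` is the relevant precision constant, and `t.test()` output reports p-values below it as `p-value < 2.2e-16`:

```r
# Hypothetical data with a large, obvious difference in means
set.seed(1)
x <- rnorm(100, mean = 0)
y <- rnorm(100, mean = 5)

result <- t.test(x, y)
result$p.value        # effectively zero

.Machine$double.eps   # hardware precision, about 2.2e-16; p-values
                      # below this print as "p-value < 2.2e-16"
```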

Q3 is like Q1.

See this MIT handout.