I have two pulses: one is a simulation, the other is experimental data. My simulation's voltage outputs need to match the experimental data to within 95%. Meaning, if the voltage of the experimental pulse is 100 V, then the corresponding point in the simulation should be between 95 and 105 volts. I can tweak my simulation as much as I want to get them to match by adding switches and filters to the circuit. But a graph that "looks" like it matches is not the same as it being proven by the data. I ran a t-test and a Mann-Whitney (Wilcoxon) test. I am pretty sure a chi-square test is not correct because that is for discrete variables. I don't think a correlation test is correct because it's not a dose/response relationship. But I am not even sure those are the correct tests to choose.

The data looks like it follows a normal distribution, but again I am not really sure. What statistical tests would you choose to verify this assumption? I am including a box plot of the means and the graph of the voltages. Should I put error bars on the simulation's graph? Is it a problem that the simulation has about 30,000 observations and the experimental data only about 1,000?

This is the output when I run the Student's t-test:
Welch Two Sample t-test

data: DS_ks_test and Pulserfile_ks_test
t = -25.909, df = 1117.4, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-12057.75 -10360.05
sample estimates:
mean of x mean of y
-15455.722 -4246.824

If you are not sure what I am saying I will do my best to clarify. Any help is much appreciated

I do not think you want to use a statistical test, in the usual sense, at all. I understand that you want to show that the two pulse shapes are nearly the same. That cannot be shown by proving that the mean amplitudes are the same. Two pulses might have very different shapes but similar mean amplitudes. If the two pulses have identical values along the Time axis, I would start with a point-by-point difference in their amplitudes. The following might be the command you want.

DIFF <- DS_ks_test - Pulserfile_ks_test

You could also calculate the width of each pulse at various amplitudes to show that the rising and falling edges are similar.
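If the two pulses share the same time base, the ±5% criterion can be checked directly on that point-by-point difference. Here is a minimal sketch (in Python with numpy, just because it runs standalone; the thread's own code is R, and the function name `within_tolerance` and the synthetic pulses are only placeholders):

```python
import numpy as np

def within_tolerance(experimental, simulation, rel_tol=0.05):
    """Fraction of points where the simulation is within
    rel_tol (here 5%) of the experimental amplitude."""
    experimental = np.asarray(experimental, dtype=float)
    simulation = np.asarray(simulation, dtype=float)
    diff = simulation - experimental  # point-by-point difference
    # relative tolerance band: |diff| <= 5% of |experimental|
    ok = np.abs(diff) <= rel_tol * np.abs(experimental)
    return ok.mean()

# toy example: a pulse and a copy that is 2% off everywhere
t = np.linspace(0, 1, 1000)
exp_pulse = -100 * np.exp(-((t - 0.3) / 0.05) ** 2)
sim_pulse = exp_pulse * 1.02
print(within_tolerance(exp_pulse, sim_pulse))  # → 1.0
```

Note that a purely relative band like this becomes nearly impossible to satisfy wherever the amplitude is near zero, so an absolute floor on the tolerance may be needed in those regions.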

The criterion of 95% agreement in amplitude is easily tested near the most negative values. When the amplitude approaches zero, it gets much harder to meet. Look at the curves at times slightly greater than 0.6: the absolute value of the experimental curve is far larger than that of the simulation, and the amplitudes are of opposite sign!

@FJCC, now my question is: can I use DIFF <- DS_ks_test - Pulserfile_ks_test when not every experimental point has a corresponding simulation point? The simulation produces about 30,000 data points and the experimental has about 1,000 before I add filters.
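One common way to handle that mismatch (a suggestion, not something from the posts above) is to interpolate the denser simulation onto the experimental time stamps before differencing. A sketch with numpy, assuming each trace comes as (time, voltage) arrays; the names and toy sine pulses are illustrative only:

```python
import numpy as np

def align_and_diff(t_exp, v_exp, t_sim, v_sim):
    """Interpolate the dense simulation onto the sparser
    experimental time stamps, then take the difference."""
    # np.interp requires the simulation times to be increasing
    v_sim_on_exp = np.interp(t_exp, t_sim, v_sim)
    return v_sim_on_exp - v_exp

# toy example with mismatched sampling rates (30,000 vs 1,000 points)
t_sim = np.linspace(0, 1, 30000)
t_exp = np.linspace(0, 1, 1000)
v_sim = np.sin(2 * np.pi * t_sim)
v_exp = np.sin(2 * np.pi * t_exp)
diff = align_and_diff(t_exp, v_exp, t_sim, v_sim)
print(diff.shape)  # → (1000,)
```

In R the equivalent tool would be approx(); either way the comparison then happens on 1,000 aligned pairs rather than on two differently sized vectors.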

If I understand correctly, you're trying to check whether the two distributions are the same or not. Please confirm whether that is the case.

If it is, then I think a better way to compare them is to compare their empirical distributions. You can use the Kolmogorov-Smirnov test for that, via the ks.test function in the stats package.
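For intuition, the two-sample KS statistic that ks.test reports as D is simply the largest vertical gap between the two empirical CDFs. A minimal sketch of that computation (Python/numpy so it runs standalone; in R you would just call ks.test(x, y), and the function name here is made up):

```python
import numpy as np

def ks_two_sample_D(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the largest
    vertical gap between the two empirical CDFs."""
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    all_vals = np.concatenate([x, y])
    # ECDF of each sample evaluated at every observed value
    cdf_x = np.searchsorted(x, all_vals, side="right") / x.size
    cdf_y = np.searchsorted(y, all_vals, side="right") / y.size
    return np.abs(cdf_x - cdf_y).max()

# identical samples give D = 0; completely separated samples give D = 1
a = np.array([1.0, 2.0, 3.0, 4.0])
print(ks_two_sample_D(a, a))          # → 0.0
print(ks_two_sample_D(a, a + 100.0))  # → 1.0
```

A D near 1 (like the 0.747 reported later in this thread) therefore means the two samples' value distributions barely overlap, regardless of the p-value.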

Edit

Sorry, I made a mistake; I didn't read the data description thoroughly. The KS test is applicable only to independent samples, and here you're dealing with time series, which are dependent. So you should not use the KS test for these data.

I'm not familiar with any test for dependent samples. I found this link, which may be useful for you.

@Yarnabrina I ran a KS goodness-of-fit test to see if the experimental data fit the theoretical (simulation) data. So yes, that is what I am looking for, but I am still not sure if I set all the parameters correctly.
Two-sample Kolmogorov-Smirnov test

data: DS_ks_test and PF_ks_test
D = 0.74736, p-value < 2.2e-16
alternative hypothesis: two-sided