Let's say df presents an aggregated metric in an AB test with groups A and B. x is, for example, the number of page visits, and n is the number of users with that number of visits. (In reality there are far more users and the differences are small.) Note that the number of users differs between the groups.

I want to compare tiles of users. By tile, I mean users in group A that share the same x value. For example, if 34.17% of users in group A have the value 0, I want to compare that to the average x of the lowest 34.17% of users in group B. Next, say users with 1 visit in group A fall between the 34.17th and 74.8th percentiles; I want to compare them with the users in the same percentile range (though it should be more precise) in group B. Etc...
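To make the tile boundaries concrete: they can be read straight off the aggregated counts as cumulative shares (made-up counts here, not my actual df, chosen so the first boundary lands on the 34.17% mentioned above):

```r
# Made-up aggregated counts for one group (not the actual df)
a <- data.frame(x = c(0, 1, 2), n = c(41, 49, 30))
a$share_upto <- cumsum(a$n) / sum(a$n)  # upper percentile boundary of each tile
round(a$share_upto, 4)
#> [1] 0.3417 0.7500 1.0000
```

So users with x = 0 are the lowest 34.17% of this group, users with x = 1 sit between the 34.17th and 75th percentiles, and so on.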

What I don't like is that I have to create a row for each user (and this could be millions) to get the results I want. Is there a simpler/better way to do it?

No, please look at the printed result at the end of my post.
To repeat: the lowest 34 percentiles of users in A have an x value of 0. I want to take the lowest 34 percentiles of users in B as well and compare their average x to A's. Then, users from A with x = 1 are between the 34th and 74th percentiles, and their mean(x) = 1. I want to calculate mean(x) for users in B between the 34th and 74th percentiles. Etc...

Here's an approach using fuzzyjoin to perform a non-equi join between the data and itself, where I've pre-computed the cumulative means for each group.

library(tidyverse)
library(fuzzyjoin)

df_share <- df %>%
  uncount(n) %>%                            # one row per user
  group_by(group) %>%
  mutate(obs = row_number(),
         share = obs / n(),
         cuml_avg = cummean(x)) %>%
  ungroup()

df_share %>%
  group_by(group) %>%
  filter(x != lag(x, default = -1)) %>%     # keep the first occurrence of each x
  ungroup() %>%                             # (grouped so lag doesn't cross groups)
  fuzzy_left_join(df_share, by = c("group", "share"),
                  match_fun = list(`!=`, `>=`)) %>%
  group_by(group.x, x.x, obs.x, share.x, group.y) %>%
  slice_max(share.y) %>%
  ungroup()

Result

We see that the first observation in A (#1/100) represents the first percentile. This is matched with the second observation in B (#2/200).

In my data, which I produced using set.seed(0) at the outset, the first observation in A with an x of 1 is #32, at percentile 0.32, with a cumulative mean x of 0.0312, or 1/32. This is matched with observation 64 in B, which has a cumulative mean of 0.594, since 38 of the first 64 values in B have an x of 1.

# A tibble: 11 × 10
   group.x   x.x obs.x share.x cuml_avg.x group.y   x.y obs.y share.y cuml_avg.y
   <chr>   <int> <int>   <dbl>      <dbl> <chr>   <int> <int>   <dbl>      <dbl>
 1 A           0     1   0.01      0      B           0     2    0.01      0
 2 A           1    32   0.32      0.0312 B           1    64    0.32      0.594
 3 A           2    73   0.73      0.589  B           2   146    0.73      1.22
 4 A           3    97   0.97      0.948  B           5   194    0.97      1.78
 5 A           4   100   1         1.02   B           6   200    1         1.89
 6 B           1    27   0.135     0.0370 A           0    13    0.13      0
 7 B           2    89   0.445     0.719  A           1    44    0.44      0.295
 8 B           3   147   0.735     1.23   A           2    73    0.73      0.589
 9 B           4   175   0.875     1.52   A           2    87    0.87      0.816
10 B           5   192   0.96      1.74   A           2    96    0.96      0.927
11 B           6   198   0.99      1.85   A           3    99    0.99      0.990

I think I didn't explain it well enough... I want to get the same result as I got, BUT at the same time avoid having one row per user during the calculation, since in reality there will be millions of users.

Then please explain better: where are these users?
I understood that they belong to group A or B and that the calculations are done at the group level (not at the user level)?

"I understood they belong to group A or B and the calculations are done at the group level (not at the user level)?"
You understood it correctly.

I'm just afraid of the potential performance issues due to generating large numbers of rows (up to 10 million) and would like to avoid this step if possible.

In the end, I would like to get the same tibble as the one I already have.
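If expanding to one row per user is the concern, the same tibble can be computed from the aggregated counts alone: per group, take cumulative counts and cumulative sums of n * x on the aggregated rows, locate the matched user index in the other group with integer arithmetic, and evaluate the cumulative mean at that index analytically, even when it falls in the middle of a run of identical x values. Here is a base-R sketch; the counts in df are back-solved from the printed result above (31/41/24/3/1 for A, 26/62/58/28/17/6/3 for B), so treat them as illustrative stand-ins for the real data:

```r
# Aggregated counts back-solved from the printed result above (illustrative)
df <- data.frame(
  group = rep(c("A", "B"), c(5, 7)),
  x     = c(0:4, 0:6),
  n     = c(31, 41, 24, 3, 1,  26, 62, 58, 28, 17, 6, 3)
)

# Per-group run boundaries and running sums, computed on the aggregated rows
prep <- function(d) {
  d <- d[order(d$x), ]
  d$n_end    <- cumsum(d$n)                    # index of last user in each run
  d$n_start  <- d$n_end - d$n + 1              # index of first user in each run
  d$N        <- sum(d$n)
  d$sum_prev <- cumsum(d$n * d$x) - d$n * d$x  # sum of x over all earlier runs
  d
}
parts <- lapply(split(df, df$group), prep)

# Cumulative mean of x over the first k users of a prepared group,
# evaluated analytically (k may fall in the middle of a run)
cum_avg_at <- function(d, k) {
  j <- findInterval(k - 1, d$n_end) + 1        # run containing user k
  (d$sum_prev[j] + (k - d$n_start[j] + 1) * d$x[j]) / k
}

# For the first occurrence of each x in group a, match the user in group b
# with the largest share not exceeding it: k = floor(n_start/N_a * N_b),
# done in integer arithmetic to avoid floating-point floor() surprises
match_groups <- function(a, b) {
  k    <- (a$n_start * b$N) %/% a$N
  keep <- k >= 1                               # no counterpart at a smaller share
  a    <- a[keep, ]
  k    <- k[keep]
  j    <- findInterval(k - 1, b$n_end) + 1
  data.frame(group.x = a$group, x.x = a$x, obs.x = a$n_start,
             share.x = a$n_start / a$N,
             cuml_avg.x = (a$sum_prev + a$x) / a$n_start,
             group.y = b$group[j], x.y = b$x[j], obs.y = k,
             share.y = k / b$N, cuml_avg.y = cum_avg_at(b, k))
}

res <- rbind(match_groups(parts$A, parts$B), match_groups(parts$B, parts$A))
res
```

This touches only one row per distinct x value per group, so millions of users with a handful of distinct values cost essentially nothing, and for these counts res reproduces the 11 rows of the tibble above (up to rounding in the printout).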