Optimize return object preparation from gs_power_npe() (#624)
Conversation
Explanation of changes:
I updated the tests by running `local_edition(3)` and comparing results with `expect_equal(x1_c, x2, ignore_attr = TRUE)`. And lastly, we could continue to return a tibble via
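As a hedged illustration of the test change described above: with testthat edition 3, `expect_equal()` compares via waldo, and `ignore_attr = TRUE` lets a plain data frame compare equal to a tibble-classed object holding the same values. The objects below are stand-ins; in the PR, `x1_c` and `x2` are outputs of the old and new `gs_power_npe()`.

```r
library(testthat)

local_edition(3)

# Stand-in objects, not the actual gs_power_npe() output:
# a plain data frame and a tibble-classed copy of it.
x1_c <- data.frame(analysis = 1:3, theta = c(.1, .2, .3))
x2 <- x1_c
class(x2) <- c("tbl_df", "tbl", "data.frame")

# Without ignore_attr = TRUE, the class attribute difference would fail
# the comparison; with it, only the underlying values are compared.
expect_equal(x1_c, x2, ignore_attr = TRUE)
```

This is why the test update accompanies the switch in how the return object is constructed: the values are unchanged, only the attributes differ.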
To benchmark the improvement, I put my changes in a separate function, `gs_power_npe2()`:

microbenchmark::microbenchmark(
  before = gs_power_npe(
    theta = c(.1, .2, .3),
    info = (1:3) * 40,
    upper = gs_b,
    upar = c(Inf, 3, 2),
    lower = gs_b,
    lpar = c(qnorm(.1), -Inf, -Inf)
  ),
  after = gs_power_npe2(
    theta = c(.1, .2, .3),
    info = (1:3) * 40,
    upper = gs_b,
    upar = c(Inf, 3, 2),
    lower = gs_b,
    lpar = c(qnorm(.1), -Inf, -Inf)
  ),
  check = "equivalent"
)
## Unit: milliseconds
##    expr    min      lq     mean  median     uq     max neval cld
##  before 6.5527 6.81630 7.635802 7.14665 7.4633 41.6138   100  a
##   after 4.0639 4.16815 4.518994 4.27110 4.5039 12.0563   100  b
yihui left a comment
No worries. I'm all for using data.frame() here! Thanks!

I simply asked Claude to optimize gs_design_ahr() without specific instructions yesterday, and it naturally chose base R. I hope it didn't do so just to accommodate my personal taste.

No, I doubt it. Base R is clearly the right choice in this context. The data is too small to justify using something like {data.table}. For a simple gathering of results into a table,
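A minimal sketch of the kind of change under discussion, with illustrative column names rather than the actual `gs_power_npe()` internals: assembling the small return object with base `data.frame()` instead of `tibble::tibble()` skips tibble's per-call validation overhead.

```r
theta <- c(.1, .2, .3)
info <- (1:3) * 40

# Before (hypothetical): assemble the return object as a tibble.
res_tibble <- tibble::tibble(
  analysis = seq_along(theta),
  theta = theta,
  info = info
)

# After (hypothetical): a base data frame holds the same values with
# a cheaper constructor; callers can still coerce it to a tibble.
res_df <- data.frame(
  analysis = seq_along(theta),
  theta = theta,
  info = info
)
```

Both objects hold identical values; for a table this small, the constructor cost dominates, which matches the roughly 40% speedup seen in the benchmark above.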
@LittleBeannie After you approve, please squash and merge this PR. I don't think it justifies a NEWS bullet, but please let me know if you feel otherwise.
LittleBeannie left a comment
Thanks for the fix!
As discussed in the group meeting, this PR optimizes the preparation of the return object by gs_power_npe().

Sorry @yihui, I had prepared this yesterday afternoon but didn't have time to submit the PR. Could we review/merge this targeted PR first, and then rebase your more comprehensive PR on top?
xref: #623