Add fold_mut alternative to fold #481
mlodato517 wants to merge 1 commit into rust-itertools:master from mlodato517:ml-fold-mut
Conversation
Based on the discussions on rust-lang/rust#76746
```rust
group.bench_function("fold", |b| {
    b.iter(|| {
        (0i64..1_000_000)
            .chain(0i64..1_000_000)
```
Added a benchmark with `chain` to show that, because `fold_mut` uses `for_each`, it benefits from `Chain`'s specialization of `fold`.
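For context, the reason `chain` matters is that `fold_mut` is (roughly) built on `for_each`, so it inherits whatever internal-iteration specialization the adaptor provides. A free-function sketch of the idea, not the exact code in this PR:

```rust
// Sketch only; the method in this PR may differ in signature and bounds.
fn fold_mut<I, B, F>(iter: I, init: B, mut f: F) -> B
where
    I: Iterator,
    F: FnMut(&mut B, I::Item),
{
    let mut acc = init;
    // for_each forwards to the iterator's own fold, so adaptors like
    // Chain that specialize fold get to use that specialization here.
    iter.for_each(|item| f(&mut acc, item));
    acc
}
```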
````rust
/// # Examples
/// ```
/// # use itertools::Itertools;
/// let evens = [1, 2, 3, 4, 5, 6].iter().fold_mut(Vec::new(), |evens, &num| {
````
Is this example too lame? I was going to use the example of counting but ... there's `.counts` right above this so that seemed worse haha
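For reference, a fully worked version of that doc example might look something like this (the closure body and the final assertion are guesses at the intent, not necessarily what the diff contains):

```rust
use itertools::Itertools;

fn main() {
    // Hypothetical completion of the doc example above; assumes the
    // `fold_mut` method added in this PR.
    let evens = [1, 2, 3, 4, 5, 6].iter().fold_mut(Vec::new(), |evens, &num| {
        if num % 2 == 0 {
            evens.push(num);
        }
    });
    assert_eq!(evens, vec![2, 4, 6]);
}
```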
This bugged me in some of my projects, too. However, I realized that I then needed e.g. […]. Thus, I solved this in a different way: I introduced […]
In my implementation I assumed that the compiler is clever enough to optimize it as well as possible in any case, which, reading your PR, seems wrong. So, my questions:
Great question! I have no idea 🤔 I imagine it'd be great to have this variant of all the methods since you can implement […]
Maybe! There's a little discussion going on here. I do think that, if we could get […]
Just checking back in for some quick guidance here: should I be working to implement […]
I think this is sufficiently stale to close :-)
This PR
Adds a `fold_mut` alternative to `fold`.
Why?
- `fold` requires an awkward final line to return the accumulator, as mentioned here (see the sketch below).
- `fold_mut` can be faster. And sometimes it can be slower! See the benchmarks here. TL;DR: if the accumulator is at least as big as a `&mut` to the accumulator, `fold` can be slower than `fold_mut`, since moving the larger accumulator is more work than moving the `&mut`.
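To make the first point concrete, the call-site difference looks roughly like this (assuming the `fold_mut` signature this PR proposes; the example itself is illustrative):

```rust
use itertools::Itertools;

fn main() {
    // With fold, the closure must hand the accumulator back each time:
    let evens_fold = (1..=6).fold(Vec::new(), |mut evens, num| {
        if num % 2 == 0 {
            evens.push(num);
        }
        evens // the awkward final line
    });

    // With the proposed fold_mut, the closure just mutates through &mut
    // and returns ():
    let evens_fold_mut = (1..=6).fold_mut(Vec::new(), |evens, num| {
        if num % 2 == 0 {
            evens.push(num);
        }
    });

    assert_eq!(evens_fold, evens_fold_mut);
}
```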
Background
A lot of the background is in this issue and this PR. Based on the discussion there, it seemed that this wasn't warranted for the standard library so I figured I'd pitch it here!
Benchmarks
I ran the benchmarks like: […]
(not sure if there's a shorter syntax) and saw:
- `i64`
- `vec`
- chained iterator into an `i64`
- chained iterator into a `vec`

I could also add a benchmark with like an `i8` or something to show how much faster `fold` is when folding into something much smaller than a reference, if we want that! I just had a hard time coming up with an accumulator that wouldn't overflow but would still be super readable. It might be like `(0i8..10).chain(-10i8..0).cycle().take(1_000_000)` or some such.
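A sketch of what such a benchmark might look like (using `wrapping_add` to sidestep the overflow issue; the group and function names are made up, and `fold_mut` is the method proposed in this PR, not released itertools API):

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use itertools::Itertools;

// Sketch only: assumes the fold_mut method proposed in this PR.
// wrapping_add keeps the i8 accumulator from overflowing, at some
// cost to readability.
fn i8_accumulator(c: &mut Criterion) {
    let mut group = c.benchmark_group("i8 accumulator");

    group.bench_function("fold", |b| {
        b.iter(|| {
            (0i8..10)
                .chain(-10i8..0)
                .cycle()
                .take(1_000_000)
                .fold(0i8, |acc, x| black_box(acc.wrapping_add(x)))
        })
    });

    group.bench_function("fold_mut", |b| {
        b.iter(|| {
            (0i8..10)
                .chain(-10i8..0)
                .cycle()
                .take(1_000_000)
                .fold_mut(0i8, |acc, x| *acc = black_box(acc.wrapping_add(x)))
        })
    });

    group.finish();
}

criterion_group!(benches, i8_accumulator);
criterion_main!(benches);
```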