feat(neon_framework): optimize blur hash handling for being performan… #3185
vauvenal5 wants to merge 1 commit into nextcloud:main
Conversation
provokateurin left a comment:

This is a really nice improvement, thanks!
Force-pushed from 9aedfca to b3d4aec.

@provokateurin I believe this is ready now for a new review whenever you have a few minutes.
…t in galleries

Signed-off-by: vauvenal5 <vauvenal5.ndgme@slmails.com>
Force-pushed from b3d4aec to 1ed7bec.
You need to insert the BlurBloc in a bunch of tests where it is used in the tested widgets.

Yeah, I just saw. I will finish this up in the next couple of days, and I will improve my workflow; sorry for the chaos.
    /// It uses a [Queue] to limit the maximum number of concurrent decoding tasks,
    /// and an LRU cache to cache the results of previously decoded blur hashes so that they can be reused without re-decoding.
    class BlurBloc extends Bloc {
      final _blurHashQueue = Queue(parallel: 10);
Did you do some testing with a higher parallel count? It would be nice to try it on a low-end Android device and use that as the baseline.
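For context on what `Queue(parallel: 10)` provides, here is a minimal concurrency-limited task queue sketched in TypeScript. The class name and internals are illustrative assumptions for this discussion, not the actual Dart package used in the PR:

```typescript
// Minimal sketch of a queue that runs at most `parallel` tasks at once,
// analogous to Queue(parallel: 10) in the PR. Illustrative only.
class ParallelQueue {
  private running = 0;
  private waiters: Array<() => void> = [];

  constructor(private readonly parallel: number) {}

  async add<T>(task: () => Promise<T>): Promise<T> {
    // Wait until a slot frees up; re-check after waking to avoid races.
    while (this.running >= this.parallel) {
      await new Promise<void>((resolve) => this.waiters.push(resolve));
    }
    this.running++;
    try {
      return await task();
    } finally {
      this.running--;
      // Wake one waiter, if any, so the next queued task can start.
      this.waiters.shift()?.();
    }
  }
}
```

With such a limiter, raising the parallel count trades decoding latency against CPU contention, which is why the baseline measurement on a low-end device matters.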
    /// and an LRU cache to cache the results of previously decoded blur hashes so that they can be reused without re-decoding.
    class BlurBloc extends Bloc {
      final _blurHashQueue = Queue(parallel: 10);
      final _blurHashCash = SimpleLRUCache<String, BlurHashDecodeTask>(1000);
What's the amount of memory used per cache entry? I'm not sure there is a benefit to making the cache any larger, but if there is, then finding a good value based on the minimum hardware requirements (which are not defined, but basically some low-end Android device) would be nice.
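As a rough back-of-the-envelope answer (my own estimate, not a measurement from the PR): assuming each cached entry holds a decoded 32×32 RGBA bitmap (the default size used elsewhere in this PR), the pixel data alone is about 4 KiB per entry, so 1000 entries is on the order of 4 MiB:

```typescript
// Rough memory estimate for the blur hash LRU cache.
// Assumption (not from the PR): each entry stores a decoded 32x32 RGBA
// bitmap; object headers and cache bookkeeping overhead are ignored.
const width = 32;
const height = 32;
const bytesPerPixel = 4; // RGBA, one byte per channel
const entries = 1000;

const bytesPerEntry = width * height * bytesPerPixel;
const totalMiB = (bytesPerEntry * entries) / (1024 * 1024);

console.log(`${bytesPerEntry} bytes per entry, ~${totalMiB.toFixed(1)} MiB total`);
// prints "4096 bytes per entry, ~3.9 MiB total"
```

If callers can request larger decode sizes, the per-entry cost grows quadratically with the edge length, which argues for keeping the cache size conservative on low-end devices.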
    /// If the image is already cached, it returns the cached image.
    /// Otherwise, it creates a new [BlurHashDecodeTask] to decode the blur hash, and returns the result of that task.
    /// If [cache] is `true`, the result will be cached. [cache] does not have any effect on lookup.
    Future<ui.Image> getBlurHash(String blurHash, ui.Size size, {bool cache = true}) {
Do we ever not want to cache it? I don't see any reason why we wouldn't, and it can still be added later if ever needed.
    final task = BlurHashDecodeTask(blurHash: blurHash, size: size);

    if (_blurHashCash.get(task.key) != null) {
      return _blurHashCash.get(task.key)!.result.future;
    }
I think I already commented on an earlier version: can you move the key computation outside of the class? Constructing the object just to get the key is unnecessary work. In fact, the key is only used inside this method, so you can easily inline the key computation without adding an extra method.
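To illustrate the suggestion, here is a TypeScript sketch of a cache lookup where the key is computed inline instead of by constructing a task object first. The key format and the `SimpleLRUCache` internals here are assumptions for illustration, not the project's actual code:

```typescript
// Illustrative LRU cache: a Map's insertion order tracks recency.
class SimpleLRUCache<K, V> {
  private map = new Map<K, V>();
  constructor(private readonly capacity: number) {}

  get(key: K): V | undefined {
    const value = this.map.get(key);
    if (value !== undefined) {
      // Re-insert to mark this entry as most recently used.
      this.map.delete(key);
      this.map.set(key, value);
    }
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      // Evict the least recently used entry (first in insertion order).
      const oldest = this.map.keys().next().value as K;
      this.map.delete(oldest);
    }
  }
}

// Inline key computation: no task object is needed just to look up the
// cache. The key format is a hypothetical example.
function cacheKey(blurHash: string, width: number, height: number): string {
  return `${blurHash}@${width}x${height}`;
}
```

With this shape, the decode task object is only constructed on a cache miss, which is the point of the review comment.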
    future: blurHash != null
        ? blurBloc.getBlurHash(
            blurHash!,
            size ?? const Size.square(32),
            cache: false,
          )
        : null,
This would be a bit easier to read if you moved it outside of the return statement. Also, if the blurHash variable is null, you don't need to add the FutureBuilder stuff and can just return the SizedBox directly (which is easier if you move it out of the FutureBuilder as well).
No problem at all! Usually it's not feasible to run all tests locally for all changes, so catching test failures in CI is totally fine :)
This PR makes multiple blur hash improvements: