feat(neon_framework): optimize blur hash handling for being performan…#3185

Open
vauvenal5 wants to merge 1 commit into nextcloud:main from vauvenal5:upstream/blur-hash-handling

Conversation

@vauvenal5
Collaborator

This PR makes several blur hash improvements:

  • moves blur hash decoding out of the build path, because decoding there freezes the UI when opening a gallery with many blur hashes
  • moves decoding onto a separate scheduler so that Flutter can decide when to decode
  • adds a result cache so that already decoded blur hashes can be reused; this is especially important when scrolling up and down through a gallery
  • fixes UI flickering when the blur hash is replaced by the actual preview result
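
The approach can be sketched roughly like this (a minimal sketch assuming the `queue` package's `Queue(parallel: …)`/`add` API; the plain map here stands in for the LRU cache the PR actually uses, and `decodeBlurHash` is an illustrative name, not the PR's code):

```dart
import 'dart:async';
import 'package:queue/queue.dart';

/// Sketch: decoding runs through a bounded queue (at most 10 jobs at
/// once) instead of synchronously in the widget build path, and results
/// are cached so scrolling back to a tile never decodes the same hash twice.
final _decodeQueue = Queue(parallel: 10);
final _results = <String, Future<Object>>{};

Future<Object> decodeBlurHash(String blurHash, Future<Object> Function() decode) {
  // putIfAbsent makes repeated requests for the same hash share one future,
  // so a tile that scrolls out and back in reuses the in-flight or cached result.
  return _results.putIfAbsent(blurHash, () => _decodeQueue.add(decode));
}
```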

Member

@provokateurin provokateurin left a comment


This is a really nice improvement, thanks!

@vauvenal5 vauvenal5 force-pushed the upstream/blur-hash-handling branch 3 times, most recently from 9aedfca to b3d4aec on March 22, 2026 at 17:31
@vauvenal5
Collaborator Author

@provokateurin I believe this is now ready for a new review, whenever you have a few minutes.

…t in galleries

Signed-off-by: vauvenal5 <vauvenal5.ndgme@slmails.com>
@vauvenal5 vauvenal5 force-pushed the upstream/blur-hash-handling branch from b3d4aec to 1ed7bec on March 22, 2026 at 20:32
@provokateurin
Member

You need to insert the BlurBloc in a bunch of tests where it is used in the tested widgets.

@vauvenal5
Collaborator Author

Yeah, I just saw. Will finish this up over the next couple of days. And will improve my workflow, sorry for the chaos.

/// It uses a [Queue] to limit the maximum number of concurrent decoding tasks,
/// and an LRU cache to store the results of previously decoded blur hashes so that they can be reused without re-decoding.
class BlurBloc extends Bloc {
final _blurHashQueue = Queue(parallel: 10);
Member


Did you do some testing with higher parallel count? Would be nice to try it on a low-end Android device and use it as the baseline.

/// and an LRU cache to store the results of previously decoded blur hashes so that they can be reused without re-decoding.
class BlurBloc extends Bloc {
final _blurHashQueue = Queue(parallel: 10);
final _blurHashCash = SimpleLRUCache<String, BlurHashDecodeTask>(1000);
Member


What's the amount of memory used per cache entry? I'm not sure if there is a benefit to making the cache any larger, but if there is then finding a good value based on the minimum hardware requirements (which are not defined, but basically some low-end Android device) would be nice.
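
For a rough sense of scale (assuming each entry is a decoded image at the default 32×32 request size seen later in this PR, stored as 4-byte RGBA pixels; the real per-entry cost depends on how `ui.Image` stores its pixels and is an implementation detail):

```dart
// Back-of-the-envelope memory estimate for the cache, under the
// assumptions above: 32x32 pixels at 4 bytes per pixel.
void main() {
  const width = 32, height = 32, bytesPerPixel = 4;
  const perEntry = width * height * bytesPerPixel; // 4096 bytes per image
  const entries = 1000;
  const totalMiB = perEntry * entries / (1024 * 1024);
  // Roughly 4 KiB per entry and about 3.9 MiB for 1000 entries, which
  // would be modest even for a low-end Android device.
  print('per entry: $perEntry B, total for $entries entries: ~$totalMiB MiB');
}
```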

/// If the image is already cached, it returns the cached image.
/// Otherwise, it creates a new [BlurHashDecodeTask] to decode the blur hash, and returns the result of that task.
/// If [cache] is `true`, the result will be cached. [cache] does not have any effect on lookup.
Future<ui.Image> getBlurHash(String blurHash, ui.Size size, {bool cache = true}) {
Member


Do we ever not want to cache it? I don't see any reason why we wouldn't and it can still be added later if ever needed.

Comment on lines +32 to +36
final task = BlurHashDecodeTask(blurHash: blurHash, size: size);

if (_blurHashCash.get(task.key) != null) {
return _blurHashCash.get(task.key)!.result.future;
}
Member


I think I already commented it on an earlier version: Can you move the key computation outside of the class? Constructing the object just to get the key is unnecessary work. In fact the key is only used inside this method, so you can easily inline the key computation without adding an extra method.
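
The suggested change might look roughly like this (a sketch, not the PR's code: the key format below is an assumption and should match whatever `BlurHashDecodeTask.key` actually computes; it also folds the repeated `get` into a single lookup):

```dart
// Inline the key computation instead of constructing a
// BlurHashDecodeTask just to read its key. Illustrative key format.
final key = '$blurHash#${size.width}x${size.height}';

// Single cache lookup, reused for both the null check and the return.
final cached = _blurHashCash.get(key);
if (cached != null) {
  return cached.result.future;
}
```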

Comment on lines +153 to +159
future: blurHash != null
? blurBloc.getBlurHash(
blurHash!,
size ?? const Size.square(32),
cache: false,
)
: null,
Member


This would be a bit easier to read if you move it outside of the return statement. Also if the blurHash variable is null you don't need to add the FutureBuilder stuff and can just return the SizedBox directly (which is easier if you move it out of the FutureBuilder as well).
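
The suggested restructuring might look roughly like this (a sketch under the assumption that `dart:ui` is imported as `ui` as elsewhere in the PR; the surrounding widget code is assumed from context, not copied from the diff):

```dart
// Early-return the plain placeholder when there is no blur hash,
// so the FutureBuilder machinery is only built when there is work to do.
if (blurHash == null) {
  return SizedBox(width: size?.width, height: size?.height);
}

// Compute the future outside the return statement for readability.
final blurHashFuture = blurBloc.getBlurHash(
  blurHash!,
  size ?? const Size.square(32),
  cache: false,
);

return FutureBuilder<ui.Image>(
  future: blurHashFuture,
  builder: (context, snapshot) {
    // ... render the decoded image, or the placeholder while waiting ...
  },
);
```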

@provokateurin
Member

> And will improve my workflow, sorry for the chaos.

No problem at all! Usually it's not feasible to run all tests locally for all changes, so catching test failures in CI is totally fine :)
