
Conversation

@madebymozart
Contributor

No description provided.

@madebymozart madebymozart merged commit aa9e7c1 into main Oct 28, 2025
1 of 2 checks passed
@gemini-code-assist

Summary of Changes

Hello @madebymozart, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly advances the Imagen Editing sample application by completing its foundational implementation. It involves a comprehensive refactoring of the application's package structure for improved modularity and clarity. Crucially, the core image inpainting feature has been fully integrated, enabling the application to leverage Firebase AI's Imagen models for generating and editing images based on user-defined masks and textual prompts. This work transitions the sample from a skeletal structure to a functional demonstration of AI-powered image editing.

Highlights

  • Package Refactoring: The entire com.android.ai.catalog package structure has been refactored to com.android.ai.samples.imagenediting and com.android.ai.samples.imagenediting.sample for better organization and clarity within the Imagen Editing sample.
  • Imagen Model Integration: The ImagenEditingDataSource now correctly initializes and uses the imagen-4.0-ultra-generate-001 and imagen-3.0-capability-001 models from Firebase AI, replacing placeholder code.
  • Inpainting Logic Implementation: The core inpaintImage functionality has been fully implemented in ImagenEditingDataSource, utilizing the editingModel with INPAINT_INSERTION mode to perform image edits based on a source image, mask, and prompt (see the sketch after this list).
  • Mask Editor Functionality: The ImagenEditingMaskEditor now supports drawing masks on images through detectDragGestures, allowing users to define areas for inpainting.
  • ViewModel Integration: The ImagenEditingViewModel has been updated to orchestrate the inpainting process, calling the data source and managing the UI state (loading, success, error) for image generation.
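
For illustration, here is a minimal sketch of what the completed inpainting call can look like. The editingModel, the INPAINT_INSERTION mode, the editSteps parameter, and the images/asBitmap response shape are taken from this pull request; the editImage method and the ImagenRawImage, ImagenRawMask, and ImagenEditingConfig names are assumptions and may not match the actual Firebase AI API surface.

    import android.graphics.Bitmap

    // Hedged sketch of ImagenEditingDataSource.inpaintImage; the class and
    // method names flagged above as assumptions are not confirmed against
    // the Firebase AI SDK.
    suspend fun inpaintImage(
        sourceImage: Bitmap,
        mask: Bitmap,
        prompt: String,
        editSteps: Int = 50, // arbitrary illustrative default
    ): Bitmap {
        val imageResponse = editingModel.editImage(
            referenceImages = listOf(
                ImagenRawImage(sourceImage), // the image to edit
                ImagenRawMask(mask),         // marks the region to regenerate
            ),
            prompt = prompt,
            config = ImagenEditingConfig(
                editMode = ImagenEditMode.INPAINT_INSERTION,
                editSteps = editSteps,
            ),
        )
        // firstOrNull() guards against an empty-but-successful response,
        // as the review below also suggests.
        return imageResponse.images.firstOrNull()?.asBitmap()
            ?: error("Imagen API returned no images for inpainting.")
    }
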
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the root of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, and Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check its output and use code with caution.


@gemini-code-assist (bot) left a comment


Code Review

This pull request successfully implements the image editing features by completing the pending TODO items and refactoring the package structure for better organization. The core logic for image generation and inpainting is now in place. My review focuses on improving robustness, performance, and maintainability. I've identified a potential crash risk, a performance concern in the mask drawing implementation, a minor formatting issue, and a gap in error logging. The suggested changes will make the code more stable and easier to debug.

                editSteps = editSteps,
            ),
        )
        return imageResponse.images.first().asBitmap()


Severity: high

Using .first() is not safe as it will throw a NoSuchElementException if the images list is empty, causing the app to crash. The API might return an empty list even on a successful response. Using firstOrNull() and then handling the null case by throwing a more descriptive exception is a more robust approach.

Suggested change:

    - return imageResponse.images.first().asBitmap()
    + return imageResponse.images.firstOrNull()?.asBitmap() ?: throw IllegalStateException("Imagen API returned no images for inpainting.")

Comment on lines +70 to +72
            } catch (e: Exception) {
                _uiState.value = ImagenEditingUIState.Error(e.localizedMessage ?: "An unknown error occurred during inpainting")
            }


Severity: high

The exception is caught and an error message is shown to the user, but the exception itself is not logged. This is often referred to as "swallowing an exception" and makes debugging failures very difficult because the stack trace and other details are lost. It's a best practice to always log the caught exception to aid in debugging.

            } catch (e: Exception) {
                Log.e("ImagenEditingViewModel", "An error occurred during inpainting", e)
                _uiState.value = ImagenEditingUIState.Error(e.localizedMessage ?: "An unknown error occurred during inpainting")
            }
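
For context, here is how the surrounding ViewModel flow described in the Highlights (loading, success, error) could look with this logging fix applied. Only the catch branch is quoted from the diff; the Loading and Success state names and the dataSource.inpaintImage call are assumptions.

    fun inpaint(source: Bitmap, mask: Bitmap, prompt: String) {
        viewModelScope.launch {
            _uiState.value = ImagenEditingUIState.Loading // assumed state name
            try {
                val result = dataSource.inpaintImage(source, mask, prompt)
                _uiState.value = ImagenEditingUIState.Success(result) // assumed state name
            } catch (e: Exception) {
                // Log before surfacing the error so the stack trace is preserved.
                Log.e("ImagenEditingViewModel", "An error occurred during inpainting", e)
                _uiState.value = ImagenEditingUIState.Error(e.localizedMessage ?: "An unknown error occurred during inpainting")
            }
        }
    }

Here viewModelScope.launch runs the call asynchronously while the UI observes _uiState for the three states.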

        (change.position.y - offsetY) / scale,
    )
    it.lineTo(transformedChange.x, transformedChange.y)
    currentPath = Path().apply { addPath(it) }


Severity: medium

Creating a new Path object on every drag event by calling Path().apply { addPath(it) } is inefficient and can lead to performance degradation and UI jank, as it allocates a new object on every move. This is a common workaround to trigger recomposition for mutable objects like Path that are not directly observable by Compose.

For better performance, consider a different state management strategy. For example, you could collect the Offsets in a mutableStateListOf during the drag gesture and then construct the Path once within the Canvas onDraw scope. This would avoid the rapid object allocation during the drag.
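
A minimal sketch of that alternative, assuming a standalone composable (the MaskCanvas name and the stroke styling are illustrative, not the sample's actual code):

    import androidx.compose.foundation.Canvas
    import androidx.compose.foundation.gestures.detectDragGestures
    import androidx.compose.runtime.Composable
    import androidx.compose.runtime.mutableStateListOf
    import androidx.compose.runtime.remember
    import androidx.compose.ui.Modifier
    import androidx.compose.ui.geometry.Offset
    import androidx.compose.ui.graphics.Color
    import androidx.compose.ui.graphics.Path
    import androidx.compose.ui.graphics.drawscope.Stroke
    import androidx.compose.ui.input.pointer.pointerInput

    @Composable
    fun MaskCanvas(modifier: Modifier = Modifier) {
        // Compose observes additions to this list, so each new point triggers
        // a redraw without allocating a new Path on every drag event.
        val points = remember { mutableStateListOf<Offset>() }

        Canvas(
            modifier = modifier.pointerInput(Unit) {
                detectDragGestures(
                    onDragStart = { start -> points.add(start) },
                    onDrag = { change, _ -> points.add(change.position) },
                )
            },
        ) {
            if (points.size > 1) {
                // The Path is rebuilt here, in the draw phase, instead of
                // inside the gesture callback.
                val path = Path().apply {
                    moveTo(points.first().x, points.first().y)
                    for (i in 1 until points.size) lineTo(points[i].x, points[i].y)
                }
                drawPath(path = path, color = Color.White, style = Stroke(width = 24f))
            }
        }
    }

The coordinate mapping from the original code (offsetX, offsetY, scale) would still be applied when converting the collected points into the mask bitmap's coordinate space.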

} catch (e: Exception) {
_uiState.value = ImagenEditingUIState.Error(e.localizedMessage ?: "An unknown error occurred during inpainting")
}
} }


Severity: medium

This line has a formatting issue with multiple closing braces and incorrect indentation. This makes the code harder to read and maintain. It's better to have each closing brace on its own line with the correct indentation.

        }
    }

@madebymozart madebymozart self-assigned this Nov 10, 2025