[Feature] Bloomed & RAIN_FORECAST_ALERT Push Notification#115
ℹ️ Recent review info: no actionable comments were generated in the recent review. 🎉
Configuration used: defaults | Review profile: CHILL | Plan: Pro
📒 Files selected for processing (2)
🚧 Files skipped from review as they are similar to previous changes (1)
📝 Walkthrough
Adds multi-channel notification features (withered, bloomed, bloomed-spot, rain-forecast), event publishing on blooming, new schedulers and a distributed evening lock (Redisson), weather/location API extensions, repository query additions, multiple eligibility/message builders, and related DB/JPA query support.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User
    participant BloomingFacade as BloomingFacade
    participant EventPub as ApplicationEventPublisher
    participant FirstVoteListener as BloomedFirstVoteNotificationEventListener
    participant RegionResolver as BloomedRegionResolver
    participant FirstVoteChecker as BloomedFirstVoteChecker
    participant BloomedService as BloomedNotificationService
    participant Fcm as FcmSender
    User->>BloomingFacade: uploadBloomingStatus(BLOOMED)
    BloomingFacade->>BloomingFacade: persist blooming
    BloomingFacade->>EventPub: publish(BloomingAddedEvent)
    EventPub->>FirstVoteListener: handleBloomingAddedEvent
    FirstVoteListener->>RegionResolver: resolveRegion(lat,long)
    FirstVoteListener->>FirstVoteChecker: isFirstBloomedVoteOfYear(region)
    alt first vote
        FirstVoteListener->>BloomedService: sendBloomedNotificationForRegion(region)
        BloomedService->>Fcm: sendAll(messages)
        Fcm-->>BloomedService: success
        BloomedService->>BloomedService: store notification records
    end
```

```mermaid
sequenceDiagram
    participant Scheduler as EveningNotificationScheduler
    participant Lock as NotificationExecutionLock
    participant Weekday as WeekdayNotificationScheduler
    participant Rain as RainForecastNotificationService
    participant NotifSvc as NotificationService
    Scheduler->>Scheduler: cron trigger (18:00)
    Scheduler->>Lock: runWithEveningLock(today, action)
    alt Lock ACQUIRED
        Lock->>Weekday: executeWeekdayNotification(today)
        Weekday->>NotifSvc: sendWeekdayNotificationsSync()
        Lock->>Rain: invoke sendRainForecastNotifications()
        Rain->>NotifSvc: send notifications
        Lock->>Lock: release lock
    else SKIPPED_BY_CONTENTION or FAIL_OPEN
        Lock-->>Scheduler: handle accordingly (skip or fail-open)
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks | ✅ 3 passed | ❌ 2 failed
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (3 passed)
Actionable comments posted: 20
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
pida-core/core-domain/src/main/kotlin/com/pida/notification/weekday/WeekdayNotificationService.kt (1)
40-59: ⚠️ Potential issue | 🟠 Major
`sendWeekdayNotificationsSync` must use `sendAll` instead of `sendAllAsync` to maintain synchronous ordering.
The method is documented as synchronous for the evening orchestrator's execution-order guarantee (lines 35-39), but line 57 calls `sendAllAsync()`, which is annotated with `@Async` and returns immediately without blocking. This causes line 59 to store notification records before message delivery completes, breaking the orchestration ordering and persisting records with unknown delivery status.
Replace `fcmSender.sendAllAsync(messages)` with `fcmSender.sendAll(messages)` on line 57 to ensure synchronous execution.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pida-core/core-domain/src/main/kotlin/com/pida/notification/weekday/WeekdayNotificationService.kt` around lines 40 - 59, The sendWeekdayNotificationsSync method is mistakenly calling the async variant fcmSender.sendAllAsync(messages) which returns immediately and breaks synchronous ordering; change the call in sendWeekdayNotificationsSync to use the synchronous fcmSender.sendAll(messages) so notification sending completes before storeNotificationRecords(eligibleUsers) is invoked, ensuring ordering guarantees are preserved.
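The sync-versus-async ordering issue above can be shown in isolation. A minimal sketch with hypothetical stand-ins (`FcmSender`, `WeekdayNotificationSender`, and the record-store callback are illustrative, not the project's real types):

```kotlin
// Minimal sketch of the ordering guarantee; all names here are hypothetical.
interface FcmSender {
    fun sendAll(messages: List<String>)       // blocks until delivery is attempted
    fun sendAllAsync(messages: List<String>)  // returns immediately; delivery happens later
}

class WeekdayNotificationSender(
    private val fcmSender: FcmSender,
    private val storeRecords: (List<String>) -> Unit,
) {
    // Synchronous path: records are stored only after sendAll returns (or throws),
    // so the delivery outcome is known at the point of persistence.
    fun sendWeekdayNotificationsSync(messages: List<String>) {
        fcmSender.sendAll(messages)
        storeRecords(messages)
    }
}
```

With `sendAllAsync` in place of `sendAll`, the `storeRecords` call would race the actual delivery, which is exactly the ordering break the comment describes.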
🟡 Minor comments (5)
pida-clients/weather-client/src/main/kotlin/com/pida/client/weather/WeatherServiceImpl.kt (lines 116-124)
116-124: ⚠️ Potential issue | 🟡 Minor
Potential NPE on response property chain.
The chain `response.response.body.items.item` assumes all intermediate objects are non-null. If the KMA API returns a partial response (e.g., `body` or `items` is null), this will throw an NPE that isn't caught by the outer try-catch (which only wraps the API call itself). Consider adding null-safe handling or moving the property access inside the try-catch block.
🛡️ Proposed fix to include property access in error handling

```diff
 val response =
     try {
         kmaWeatherClient.getVilageForecast(baseDate, baseTime, location.nx, location.ny)
     } catch (error: Exception) {
         logger.error("Failed to fetch weather forecast", error)
         throw ErrorException(ErrorType.WEATHER_API_CALL_FAILED)
     }
-return response.response.body.items.item
+return try {
+    response.response.body.items.item
+} catch (error: NullPointerException) {
+    logger.error("Invalid weather API response structure", error)
+    throw ErrorException(ErrorType.WEATHER_DATA_NOT_AVAILABLE)
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pida-clients/weather-client/src/main/kotlin/com/pida/client/weather/WeatherServiceImpl.kt` around lines 116 - 124, The current code in WeatherServiceImpl calls kmaWeatherClient.getVilageForecast inside a try-catch but then accesses response.response.body.items.item outside of that block, risking an NPE; move the property access into the try block or add null-safe checks for response.response, body, items and item (e.g., validate each is non-null) and on any missing/invalid part log the error via logger.error and throw ErrorException(ErrorType.WEATHER_API_CALL_FAILED) so all API and parsing failures are handled consistently (locate via the response variable and getVilageForecast call in WeatherServiceImpl).
pida-clients/weather-client/src/main/kotlin/com/pida/client/weather/WeatherServiceImpl.kt (lines 42-46)
42-46: ⚠️ Potential issue | 🟡 Minor
Returning 0 when tomorrow's POP data is missing may silently suppress rain alerts.
The method returns 0 if no `POP` items exist for tomorrow (after filtering). This could occur if:
- The forecast doesn't extend to tomorrow yet
- The API response lacks tomorrow's data due to timing
This silent fallback to 0 would prevent `RAIN_FORECAST_ALERT` notifications from triggering, potentially causing missed alerts. Consider throwing an exception or logging a warning when no POP data is found for tomorrow, similar to the empty-items check above.
🛡️ Proposed fix to distinguish missing data from 0% probability

```diff
 return items
     .asSequence()
     .filter { it.category == "POP" && it.fcstDate == tomorrowDate }
     .mapNotNull { it.fcstValue.toIntOrNull() }
-    .maxOrNull() ?: 0
+    .maxOrNull()
+    ?: run {
+        logger.warn("No POP forecast data available for tomorrow ($tomorrowDate) at nx=${location.nx}, ny=${location.ny}")
+        0
+    }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pida-clients/weather-client/src/main/kotlin/com/pida/client/weather/WeatherServiceImpl.kt` around lines 42 - 46, The current sequence that filters POP for tomorrow in WeatherServiceImpl (the block starting with "return items.asSequence().filter { it.category == \"POP\" && it.fcstDate == tomorrowDate }...") returns 0 when no POP entries are found, silently suppressing a missing-data case; change this to mirror the earlier empty-items handling by detecting the no-results case and either log a warning with context (include tomorrowDate and items size) and then throw an exception (e.g., IllegalStateException or a domain-specific MissingForecastDataException) instead of returning 0 so callers and the RAIN_FORECAST_ALERT path can distinguish "no data" from a genuine 0% POP. Ensure the thrown/logged message references tomorrowDate, "POP", and WeatherServiceImpl to aid debugging.
pida-core/core-domain/src/main/kotlin/com/pida/user/location/UserLocationReader.kt (lines 13-17)
13-17: ⚠️ Potential issue | 🟡 Minor
Add basic input guards before radius lookup.
Line 13 currently forwards raw geo inputs; invalid values (e.g., negative radius or out-of-range coordinates) should fail fast to avoid invalid query behavior.
🛡️ Suggested defensive validation
```diff
 fun readUserLocationsWithinRadius(
     latitude: Double,
     longitude: Double,
     radiusMeters: Double,
-) = userLocationRepository.findWithinRadius(latitude, longitude, radiusMeters)
+): List<UserLocation.Info> {
+    require(latitude in -90.0..90.0) { "latitude must be in [-90, 90]" }
+    require(longitude in -180.0..180.0) { "longitude must be in [-180, 180]" }
+    require(radiusMeters > 0.0) { "radiusMeters must be > 0" }
+    return userLocationRepository.findWithinRadius(latitude, longitude, radiusMeters)
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pida-core/core-domain/src/main/kotlin/com/pida/user/location/UserLocationReader.kt` around lines 13 - 17, The readUserLocationsWithinRadius function currently forwards raw inputs to userLocationRepository.findWithinRadius; add input guards to validate latitude is between -90 and 90, longitude between -180 and 180, and radiusMeters is non-negative (and >0 if your semantics require a positive radius), and throw an IllegalArgumentException with a clear message when any check fails so the invalid call fails fast before invoking userLocationRepository.findWithinRadius.
pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/blooming/BloomingCustomRepository.kt (lines 129-145)
129-145: ⚠️ Potential issue | 🟡 Minor
The "After" contract is implemented as an inclusive boundary.
Line 144 uses `>=` even though the method name says "After". This can double-count boundary timestamps across repeated checks. Prefer a strict `>` or rename the API to indicate inclusivity.
Suggested fix

```diff
-path(BloomingEntity::createdAt).greaterThanOrEqualTo(createdAtAfter),
+path(BloomingEntity::createdAt).greaterThan(createdAtAfter),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/blooming/BloomingCustomRepository.kt` around lines 129 - 145, The method countBloomedVotesByRegionAndCreatedAtAfter in BloomingCustomRepository currently uses an inclusive boundary (path(BloomingEntity::createdAt).greaterThanOrEqualTo(createdAtAfter)) which conflicts with the "After" semantic; change the predicate to a strict comparison by replacing greaterThanOrEqualTo with greaterThan so createdAt > createdAtAfter (or alternatively rename the method to indicate inclusivity if you prefer to keep >=).
pida-core/core-domain/src/main/kotlin/com/pida/blooming/BloomingFacade.kt (line 74)
74-74: ⚠️ Potential issue | 🟡 Minor
Move event publication after upload URL creation succeeds.
The event should only be published after all operations complete successfully. Currently, if `createUploadUrl` fails, `BloomingAddedEvent` has already been published, creating an inconsistent state where listeners receive notifications for a blooming that failed to generate an upload URL.
Suggested fix

```diff
 suspend fun uploadBloomingStatus(newBlooming: NewBlooming): BloomingImageUploadUrl {
     bloomingService.add(newBlooming)
-    eventPublisher.publishEvent(BloomingAddedEvent(newBlooming))
-
-    return BloomingImageUploadUrl.from(
-        imageS3Caller.createUploadUrl(newBlooming.userId, ImagePrefix.FLOWERSPOT.value, newBlooming.flowerSpotId),
-    )
+    val uploadUrl =
+        imageS3Caller.createUploadUrl(
+            newBlooming.userId,
+            ImagePrefix.FLOWERSPOT.value,
+            newBlooming.flowerSpotId,
+        )
+    eventPublisher.publishEvent(BloomingAddedEvent(newBlooming))
+    return BloomingImageUploadUrl.from(uploadUrl)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pida-core/core-domain/src/main/kotlin/com/pida/blooming/BloomingFacade.kt` at line 74, Move the call to eventPublisher.publishEvent(BloomingAddedEvent(newBlooming)) so it runs only after createUploadUrl(...) completes successfully; locate the BloomingFacade method where newBlooming is created and createUploadUrl is invoked, remove or relocate the current publishEvent line that executes before createUploadUrl, and place it after the successful return of createUploadUrl (or after any uploadUrl assignment) so listeners are only notified when the upload URL creation for newBlooming has succeeded.
🧹 Nitpick comments (5)
pida-core/core-domain/src/main/kotlin/com/pida/notification/RegionStatusCount.kt (1)
9-13: Consolidate the duplicated `RegionStatusCount` model.
This class duplicates `com.pida.blooming.RegionStatusCount` (same structure), which adds avoidable divergence and import-ambiguity risk. Prefer one canonical model and reuse it across domains.
♻️ One possible consolidation (typealias approach)

```diff
 package com.pida.notification
-import com.pida.blooming.BloomingStatus
-import com.pida.support.geo.Region
-
-/**
- * Region과 BloomingStatus별 투표 수
- */
-data class RegionStatusCount(
-    val region: Region,
-    val status: BloomingStatus,
-    val count: Long,
-)
+typealias RegionStatusCount = com.pida.blooming.RegionStatusCount
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pida-core/core-domain/src/main/kotlin/com/pida/notification/RegionStatusCount.kt` around lines 9 - 13, RegionStatusCount duplicates the existing com.pida.blooming.RegionStatusCount; remove this duplicate data class and consolidate to the canonical model by either (A) deleting this RegionStatusCount declaration and updating all usages to import com.pida.blooming.RegionStatusCount, or (B) replacing this file with a single typealias RegionStatusCount = com.pida.blooming.RegionStatusCount so callers in this package keep the same name; update imports/usages in code that referenced this file and run a build to fix any unresolved references.
pida-core/core-domain/src/main/kotlin/com/pida/notification/withered/WitheredThresholdChecker.kt (1)
32-62: You can return the pipeline directly.
Lines 32-62 create `result` only to return it immediately; collapsing this removes one temporary and simplifies the flow.
♻️ Small cleanup

```diff
-val result =
-    regionStats
-        .groupBy { it.region }
-        .mapNotNull { (region, counts) ->
-            val totalVotes = counts.sumOf { it.count }
-
-            if (totalVotes == 0L) {
-                return@mapNotNull null
-            }
-
-            val witheredVotes =
-                counts
-                    .filter { it.status == BloomingStatus.WITHERED }
-                    .sumOf { it.count }
-
-            val percentage = (witheredVotes.toDouble() / totalVotes) * 100
-
-            if (percentage >= THRESHOLD_PERCENTAGE) {
-                WitheredRegion(
-                    region = region,
-                    witheredPercentage = percentage,
-                    totalVotes = totalVotes.toInt(),
-                    witheredVotes = witheredVotes.toInt(),
-                )
-            } else {
-                null
-            }
-        }
-
-return result
+return regionStats
+    .groupBy { it.region }
+    .mapNotNull { (region, counts) ->
+        val totalVotes = counts.sumOf { it.count }
+        if (totalVotes == 0L) return@mapNotNull null
+
+        val witheredVotes =
+            counts
+                .filter { it.status == BloomingStatus.WITHERED }
+                .sumOf { it.count }
+
+        val percentage = (witheredVotes.toDouble() / totalVotes) * 100
+        if (percentage >= THRESHOLD_PERCENTAGE) {
+            WitheredRegion(
+                region = region,
+                witheredPercentage = percentage,
+                totalVotes = totalVotes.toInt(),
+                witheredVotes = witheredVotes.toInt(),
+            )
+        } else {
+            null
+        }
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pida-core/core-domain/src/main/kotlin/com/pida/notification/withered/WitheredThresholdChecker.kt` around lines 32 - 62, The method builds a temporary `result` variable from `regionStats.groupBy { it.region }...mapNotNull { ... }` only to immediately return it; simplify by returning the pipeline directly instead of assigning to `result`. Update the function in WitheredThresholdChecker (the block that computes WitheredRegion using THRESHOLD_PERCENTAGE, witheredVotes, totalVotes and BloomingStatus.WITHERED) to return the groupBy().mapNotNull() expression directly and remove the `val result =` and trailing `return result` to eliminate the unnecessary temporary.
pida-core/core-api/src/main/kotlin/com/pida/scheduler/BloomedNotificationScheduler.kt (1)
20-20: Add an explicit timezone to `@Scheduled` annotations.
Line 20 uses `@Scheduled` without an explicit `zone` parameter, making execution time dependent on the server timezone. This affects multiple schedulers (BloomedNotificationScheduler, EveningNotificationScheduler, WeekendNotificationScheduler, WitheredNotificationScheduler) and should be addressed consistently. Pinning the timezone avoids accidental delivery-time drift across environments where the JVM/OS timezone may vary.
⏰ Suggested change

```diff
-@Scheduled(cron = "0 0 9 * * *")
+@Scheduled(cron = "0 0 9 * * *", zone = "Asia/Seoul")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pida-core/core-api/src/main/kotlin/com/pida/scheduler/BloomedNotificationScheduler.kt` at line 20, The `@Scheduled` annotation in BloomedNotificationScheduler is missing an explicit timezone which causes runtime scheduling to depend on server/JVM timezone; update the `@Scheduled`(cron = "...") on BloomedNotificationScheduler (and similarly on EveningNotificationScheduler, WeekendNotificationScheduler, WitheredNotificationScheduler) to include a zone parameter (e.g. zone = "UTC" or your desired IANA timezone) so the cron fires consistently across environments, ensuring you modify the `@Scheduled` annotations on the respective classes/methods to include the chosen zone value.
pida-core/core-domain/src/main/kotlin/com/pida/notification/withered/WitheredNotificationEligibilityChecker.kt (1)
17-102: Consider extracting shared eligibility logic to reduce drift.
This flow is nearly identical to `pida-core/core-domain/src/main/kotlin/com/pida/notification/bloomed/BloomedNotificationEligibilityChecker.kt`. A shared template/use-case would reduce maintenance overhead and keep rules consistent across alert types.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pida-core/core-domain/src/main/kotlin/com/pida/notification/withered/WitheredNotificationEligibilityChecker.kt` around lines 17 - 102, The WitheredNotificationEligibilityChecker's findEligibleUsersForRegion duplicates the flow in BloomedNotificationEligibilityChecker; extract the shared eligibility pipeline (fetch active users with location, resolve regions, filter by target region, query notification counts since start of year, and filter zero-count users) into a reusable component or abstract base (e.g., a NotificationEligibilityChecker or EligibilityService) and have WitheredNotificationEligibilityChecker and BloomedNotificationEligibilityChecker delegate to it; locate logic in WitheredNotificationEligibilityChecker.findEligibleUsersForRegion, helper getYearStartDateTime, and collaborators WitheredNotificationUserReader, WitheredRegionResolver, and NotificationStoredRepository to refactor common steps into shared methods or strategy interfaces so only alert-specific pieces (notification type enum and region parameter) remain in the concrete classes.
pida-core/core-api/src/main/kotlin/com/pida/presentation/v1/notification/NotificationTestController.kt (1)
68-136: Consider extracting a shared trigger helper to reduce repetition.
Lines 68-136 repeat the same timing/response boilerplate in each endpoint; a small helper would improve maintainability.
♻️ Optional refactor sketch

```diff
+private fun trigger(message: String, action: () -> Unit): NotificationTriggerResponse {
+    val startTime = LocalDateTime.now()
+    action()
+    return NotificationTriggerResponse(message = message, triggeredAt = startTime)
+}
 ...
-fun triggerWitheredNotification(): NotificationTriggerResponse {
-    val startTime = LocalDateTime.now()
-    witheredNotificationService.sendWitheredNotifications()
-    return NotificationTriggerResponse(
-        message = "Withered notification triggered successfully",
-        triggeredAt = startTime,
-    )
-}
+fun triggerWitheredNotification(): NotificationTriggerResponse =
+    trigger("Withered notification triggered successfully") {
+        witheredNotificationService.sendWitheredNotifications()
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pida-core/core-api/src/main/kotlin/com/pida/presentation/v1/notification/NotificationTestController.kt` around lines 68 - 136, The four endpoint methods triggerWitheredNotification, triggerBloomedNotification, triggerBloomedSpotNotification, and triggerRainForecastNotification duplicate the same startTime -> call service -> return NotificationTriggerResponse pattern; extract a private helper like triggerNotification(action: () -> Unit, message: String): NotificationTriggerResponse (or an overload that accepts a lambda with parameters) that captures LocalDateTime.now(), invokes the provided action (e.g., witheredNotificationService.sendWitheredNotifications, { bloomedNotificationService.sendBloomedNotificationForRegion(region) }, { bloomedSpotNotificationService.sendBloomedSpotNotification(spotId) }, rainForecastNotificationService.sendRainForecastNotifications), and returns the NotificationTriggerResponse, then replace each endpoint body with a single call to triggerNotification passing the appropriate lambda and message.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@pida-core/core-api/src/main/kotlin/com/pida/presentation/v1/notification/NotificationTestController.kt`:
- Around line 61-137: The new POST endpoints in NotificationTestController
(triggerWitheredNotification, triggerBloomedNotification,
triggerBloomedSpotNotification, triggerRainForecastNotification) must be locked
down—add an environment and authorization guard so these can only be invoked in
non-production or by admin accounts: e.g., annotate the controller or each
method with a profile restriction (`@Profile` or conditional on an application
property) to disable them in prod AND enforce role-based access via Spring
Security (`@PreAuthorize`("hasRole('ADMIN')") or an explicit SecurityContext
check), or perform an env check at the start of each method that returns HTTP
403 when in production; include an audit/log entry for denied attempts.
In
`@pida-core/core-api/src/main/kotlin/com/pida/scheduler/EveningNotificationScheduler.kt`:
- Around line 31-42: A failure in
weekdayNotificationScheduler.executeWeekdayNotification(today) currently bubbles
up and prevents the later
rainForecastNotificationService.sendRainForecastNotifications() from running;
wrap the weekday call in its own safe execution wrapper (use runSchedulerSafely
with an appropriate failureMessage and the same logger) or surround it with a
local try/catch that logs the error and continues, so executeInOrder always
proceeds to call rainForecastNotificationService.sendRainForecastNotifications()
regardless of weekdayNotificationScheduler failures.
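The fault-isolation idea in this item can be sketched with plain function parameters standing in for the real scheduler and service beans (all names here are illustrative, not the project's classes):

```kotlin
// Sketch: a weekday-step failure is logged and swallowed so the
// rain-forecast step always runs afterwards.
class EveningOrchestrator(
    private val weekdayStep: () -> Unit,
    private val rainStep: () -> Unit,
    private val logError: (String, Throwable) -> Unit,
) {
    fun executeInOrder() {
        try {
            weekdayStep()
        } catch (e: Exception) {
            // contained: the orchestration continues instead of aborting
            logError("Weekday notification failed; continuing with rain forecast", e)
        }
        rainStep() // runs regardless of the weekday outcome
    }
}
```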
In
`@pida-core/core-api/src/main/kotlin/com/pida/scheduler/NotificationExecutionLock.kt`:
- Line 25: The code uses a fixed LEASE_SECONDS (900L) when calling
RLock.tryLock(waitTime, leaseTime, unit), which disables Redisson's watchdog and
can let the lock expire mid-task; update NotificationExecutionLock to stop
passing a fixed lease: remove the LEASE_SECONDS constant and switch the lock
acquisition call from tryLock(waitTime, LEASE_SECONDS, unit) to the
watchdog-enabled overload tryLock(waitTime, unit) (or the equivalent
RLock.tryLock(long waitTime, TimeUnit unit)), so the Redisson lock watchdog will
auto-renew the lock while the client is active.
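For reference, the two `RLock` overloads differ as described; this is a hedged sketch assuming a configured Redisson `RedissonClient`, with an illustrative lock name and method shape (it is not the project's `NotificationExecutionLock`):

```kotlin
import org.redisson.api.RedissonClient
import java.util.concurrent.TimeUnit

// Sketch: acquire the lock WITHOUT a fixed lease so Redisson's lock watchdog
// keeps renewing it while this node is alive.
// tryLock(waitTime, unit)            -> watchdog auto-renewal stays enabled
// tryLock(waitTime, leaseTime, unit) -> watchdog disabled; lock hard-expires
fun <T> runWithEveningLock(redisson: RedissonClient, action: () -> T): T? {
    val lock = redisson.getLock("notification:evening-lock") // name is illustrative
    if (!lock.tryLock(5, TimeUnit.SECONDS)) return null      // contention: skip this node
    return try {
        action()
    } finally {
        if (lock.isHeldByCurrentThread) lock.unlock()
    }
}
```

Running this requires a live Redis instance behind the `RedissonClient`, so it is shown here only to illustrate the overload difference.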
In
`@pida-core/core-api/src/main/kotlin/com/pida/scheduler/NotificationSchedulerSupport.kt`:
- Around line 19-35: The check/update in executeOncePerDay is not thread-safe:
wrap the guard check, the call to runSchedulerSafely(...) and the update of
lastExecutedDate in a single synchronized block to prevent multiple threads from
passing the guard and executing the action the same day; e.g., synchronize on a
dedicated private lock (or this) around the sequence in executeOncePerDay so
only one thread performs the check+run+update atomically, referencing
lastExecutedDate, executeOncePerDay and runSchedulerSafely.
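A minimal sketch of the atomic check-run-update section, using illustrative names that mirror the review's `executeOncePerDay` and `lastExecutedDate` (no Spring wiring shown):

```kotlin
import java.time.LocalDate

// Sketch: check, run, and update happen inside one synchronized section,
// so two threads can never both pass the guard on the same day.
class OncePerDayExecutor {
    private val lock = Any()
    private var lastExecutedDate: LocalDate? = null

    /** Runs [action] at most once per calendar day; returns true if it ran. */
    fun executeOncePerDay(today: LocalDate, action: () -> Unit): Boolean =
        synchronized(lock) {
            if (lastExecutedDate == today) return false
            action()                 // if this throws, the date stays unmarked
            lastExecutedDate = today
            true
        }
}
```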
In
`@pida-core/core-api/src/main/kotlin/com/pida/scheduler/WeekdayNotificationScheduler.kt`:
- Around line 21-22: The in-memory fields executionCountThisWeek and
lastExecutedWeekKey in WeekdayNotificationScheduler must be backed by shared
persistent storage or derived from persisted notification history so
multi-instance schedulers enforce the "max 2 executions per week" globally;
replace reads/writes of executionCountThisWeek and lastExecutedWeekKey with
atomic operations against a shared store (e.g., Redis counter with a week-scoped
key or a DB query that counts persisted notifications for the current ISO week)
and, before scheduling/executing, load the current week key and count from that
store (or compute from persisted notifications) and perform an atomic
check-and-increment to decide whether to run; update any other places that
reference these variables (including the logic currently around where
executionCountThisWeek and lastExecutedWeekKey are used) to use the new
shared-store API so state is consistent across nodes.
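One way to sketch the week-scoped atomic check-and-increment; the `CounterStore` interface is a stand-in for a Redis `INCR` on a week-keyed counter, and the in-memory implementation exists only to make the sketch self-contained:

```kotlin
import java.time.LocalDate
import java.time.temporal.WeekFields
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.atomic.AtomicLong

// Stand-in for a shared store (e.g., Redis INCR on "weekday-notification:<week>").
interface CounterStore {
    fun incrementAndGet(key: String): Long
}

class InMemoryCounterStore : CounterStore {
    private val counters = ConcurrentHashMap<String, AtomicLong>()
    override fun incrementAndGet(key: String): Long =
        counters.computeIfAbsent(key) { AtomicLong() }.incrementAndGet()
}

// ISO week-scoped key, e.g. "weekday-notification:2024-W23".
fun weekKey(date: LocalDate): String {
    val wf = WeekFields.ISO
    return "weekday-notification:%d-W%02d"
        .format(date.get(wf.weekBasedYear()), date.get(wf.weekOfWeekBasedYear()))
}

// Atomic check-and-increment: at most maxPerWeek runs win per ISO week,
// across all nodes sharing the store.
fun tryAcquireWeeklySlot(store: CounterStore, date: LocalDate, maxPerWeek: Long = 2L): Boolean =
    store.incrementAndGet(weekKey(date)) <= maxPerWeek
```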
In
`@pida-core/core-api/src/main/kotlin/com/pida/scheduler/WeekendNotificationScheduler.kt`:
- Around line 47-57: The issue is that
weekendNotificationService.sendWeekendNotifications() is annotated `@Async` and
returns immediately, so runSchedulerSafely only catches dispatch-time exceptions
and lastExecutedWeekKey = currentWeekKey is set on false success; to fix, make
the scheduler wait for the async work to complete and propagate failures before
marking the week executed: change sendWeekendNotifications to return a
CompletableFuture/Mono (or expose a Future) or remove `@Async`, then inside the
runSchedulerSafely block invoke and block/join on the returned future (or
subscribe and await completion), let any execution exceptions surface to
runSchedulerSafely, and only set lastExecutedWeekKey when executed is true;
reference weekendNotificationService.sendWeekendNotifications(),
runSchedulerSafely, lastExecutedWeekKey and currentWeekKey.
- Line 22: WeekendNotificationScheduler currently uses an instance-local
lastExecutedWeekKey which can't prevent duplicate runs across nodes; inject
NotificationExecutionLock into WeekendNotificationScheduler (same way
EveningNotificationScheduler does), then wrap the weekend notification execution
block (the code guarded by lastExecutedWeekKey) inside
notificationExecutionLock.runWithWeekendLock { ... } (or the equivalent
runWith... method used by EveningNotificationScheduler), keeping the
lastExecutedWeekKey check/update only inside that distributed lock so only one
node performs the send logic.
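The wait-then-mark pattern from the first bullet can be sketched with a future-returning sender; `WeekendRunner` and its members are illustrative stand-ins, not the project's scheduler:

```kotlin
import java.util.concurrent.CompletableFuture

// Sketch: mark the week as executed only after the async send actually completes.
class WeekendRunner(
    private val sendWeekendNotifications: () -> CompletableFuture<Unit>,
) {
    var lastExecutedWeekKey: String? = null
        private set

    fun runOnce(currentWeekKey: String): Boolean {
        if (lastExecutedWeekKey == currentWeekKey) return false // already ran this week
        return try {
            sendWeekendNotifications().join()      // block until async work finishes; failures rethrow
            lastExecutedWeekKey = currentWeekKey   // marked only on real success
            true
        } catch (e: RuntimeException) {
            false                                  // week stays unmarked, so a retry remains possible
        }
    }
}
```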
In
`@pida-core/core-api/src/main/kotlin/com/pida/scheduler/WitheredNotificationScheduler.kt`:
- Line 25: The `@Scheduled` annotation in WitheredNotificationScheduler is missing
a zone, so the 09:00 job varies by JVM default timezone; fix by adding an
explicit zone to the annotation (e.g., `@Scheduled`(cron = "0 0 9 * * *", zone =
"UTC") or use a property placeholder like zone = "${scheduler.timezone:UTC}") on
the scheduled method in the WitheredNotificationScheduler class so the cron
always runs in the intended timezone.
In
`@pida-core/core-domain/src/main/kotlin/com/pida/notification/bloomed/BloomedFirstVoteChecker.kt`:
- Around line 18-23: The current call to
bloomingRepository.countBloomedVotesByRegionAndCreatedAtAfter(region,
createdAtAfter = yearStartDateTime) in BloomedFirstVoteChecker excludes votes
exactly at YYYY-01-01T00:00:00; change the query to be inclusive of the boundary
(e.g., use/create a repository method like
countBloomedVotesByRegionAndCreatedAtGreaterThanOrEqual or
countBloomedVotesByRegionAndCreatedAtAfterOrEqual and pass yearStartDateTime) so
the count includes votes with createdAt == yearStartDateTime before computing
bloomedVoteCount == 1L.
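A tiny self-contained illustration of the boundary difference being described (pure `java.time`, no repository involved):

```kotlin
import java.time.LocalDateTime

// A vote created exactly at Jan 1 00:00:00 is counted only by the
// inclusive comparison (createdAt >= from), not the strict one.
fun countVotesInclusive(votes: List<LocalDateTime>, from: LocalDateTime): Int =
    votes.count { !it.isBefore(from) }   // createdAt >= from

fun countVotesStrictlyAfter(votes: List<LocalDateTime>, from: LocalDateTime): Int =
    votes.count { it.isAfter(from) }     // createdAt > from
```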
In
`@pida-core/core-domain/src/main/kotlin/com/pida/notification/bloomed/BloomedFirstVoteNotificationEventListener.kt`:
- Around line 31-38: The code in BloomedFirstVoteNotificationEventListener
currently resolves userRegion from
userLocationReader.readUserLocationByUserId(event.newBlooming.userId) but should
resolve the region from the voted spot; change the logic to read the
flower-spot's location using the flowerSpot (or spot) location reader for
event.newBlooming.flowerSpotId (e.g., call a method like
readFlowerSpotLocationById or equivalent) and then pass that spot's
GeoJson.Point coordinates into regionResolver.resolveRegion(latitude=...,
longitude=...); replace the userLocationReader/readUserLocationByUserId usage
with the spot-location reader and keep the same null checks/returns if the spot
location or point is missing.
- Around line 24-26: Change the listener to run after the publisher's
transaction commits: replace the `@EventListener`/`@Async` usage on
handleBloomingAddedEvent(event: BloomingAddedEvent) with
`@TransactionalEventListener`(phase = TransactionPhase.AFTER_COMMIT) and retain
async behavior if
needed (e.g., combine with `@Async` or move async execution inside the handler).
Ensure handleBloomingAddedEvent and its BloomingAddedEvent parameter remain
unchanged so the method is invoked only after the outer transaction commits.
In
`@pida-core/core-domain/src/main/kotlin/com/pida/notification/bloomed/BloomedNotificationEligibilityChecker.kt`:
- Around line 27-31: The Bloomed flow currently uses a 7-day activity window and
lacks same-day dedup; update BloomedNotificationEligibilityChecker to use a
30-day active-user window (replace any 7-day constant/logic in isEligible or
filterEligibleUsers) to align with other notifications, and add same-day
exclusion by checking users' last push timestamp/history for today (or reuse the
same-day dedup helper from WeekendNotification) so non-forecast BLOOMED_ALERTs
are filtered out before sending; if preference is to keep eligibility pure,
implement the same-day dedup in BloomedNotificationService just before
sendNotifications to ensure no user receives two pushes on the same day.
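The same-day dedup step could be a small pure filter applied just before sending; `EligibleUser` and its field names are assumptions for illustration:

```kotlin
import java.time.LocalDate
import java.time.LocalDateTime

// Hypothetical shape: an eligible user plus the timestamp of their last push.
data class EligibleUser(val userId: Long, val lastPushAt: LocalDateTime?)

// Drop users whose last push already happened today; users never pushed
// (lastPushAt == null) are kept.
fun excludeAlreadyPushedToday(users: List<EligibleUser>, today: LocalDate): List<EligibleUser> =
    users.filter { it.lastPushAt?.toLocalDate() != today }
```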
In
`@pida-core/core-domain/src/main/kotlin/com/pida/notification/bloomed/BloomedNotificationService.kt`:
- Around line 70-81: The code persists notification history for eligibleUsers
but only sends messages for users with resolvable device tokens; change
storeNotificationRecords to persist only the actual push-targeted users derived
from the built messages (e.g., extract user IDs or recipient identifiers from
the result of buildNotificationMessages or the messages list) instead of passing
eligibleUsers to storeNotificationRecords in BloomedNotificationService; update
both occurrences (the one after fcmSender.sendAllAsync(messages) and the similar
block at lines 112-129) to compute actualRecipients from messages and pass that
to storeNotificationRecords so only successfully-targeted users are recorded.
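Deriving the persisted set from the built messages might look like this sketch; `PushMessage` and both helper functions are illustrative stand-ins, not the project's API:

```kotlin
// Hypothetical message shape carrying its target user.
data class PushMessage(val userId: Long, val token: String, val body: String)

// Build messages only for users with a resolvable device token.
fun buildMessages(eligibleUserIds: List<Long>, tokens: Map<Long, String>, body: String): List<PushMessage> =
    eligibleUserIds.mapNotNull { id -> tokens[id]?.let { PushMessage(id, it, body) } }

// Persist history for exactly the push-targeted users, not all eligible ones.
fun recipientsToRecord(messages: List<PushMessage>): Set<Long> =
    messages.map { it.userId }.toSet()
```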
In
`@pida-core/core-domain/src/main/kotlin/com/pida/notification/bloomed/BloomedNotificationUserReader.kt`:
- Line 25: The ACTIVE_USER_DAYS constant in BloomedNotificationUserReader
(private const val ACTIVE_USER_DAYS = 7L) is set to 7 days but should be 30 days
for non-forecast push; update this constant to 30L and ensure any other
occurrences in the same file (referenced around lines that include the usages in
the reader, e.g., the methods that filter by active user days) are similarly
aligned so the non-forecast notification user-range logic uses 30 days instead
of 7.
In
`@pida-core/core-domain/src/main/kotlin/com/pida/notification/bloomed/BloomedRegionResolver.kt`:
- Line 33: In BloomedRegionResolver (the class containing the logger.warn call),
avoid logging raw PII coordinates; update the warning in the method that
currently logs "No district found for coordinates: ($latitude, $longitude)" to
redact or minimize the location data (for example log a hashed/obfuscated value
or rounded coordinates to a low precision like 2 decimal places, or simply omit
coordinates and log only context), so replace the direct interpolation of
latitude/longitude with a sanitized representation (e.g.,
roundedLatitude/roundedLongitude or "<redacted>" / hash) in the logger.warn
call.
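One possible sanitization, assuming the rounded-coordinate option from the comment (function names here are hypothetical):

```kotlin
import kotlin.math.round

// Sketch of log sanitization: round coordinates to 2 decimal places
// (roughly 1 km) before they reach a log line, so precise user locations
// are never recorded.
fun sanitizeCoordinate(value: Double, decimals: Int = 2): Double {
    var factor = 1.0
    repeat(decimals) { factor *= 10.0 }
    return round(value * factor) / factor
}

fun noDistrictWarning(latitude: Double, longitude: Double): String =
    "No district found near (${sanitizeCoordinate(latitude)}, ${sanitizeCoordinate(longitude)})"
```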
In
`@pida-core/core-domain/src/main/kotlin/com/pida/notification/bloomedspot/BloomedSpotNotificationEventListener.kt`:
- Around line 16-18: Replace the plain `@EventListener` on push-notification
handlers with a transactional event listener that fires after commit: for
BloomedSpotNotificationEventListener change the annotation on
handleBloomingAddedEvent from `@EventListener` to
`@TransactionalEventListener`(phase = TransactionPhase.AFTER_COMMIT) (and keep
`@Async`), and do the same in BloomedFirstVoteNotificationEventListener for its
handler (e.g., handleFirstVoteAddedEvent); also add the necessary import for
org.springframework.transaction.event.TransactionalEventListener and
org.springframework.transaction.event.TransactionPhase so the listener only
executes after the surrounding transaction has committed.
In
`@pida-core/core-domain/src/main/kotlin/com/pida/notification/withered/WitheredNotificationService.kt`:
- Around line 84-101: The code currently calls
storeNotificationRecords(eligibleUsers) which persists history for all
eligibleUsers even if some had no push target; change it to only persist records
for users that actually had a message sent: after
buildNotificationMessages(eligibleUsers) compute the set/list of recipients from
messages (e.g., map message -> userId or filter eligibleUsers where an FCM
target exists) and pass that filtered collection to storeNotificationRecords
instead of eligibleUsers; keep the early return when messages.isEmpty(), and
ensure the same change is applied in the other block mentioned (lines 127-141)
so only actual push targets are recorded.
In
`@pida-core/core-domain/src/main/kotlin/com/pida/notification/withered/WitheredNotificationUserReader.kt`:
- Line 25: The constant ACTIVE_USER_DAYS in WitheredNotificationUserReader is
set to 7L but should match the PR targeting rule of 30 days; update
ACTIVE_USER_DAYS to 30L and adjust any logic that uses it (e.g., the query
construction and filtering code where ACTIVE_USER_DAYS is referenced around
lines 34-37) so the reader selects users active in the last 30 days; ensure any
unit types (Long) and tests/validators using WitheredNotificationUserReader are
updated accordingly.
In
`@pida-core/core-domain/src/main/kotlin/com/pida/notification/withered/WitheredRegionResolver.kt`:
- Line 33: The warning currently logs precise user coordinates in
WitheredRegionResolver (logger.warn("No district found for coordinates:
($latitude, $longitude)")), which is sensitive; change the log to avoid raw
coordinates by replacing them with a privacy-safe representation (e.g.,
redacted/masked coordinates, a coarse geohash/rounded lat/lon, or only the
request/trace id) and include contextual identifiers like district lookup
attempt or regionKey instead; update the message in WitheredRegionResolver to
emit a generic warning (e.g., "No district found for provided location" plus
safe identifier) and ensure any helper used for masking/rounding is called where
latitude/longitude are referenced.
In
`@pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/user/location/UserLocationJpaRepository.kt`:
- Around line 15-35: The location column used by the proximity query in
findWithinRadius (which queries t_user_location using ST_DWithin/ST_Distance)
needs a spatial GIST index to avoid full-table scans; add a GIST index on the
location column either by updating the UserLocationEntity mapping (e.g., add an
`@Index` on the location field or on the entity/table definition) or by adding a
DB migration that runs CREATE INDEX idx_user_location_location ON
t_user_location USING GIST(location); ensure the location column remains
geometry(Point,4326) so the index is valid.
---
Outside diff comments:
In
`@pida-core/core-domain/src/main/kotlin/com/pida/notification/weekday/WeekdayNotificationService.kt`:
- Around line 40-59: The sendWeekdayNotificationsSync method is mistakenly
calling the async variant fcmSender.sendAllAsync(messages) which returns
immediately and breaks synchronous ordering; change the call in
sendWeekdayNotificationsSync to use the synchronous fcmSender.sendAll(messages)
so notification sending completes before storeNotificationRecords(eligibleUsers)
is invoked, ensuring ordering guarantees are preserved.
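The ordering concern can be made concrete with a stand-in pipeline that records call order; the names mirror the real collaborators (`sendAll`, `storeNotificationRecords`) but this class is only an illustration:

```kotlin
// Stand-in pipeline showing the ordering guarantee the comment asks for:
// the synchronous send must complete before history is stored.
class SyncSendPipeline {
    val calls = mutableListOf<String>()

    fun sendAll(messages: List<String>) {
        calls += "send:${messages.size}" // synchronous: returns after delivery
    }

    fun storeNotificationRecords(userIds: List<Long>) {
        calls += "store:${userIds.size}"
    }

    fun sendWeekdayNotificationsSync(messages: List<String>, userIds: List<Long>) {
        sendAll(messages)                 // not sendAllAsync: ordering preserved
        storeNotificationRecords(userIds) // runs only after sending completed
    }
}
```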
---
Minor comments:
In
`@pida-clients/weather-client/src/main/kotlin/com/pida/client/weather/WeatherServiceImpl.kt`:
- Around line 116-124: The current code in WeatherServiceImpl calls
kmaWeatherClient.getVilageForecast inside a try-catch but then accesses
response.response.body.items.item outside of that block, risking an NPE; move
the property access into the try block or add null-safe checks for
response.response, body, items and item (e.g., validate each is non-null) and on
any missing/invalid part log the error via logger.error and throw
ErrorException(ErrorType.WEATHER_API_CALL_FAILED) so all API and parsing
failures are handled consistently (locate via the response variable and
getVilageForecast call in WeatherServiceImpl).
- Around line 42-46: The current sequence that filters POP for tomorrow in
WeatherServiceImpl (the block starting with "return items.asSequence().filter {
it.category == \"POP\" && it.fcstDate == tomorrowDate }...") returns 0 when no
POP entries are found, silently suppressing a missing-data case; change this to
mirror the earlier empty-items handling by detecting the no-results case and
either log a warning with context (include tomorrowDate and items size) and then
throw an exception (e.g., IllegalStateException or a domain-specific
MissingForecastDataException) instead of returning 0 so callers and the
RAIN_FORECAST_ALERT path can distinguish “no data” from a genuine 0% POP. Ensure
the thrown/logged message references tomorrowDate, "POP", and WeatherServiceImpl
to aid debugging.
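A sketch of the fail-fast shape, with a hypothetical `ForecastItem` mirroring the KMA fields named above (`category`, `fcstDate`, `fcstValue`):

```kotlin
// Hypothetical item shape mirroring the KMA short-term forecast fields the
// comment references (category, fcstDate, fcstValue).
data class ForecastItem(val category: String, val fcstDate: String, val fcstValue: String)

// Sketch: treat "no POP rows for tomorrow" as an error instead of silently
// returning 0, so callers can distinguish missing data from a real 0% POP.
fun maxPopForDate(items: List<ForecastItem>, tomorrowDate: String): Int =
    items.asSequence()
        .filter { it.category == "POP" && it.fcstDate == tomorrowDate }
        .mapNotNull { it.fcstValue.toIntOrNull() }
        .maxOrNull()
        ?: throw IllegalStateException(
            "No POP entries for $tomorrowDate among ${items.size} items (WeatherServiceImpl)",
        )
```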
In `@pida-core/core-domain/src/main/kotlin/com/pida/blooming/BloomingFacade.kt`:
- Line 74: Move the call to
eventPublisher.publishEvent(BloomingAddedEvent(newBlooming)) so it runs only
after createUploadUrl(...) completes successfully; locate the BloomingFacade
method where newBlooming is created and createUploadUrl is invoked, remove or
relocate the current publishEvent line that executes before createUploadUrl, and
place it after the successful return of createUploadUrl (or after any uploadUrl
assignment) so listeners are only notified when the upload URL creation for
newBlooming has succeeded.
In
`@pida-core/core-domain/src/main/kotlin/com/pida/user/location/UserLocationReader.kt`:
- Around line 13-17: The readUserLocationsWithinRadius function currently
forwards raw inputs to userLocationRepository.findWithinRadius; add input guards
to validate latitude is between -90 and 90, longitude between -180 and 180, and
radiusMeters is non-negative (and >0 if your semantics require a positive
radius), and throw an IllegalArgumentException with a clear message when any
check fails so the invalid call fails fast before invoking
userLocationRepository.findWithinRadius.
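The guards can be written with `require()`, which throws `IllegalArgumentException` with the supplied message; the function name is illustrative:

```kotlin
// Sketch of the fail-fast guards suggested for readUserLocationsWithinRadius.
fun validateRadiusQuery(latitude: Double, longitude: Double, radiusMeters: Double) {
    require(latitude in -90.0..90.0) { "latitude must be in [-90, 90], was $latitude" }
    require(longitude in -180.0..180.0) { "longitude must be in [-180, 180], was $longitude" }
    require(radiusMeters > 0.0) { "radiusMeters must be positive, was $radiusMeters" }
}
```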
In
`@pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/blooming/BloomingCustomRepository.kt`:
- Around line 129-145: The method countBloomedVotesByRegionAndCreatedAtAfter in
BloomingCustomRepository currently uses an inclusive boundary
(path(BloomingEntity::createdAt).greaterThanOrEqualTo(createdAtAfter) ) which
conflicts with the “After” semantic; change the predicate to a strict comparison
by replacing greaterThanOrEqualTo with greaterThan so createdAt > createdAtAfter
(or alternatively rename the method to indicate inclusivity if you prefer to
keep >=).
---
Nitpick comments:
In
`@pida-core/core-api/src/main/kotlin/com/pida/presentation/v1/notification/NotificationTestController.kt`:
- Around line 68-136: The four endpoint methods triggerWitheredNotification,
triggerBloomedNotification, triggerBloomedSpotNotification, and
triggerRainForecastNotification duplicate the same startTime -> call service ->
return NotificationTriggerResponse pattern; extract a private helper like
triggerNotification(action: () -> Unit, message: String):
NotificationTriggerResponse (or an overload that accepts a lambda with
parameters) that captures LocalDateTime.now(), invokes the provided action
(e.g., witheredNotificationService.sendWitheredNotifications, {
bloomedNotificationService.sendBloomedNotificationForRegion(region) }, {
bloomedSpotNotificationService.sendBloomedSpotNotification(spotId) },
rainForecastNotificationService.sendRainForecastNotifications), and returns the
NotificationTriggerResponse, then replace each endpoint body with a single call
to triggerNotification passing the appropriate lambda and message.
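A sketch of the extracted helper; the response type is assumed to match the controller's existing class:

```kotlin
import java.time.LocalDateTime

// Assumed to mirror the controller's existing response type; the helper
// factors out the shared startTime -> action -> response pattern.
data class NotificationTriggerResponse(val message: String, val triggeredAt: LocalDateTime)

fun triggerNotification(message: String, action: () -> Unit): NotificationTriggerResponse {
    val startTime = LocalDateTime.now()
    action()
    return NotificationTriggerResponse(message = message, triggeredAt = startTime)
}
```

Each endpoint body then collapses to a one-liner such as `triggerNotification("Withered notification triggered successfully") { witheredNotificationService.sendWitheredNotifications() }`.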
In
`@pida-core/core-api/src/main/kotlin/com/pida/scheduler/BloomedNotificationScheduler.kt`:
- Line 20: The `@Scheduled` annotation in BloomedNotificationScheduler is missing
an explicit timezone which causes runtime scheduling to depend on server/JVM
timezone; update the `@Scheduled`(cron = "...") on BloomedNotificationScheduler
(and similarly on EveningNotificationScheduler, WeekendNotificationScheduler,
WitheredNotificationScheduler) to include a zone parameter (e.g. zone = "UTC" or
your desired IANA timezone) so the cron fires consistently across environments,
ensuring you modify the `@Scheduled` annotations on the respective classes/methods
to include the chosen zone value.
In
`@pida-core/core-domain/src/main/kotlin/com/pida/notification/RegionStatusCount.kt`:
- Around line 9-13: RegionStatusCount duplicates the existing
com.pida.blooming.RegionStatusCount; remove this duplicate data class and
consolidate to the canonical model by either (A) deleting this RegionStatusCount
declaration and updating all usages to import
com.pida.blooming.RegionStatusCount, or (B) replace this file with a single
typealias RegionStatusCount = com.pida.blooming.RegionStatusCount so callers in
this package keep the same name; update imports/usages in code that referenced
this file and run a build to fix any unresolved references.
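Option B can be sketched as follows; the stand-in class here only makes the snippet self-contained, whereas in the real code the alias target would be `com.pida.blooming.RegionStatusCount`:

```kotlin
// Stand-in for the canonical com.pida.blooming.RegionStatusCount so the
// sketch compiles on its own; fields are illustrative.
data class CanonicalRegionStatusCount(val region: String, val status: String, val count: Long)

// Option B from the comment: a typealias keeps existing call sites in the
// notification package compiling under the same name.
typealias RegionStatusCount = CanonicalRegionStatusCount
```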
In
`@pida-core/core-domain/src/main/kotlin/com/pida/notification/withered/WitheredNotificationEligibilityChecker.kt`:
- Around line 17-102: The WitheredNotificationEligibilityChecker's
findEligibleUsersForRegion duplicates the flow in
BloomedNotificationEligibilityChecker; extract the shared eligibility pipeline
(fetch active users with location, resolve regions, filter by target region,
query notification counts since start of year, and filter zero-count users) into
a reusable component or abstract base (e.g., a NotificationEligibilityChecker or
EligibilityService) and have WitheredNotificationEligibilityChecker and
BloomedNotificationEligibilityChecker delegate to it; locate logic in
WitheredNotificationEligibilityChecker.findEligibleUsersForRegion, helper
getYearStartDateTime, and collaborators WitheredNotificationUserReader,
WitheredRegionResolver, and NotificationStoredRepository to refactor common
steps into shared methods or strategy interfaces so only alert-specific pieces
(notification type enum and region parameter) remain in the concrete classes.
In
`@pida-core/core-domain/src/main/kotlin/com/pida/notification/withered/WitheredThresholdChecker.kt`:
- Around line 32-62: The method builds a temporary `result` variable from
`regionStats.groupBy { it.region }...mapNotNull { ... }` only to immediately
return it; simplify by returning the pipeline directly instead of assigning to
`result`. Update the function in WitheredThresholdChecker (the block that
computes WitheredRegion using THRESHOLD_PERCENTAGE, witheredVotes, totalVotes
and BloomingStatus.WITHERED) to return the groupBy().mapNotNull() expression
directly and remove the `val result =` and trailing `return result` to eliminate
the unnecessary temporary.
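The suggested shape, with minimal stand-in types (the real models carry more fields, and `30.0` mirrors the `THRESHOLD_PERCENTAGE` constant named above):

```kotlin
// Minimal stand-ins for the real models.
data class RegionStat(val region: String, val status: String)
data class WitheredRegion(val region: String, val percentage: Double)

// Sketch: return the groupBy/mapNotNull pipeline directly rather than
// binding it to a temporary `result` variable.
fun findWitheredRegions(regionStats: List<RegionStat>): List<WitheredRegion> =
    regionStats
        .groupBy { it.region }
        .mapNotNull { (region, stats) ->
            val witheredVotes = stats.count { it.status == "WITHERED" }
            val percentage = witheredVotes * 100.0 / stats.size
            if (percentage >= 30.0) WitheredRegion(region, percentage) else null
        }
```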
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (61)
- pida-clients/weather-client/src/main/kotlin/com/pida/client/weather/WeatherServiceImpl.kt
- pida-core/core-api/build.gradle.kts
- pida-core/core-api/src/main/kotlin/com/pida/presentation/v1/notification/NotificationTestController.kt
- pida-core/core-api/src/main/kotlin/com/pida/scheduler/BloomedNotificationScheduler.kt
- pida-core/core-api/src/main/kotlin/com/pida/scheduler/EveningNotificationScheduler.kt
- pida-core/core-api/src/main/kotlin/com/pida/scheduler/NotificationExecutionLock.kt
- pida-core/core-api/src/main/kotlin/com/pida/scheduler/NotificationSchedulerSupport.kt
- pida-core/core-api/src/main/kotlin/com/pida/scheduler/WeekdayNotificationScheduler.kt
- pida-core/core-api/src/main/kotlin/com/pida/scheduler/WeekendNotificationScheduler.kt
- pida-core/core-api/src/main/kotlin/com/pida/scheduler/WitheredNotificationScheduler.kt
- pida-core/core-domain/src/main/kotlin/com/pida/blooming/BloomingAddedEvent.kt
- pida-core/core-domain/src/main/kotlin/com/pida/blooming/BloomingFacade.kt
- pida-core/core-domain/src/main/kotlin/com/pida/blooming/BloomingFinder.kt
- pida-core/core-domain/src/main/kotlin/com/pida/blooming/BloomingRepository.kt
- pida-core/core-domain/src/main/kotlin/com/pida/blooming/RegionStatusCount.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/EligibleUser.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/EligibleUserWithRegion.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/NotificationStoredRepository.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/NotificationType.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/RegionStatusCount.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/bloomed/BloomedFirstVoteChecker.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/bloomed/BloomedFirstVoteNotificationEventListener.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/bloomed/BloomedNotificationEligibilityChecker.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/bloomed/BloomedNotificationMessageBuilder.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/bloomed/BloomedNotificationService.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/bloomed/BloomedNotificationUserReader.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/bloomed/BloomedRegion.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/bloomed/BloomedRegionResolver.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/bloomed/BloomedThresholdChecker.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/bloomedspot/BloomedSpotNotificationEligibilityChecker.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/bloomedspot/BloomedSpotNotificationEventListener.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/bloomedspot/BloomedSpotNotificationMessageBuilder.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/bloomedspot/BloomedSpotNotificationService.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/rain/RainForecastNotificationEligibilityChecker.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/rain/RainForecastNotificationMessageBuilder.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/rain/RainForecastNotificationService.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/weekday/WeekdayNotificationEligibilityChecker.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/weekday/WeekdayNotificationService.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/weekend/WeekendNotificationEligibilityChecker.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/weekend/WeekendNotificationService.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/weekend/WeekendNotificationUserReader.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/withered/WitheredNotificationEligibilityChecker.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/withered/WitheredNotificationMessageBuilder.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/withered/WitheredNotificationService.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/withered/WitheredNotificationUserReader.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/withered/WitheredRegion.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/withered/WitheredRegionResolver.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/withered/WitheredThresholdChecker.kt
- pida-core/core-domain/src/main/kotlin/com/pida/place/DistrictRepository.kt
- pida-core/core-domain/src/main/kotlin/com/pida/support/geo/Region.kt
- pida-core/core-domain/src/main/kotlin/com/pida/user/location/UserLocationReader.kt
- pida-core/core-domain/src/main/kotlin/com/pida/user/location/UserLocationRepository.kt
- pida-core/core-domain/src/main/kotlin/com/pida/weather/WeatherService.kt
- pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/blooming/BloomingCoreRepository.kt
- pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/blooming/BloomingCustomRepository.kt
- pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/district/DistrictCoreRepository.kt
- pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/district/DistrictCustomRepository.kt
- pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/notification/NotificationStoredCoreRepository.kt
- pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/notification/NotificationStoredCustomRepository.kt
- pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/user/location/UserLocationCoreRepository.kt
- pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/user/location/UserLocationJpaRepository.kt
```kotlin
@PostMapping("/withered-notification")
@Operation(
    summary = "저물었어요 알림 수동 트리거",
    description =
        "WITHERED 상태 푸시 알림을 수동으로 발송합니다. (테스트용)\n\n" +
            "각 지역별 WITHERED 투표 비율을 확인하여 30% 이상인 지역의 사용자들에게 알림을 발송합니다.",
)
fun triggerWitheredNotification(): NotificationTriggerResponse {
    val startTime = LocalDateTime.now()

    witheredNotificationService.sendWitheredNotifications()

    return NotificationTriggerResponse(
        message = "Withered notification triggered successfully",
        triggeredAt = startTime,
    )
}

@PostMapping("/bloomed-notification")
@Operation(
    summary = "만개했어요 알림 수동 트리거",
    description =
        "BLOOMED 상태 푸시 알림을 수동으로 발송합니다. (테스트용)\n\n" +
            "입력한 지역의 사용자들에게 BLOOMED 알림을 수동 발송합니다.",
)
fun triggerBloomedNotification(
    @RequestParam region: Region,
): NotificationTriggerResponse {
    val startTime = LocalDateTime.now()

    bloomedNotificationService.sendBloomedNotificationForRegion(region)

    return NotificationTriggerResponse(
        message = "Bloomed notification triggered successfully",
        triggeredAt = startTime,
    )
}

@PostMapping("/bloomed-spot-notification")
@Operation(
    summary = "벚꽃길 만개 이벤트 알림 수동 트리거",
    description =
        "특정 벚꽃길의 BLOOMED 투표 이벤트 알림을 수동 발송합니다. (테스트용)\n\n" +
            "조건: 반경 3km 내 위치 권한 허용 사용자\n" +
            "제외: 당일 이미 푸시 수신 사용자, 해당 벚꽃길 시즌 내 알림 기수신 사용자",
)
fun triggerBloomedSpotNotification(
    @RequestParam spotId: Long,
): NotificationTriggerResponse {
    val startTime = LocalDateTime.now()

    bloomedSpotNotificationService.sendBloomedSpotNotification(spotId)

    return NotificationTriggerResponse(
        message = "Bloomed spot notification triggered successfully",
        triggeredAt = startTime,
    )
}

@PostMapping("/rain-forecast-notification")
@Operation(
    summary = "비 예보 알림 수동 트리거",
    description =
        "내일 POP(강수확률) 최대값이 60 이상인 위치의 사용자에게 비 예보 알림을 수동 발송합니다. (테스트용)\n\n" +
            "제외 조건: 당일 다른 푸시 수신 사용자, 당주 이미 비 예보 알림 수신 사용자",
)
fun triggerRainForecastNotification(): NotificationTriggerResponse {
    val startTime = LocalDateTime.now()

    rainForecastNotificationService.sendRainForecastNotifications()

    return NotificationTriggerResponse(
        message = "Rain forecast notification triggered successfully",
        triggeredAt = startTime,
    )
}
```
Restrict manual trigger endpoints to non-production/admin-only access.
Lines 61-137 add powerful POST endpoints that can trigger bulk notifications. If exposed in production without strict guardrails, this is abuse-prone and can cause user-impacting spam/cost incidents.
🔒 Suggested hardening

```diff
+import org.springframework.context.annotation.Profile
+import org.springframework.security.access.prepost.PreAuthorize
 ...
 @RestController
 @RequestMapping("/test")
+@Profile("local", "dev", "staging")
 class NotificationTestController(
 ...
+    @PreAuthorize("hasRole('ADMIN')")
     @PostMapping("/withered-notification")
     fun triggerWitheredNotification(): NotificationTriggerResponse { ... }

+    @PreAuthorize("hasRole('ADMIN')")
     @PostMapping("/bloomed-notification")
     fun triggerBloomedNotification(@RequestParam region: Region): NotificationTriggerResponse { ... }

+    @PreAuthorize("hasRole('ADMIN')")
     @PostMapping("/bloomed-spot-notification")
     fun triggerBloomedSpotNotification(@RequestParam spotId: Long): NotificationTriggerResponse { ... }

+    @PreAuthorize("hasRole('ADMIN')")
     @PostMapping("/rain-forecast-notification")
     fun triggerRainForecastNotification(): NotificationTriggerResponse { ... }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@pida-core/core-api/src/main/kotlin/com/pida/presentation/v1/notification/NotificationTestController.kt`
around lines 61 - 137, The new POST endpoints in NotificationTestController
(triggerWitheredNotification, triggerBloomedNotification,
triggerBloomedSpotNotification, triggerRainForecastNotification) must be locked
down—add an environment and authorization guard so these can only be invoked in
non-production or by admin accounts: e.g., annotate the controller or each
method with a profile restriction (`@Profile` or conditional on an application
property) to disable them in prod AND enforce role-based access via Spring
Security (`@PreAuthorize`("hasRole('ADMIN')") or an explicit SecurityContext
check), or perform an env check at the start of each method that returns HTTP
403 when in production; include an audit/log entry for denied attempts.
```kotlin
private val DATE_FORMATTER: DateTimeFormatter = DateTimeFormatter.BASIC_ISO_DATE
private const val LOCK_KEY_PREFIX = "notification:evening"
private const val WAIT_SECONDS = 1L
private const val LEASE_SECONDS = 900L
```
🧩 Analysis chain
🏁 Script executed:
cat -n pida-core/core-api/src/main/kotlin/com/pida/scheduler/NotificationExecutionLock.kt

Repository: Team-PIDA/Pida-Server
Length of output: 2224
🌐 Web query:
Redisson tryLock watchdog auto-renewal fixed lease time behavior
💡 Result:
Redisson tryLock + watchdog vs fixed leaseTime
1) tryLock() / tryLock(waitTime, unit) (no leaseTime provided) ⇒ watchdog auto-renewal
- Redisson sets the lock key TTL to `lockWatchdogTimeout` (default 30,000 ms) and starts the lock watchdog, which keeps extending the TTL while the lock-owning Redisson client is alive.
- The `lockWatchdogTimeout` setting is only used for locks acquired without a `leaseTime`/`leaseTimeout`.
- Sources: Redisson config docs and wiki describe watchdog + default 30s and that it's used only when no lease is specified. [1], [2]
2) tryLock(waitTime, leaseTime, unit) (explicit leaseTime) ⇒ fixed lease, no watchdog renewal
- If you pass a positive `leaseTime`, the lock is set to expire after exactly that duration and will be released automatically when the TTL elapses, even if your code is still running, so another contender may acquire the lock after expiry.
- Sources: Redisson lock docs/wiki and glossary explain that specifying `leaseTime` makes the lock auto-release after that interval. [2], [3]
3) leaseTime = -1 (where supported) ⇒ “hold until explicitly unlocked” (watchdog-style behavior)
- Redisson's lock API documents `leaseTime = -1` as "hold the lock until explicitly unlocked." In practice this is the mode associated with watchdog-based renewal (i.e., not a fixed TTL). [4]
Operational takeaway
- Use no `leaseTime` (watchdog) for unbounded/variable task duration (but ensure your Redisson client stays healthy), and use a fixed `leaseTime` only when you can bound the critical section (otherwise you risk lock expiry and concurrent execution). [1], [2]
Sources
- [1] Redisson Reference Guide – configuration (`lockWatchdogTimeout`, default 30000 ms; only used when lock acquired without `leaseTimeout`).
- [2] Redisson GitHub wiki – watchdog prolongs lock; default 30s; configurable; `leaseTime` causes auto-release after the interval.
- [3] Redisson glossary – examples of `tryLock(wait, lease, unit)` and explanation of lease-based auto-release.
- [4] Redisson `RedissonLock` Javadoc – `leaseTime = -1` means hold until explicitly unlocked.
🌐 Web query:
Redisson RLock.tryLock(waitTime, leaseTime, TimeUnit) auto-release documentation
💡 Result:
RLock.tryLock(long waitTime, long leaseTime, TimeUnit unit) behavior (auto-release):
- If `tryLock(...)` returns `true`, the lock is automatically released after `leaseTime` (unless you unlock earlier). This is explicitly stated in the `RLock`/`RedissonLock` Javadoc ("released automatically after defined `leaseTime` interval"). [1], [2]
- If you want the lock to be held until you explicitly call `unlock()`, use `leaseTime = -1` ("If leaseTime is -1, hold the lock until explicitly unlocked."). [1]
- Separately, Redisson has a lock watchdog mechanism (default timeout 30,000 ms) that extends lock expiration while the owning Redisson client is alive, but it's only used when the lock is acquired without a lease timeout parameter (per Redisson configuration docs). You can change it via `lockWatchdogTimeout`. [3], [4]
- If the client disconnects/crashes, Redisson notes the lock "will be removed automatically if client disconnects." [2]
[1] RLock Javadoc (tryLock/leaseTime, -1 semantics) https://www.javadoc.io/static/org.redisson/redisson/3.15.6/org/redisson/api/RLock.html
[2] RedissonLock Javadoc (tryLock auto-release wording; disconnect note) https://javadoc.io/static/org.redisson/redisson/3.24.2/org/redisson/RedissonLock.html
[3] Redisson configuration reference (lockWatchdogTimeout, and when it’s used) https://redisson.pro/docs/configuration
[4] Redisson “What is a Redis lock?” (watchdog overview + tryLock example) https://redisson.pro/glossary/redis-lock.html
Fixed lease disables watchdog renewal; lock may expire mid-execution.
When tryLock(waitTime, leaseTime, unit) is called with an explicit leaseTime, Redisson automatically releases the lock after that interval and does not activate the lock watchdog. Since LEASE_SECONDS is 900 seconds (15 minutes), if the evening notification task exceeds this duration, the lock will auto-release before completion, allowing another node to acquire it and execute concurrently.
Remove the fixed lease to enable watchdog-based renewal:
🔧 Proposed fix
```diff
-    private const val LEASE_SECONDS = 900L
-
-    val acquired =
-        try {
-            lock.tryLock(WAIT_SECONDS, LEASE_SECONDS, TimeUnit.SECONDS)
+    val acquired =
+        try {
+            lock.tryLock(WAIT_SECONDS, TimeUnit.SECONDS)
```

With this change, Redisson's lock watchdog (default 30-second renewal) will keep the lock held as long as the owning client remains active, eliminating the fixed expiry window.
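The failure mode can be stated as a tiny timing model (the real API is Redisson's `RLock`; this function is only an illustration of the two lease modes):

```kotlin
// With a fixed lease and no watchdog, any task that outlives the lease loses
// the lock mid-execution; with no explicit lease the watchdog keeps renewing
// while the client is alive.
fun lockHeldAtCompletion(
    taskDurationSeconds: Long,
    leaseSeconds: Long?, // null = no explicit lease, watchdog renews
): Boolean = leaseSeconds == null || taskDurationSeconds <= leaseSeconds
```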
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@pida-core/core-api/src/main/kotlin/com/pida/scheduler/NotificationExecutionLock.kt`
at line 25, The code uses a fixed LEASE_SECONDS (900L) when calling
RLock.tryLock(waitTime, leaseTime, unit), which disables Redisson's watchdog and
can let the lock expire mid-task; update NotificationExecutionLock to stop
passing a fixed lease: remove the LEASE_SECONDS constant and switch the lock
acquisition call from tryLock(waitTime, LEASE_SECONDS, unit) to the
watchdog-enabled overload tryLock(waitTime, unit) (or the equivalent
RLock.tryLock(long waitTime, TimeUnit unit)), so the Redisson lock watchdog will
auto-renew the lock while the client is active.
```kotlin
private var lastExecutedDate: LocalDate? = null

protected fun executeOncePerDay(
    logger: Logger,
    failureMessage: String,
    action: () -> Unit,
) {
    val today = LocalDate.now()

    if (lastExecutedDate == today) {
        return
    }

    val executed = runSchedulerSafely(logger, failureMessage, action)
    if (executed) {
        lastExecutedDate = today
    }
}
```
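The guard's behavior can be modeled in isolation; this stand-in class only illustrates the per-JVM, once-per-calendar-day semantics (the Boolean action stands in for `runSchedulerSafely`, and the distributed lock is a separate concern):

```kotlin
import java.time.LocalDate

// Self-contained model of the guard above: the action runs at most once per
// calendar day per instance, and the day is marked only when the run succeeds.
class OncePerDayGuard {
    private var lastExecutedDate: LocalDate? = null

    fun executeOncePerDay(today: LocalDate, action: () -> Boolean) {
        if (lastExecutedDate == today) return
        if (action()) {
            lastExecutedDate = today // marked only when the run succeeded
        }
    }
}
```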
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find all Kotlin files to get type info
echo "=== Searching for executeOncePerDay calls ==="
rg -n -C3 --type=kotlin 'executeOncePerDay\s*\(' 2>/dev/null | head -40
echo ""
echo "=== Searching for scheduler configuration and annotations ==="
rg -n -C3 --type=kotlin '@Scheduled|ThreadPoolTaskScheduler|TaskScheduler|SchedulingConfigurer|setPoolSize|@EnableScheduling' 2>/dev/null | head -60
echo ""
echo "=== Searching for existing locking mechanisms ==="
rg -n -C3 --type=kotlin 'NotificationExecutionLock|withLock|RLock|synchronized|Mutex|AtomicReference' 2>/dev/null | head -40
echo ""
echo "=== Finding the target file to examine class structure ==="
fd 'NotificationSchedulerSupport\.kt' -type f

Repository: Team-PIDA/Pida-Server
Length of output: 9352
🏁 Script executed:
#!/bin/bash
# Get context about the target file itself
echo "=== Target file content ==="
fd 'NotificationSchedulerSupport\.kt' -type f -exec wc -l {} \;
# Find and display the file
fd 'NotificationSchedulerSupport\.kt' -type f -exec cat -n {} \;

Repository: Team-PIDA/Pida-Server
Length of output: 451
🏁 Script executed:
#!/bin/bash
# Find and display the NotificationSchedulerSupport file
echo "=== NotificationSchedulerSupport.kt content ==="
find . -name "NotificationSchedulerSupport.kt" -exec cat -n {} \;
echo ""
echo "=== NotificationExecutionLock.kt content ==="
find . -name "NotificationExecutionLock.kt" -exec cat -n {} \;
echo ""
echo "=== ScheduleConfig.kt content ==="
find . -name "ScheduleConfig.kt" -exec cat -n {} \;

Repository: Team-PIDA/Pida-Server
Length of output: 3815
🏁 Script executed:
#!/bin/bash
# Check if DailyNotificationScheduler is a shared singleton or per-instance
echo "=== DailyNotificationScheduler class and subclasses ==="
find . -name "*NotificationScheduler.kt" | head -10 | xargs grep -l "DailyNotificationScheduler"
echo ""
echo "=== Checking Spring component scopes ==="
find . -name "*NotificationScheduler.kt" | xargs grep -B5 "class.*NotificationScheduler" | grep -E "@Component|@Scope|scope"

Repository: Team-PIDA/Pida-Server
Length of output: 835
🏁 Script executed:
#!/bin/bash
# Check for custom TaskScheduler configuration
echo "=== Searching for custom TaskScheduler beans and pool size configuration ==="
find . -name "*.kt" | xargs grep -l "TaskScheduler\|ThreadPoolTaskScheduler\|setPoolSize\|SchedulingConfigurer" 2>/dev/null
echo ""
echo "=== Check actual Spring `@Scheduled` defaults and configuration ==="
find . -name "*.kt" -o -name "*.yml" -o -name "*.yaml" -o -name "*.properties" | xargs grep -B2 -A2 "spring.task.scheduling\|spring.threads.virtual\|TaskScheduler" 2>/dev/null | head -50
Length of output: 208
🏁 Script executed:
#!/bin/bash
# Check all scheduler classes that inherit from DailyNotificationScheduler
echo "=== All DailyNotificationScheduler subclasses ==="
find . -name "*.kt" -exec grep -l "DailyNotificationScheduler" {} \;
echo ""
echo "=== BloomedNotificationScheduler full content ==="
find . -name "BloomedNotificationScheduler.kt" -exec cat -n {} \;
echo ""
echo "=== WitheredNotificationScheduler full content ==="
find . -name "WitheredNotificationScheduler.kt" -exec cat -n {} \;
Length of output: 2680
🏁 Script executed:
#!/bin/bash
# Final check: Look for any Spring Boot or application.properties config affecting scheduler pool
echo "=== Spring Boot configuration files ==="
find . -name "application*.properties" -o -name "application*.yml" -o -name "application*.yaml" | xargs cat 2>/dev/null | grep -i "scheduler\|task\|thread\|pool" || echo "No scheduler configuration found"
echo ""
echo "=== Check if default Spring Scheduler can be multi-threaded ==="
find . -name "pom.xml" -o -name "build.gradle*" | xargs grep -i "spring-boot" 2>/dev/null | head -5
Length of output: 193
🏁 Script executed:
#!/bin/bash
# Check Spring version to understand default scheduler behavior
echo "=== Checking Spring and Spring Boot versions ==="
find . -name "pom.xml" | xargs grep -A2 -B2 "spring-boot-starter\|spring.version\|spring-boot.version" 2>/dev/null | head -30
echo ""
echo "=== Check build.gradle for Spring version ==="
find . -name "build.gradle*" -exec cat {} \; 2>/dev/null | grep -E "spring|kotlin" | head -20
Length of output: 1175
🏁 Script executed:
#!/bin/bash
# Check version catalog file
echo "=== Checking version catalog ==="
find . -name "libs.versions.toml" -o -name "gradle.properties" | xargs cat 2>/dev/null | grep -i "spring\|kotlin" | head -30
Length of output: 1865
Synchronize executeOncePerDay to prevent same-day duplicate execution.
Lines 28–35 check and update lastExecutedDate without synchronization. If the @EnableScheduling task scheduler runs with more than one thread (the pool size is configurable in Spring Boot 3.4.3 via spring.task.scheduling.pool.size), multiple threads can pass the guard check before the first execution updates lastExecutedDate, causing the action to run more than once per day despite the documented intent.
🔧 Proposed fix
abstract class DailyNotificationScheduler {
private var lastExecutedDate: LocalDate? = null
+ private val executionLock = Any()
protected fun executeOncePerDay(
logger: Logger,
failureMessage: String,
action: () -> Unit,
) {
val today = LocalDate.now()
-
- if (lastExecutedDate == today) {
- return
- }
-
- val executed = runSchedulerSafely(logger, failureMessage, action)
- if (executed) {
- lastExecutedDate = today
+ synchronized(executionLock) {
+ if (lastExecutedDate == today) {
+ return
+ }
+ val executed = runSchedulerSafely(logger, failureMessage, action)
+ if (executed) {
+ lastExecutedDate = today
+ }
}
}
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
private var lastExecutedDate: LocalDate? = null
private val executionLock = Any()

protected fun executeOncePerDay(
    logger: Logger,
    failureMessage: String,
    action: () -> Unit,
) {
    val today = LocalDate.now()

    synchronized(executionLock) {
        if (lastExecutedDate == today) {
            return
        }
        val executed = runSchedulerSafely(logger, failureMessage, action)
        if (executed) {
            lastExecutedDate = today
        }
    }
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@pida-core/core-api/src/main/kotlin/com/pida/scheduler/NotificationSchedulerSupport.kt`
around lines 19 - 35, The check/update in executeOncePerDay is not thread-safe:
wrap the guard check, the call to runSchedulerSafely(...) and the update of
lastExecutedDate in a single synchronized block to prevent multiple threads from
passing the guard and executing the action the same day; e.g., synchronize on a
dedicated private lock (or this) around the sequence in executeOncePerDay so
only one thread performs the check+run+update atomically, referencing
lastExecutedDate, executeOncePerDay and runSchedulerSafely.
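Beyond the synchronized block suggested above, the same guard can be expressed lock-free. The sketch below is illustrative, not the project's code — `OncePerDayGuard` and `tryClaim` are hypothetical names. Note the tradeoff: the day is claimed before the action runs, so a failed action is not retried the same day, unlike the suggestion, which records the date only on success.

```kotlin
import java.time.LocalDate
import java.util.concurrent.atomic.AtomicReference

// Hypothetical sketch: claim the day atomically with compareAndSet so that
// at most one thread wins the right to run the action per calendar day.
class OncePerDayGuard {
    private val lastExecutedDate = AtomicReference<LocalDate?>(null)

    // Returns true for exactly one caller per distinct `today` value.
    fun tryClaim(today: LocalDate = LocalDate.now()): Boolean {
        val previous = lastExecutedDate.get()
        if (previous == today) return false
        return lastExecutedDate.compareAndSet(previous, today)
    }
}
```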
private var executionCountThisWeek = 0
private var lastExecutedWeek: Int? = null
private var lastExecutedWeekKey: Int? = null
Persist weekly execution state in shared storage.
Line 21 and Line 22 keep weekly run state in-memory only. In multi-instance scheduling, node-to-node lock acquisition across days can reset effective state and violate the intended “max 2 executions per week” behavior globally. Move this counter/week-key to shared storage (e.g., Redis/DB) or derive it from persisted notification history before deciding execution.
Also applies to: 45-47
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@pida-core/core-api/src/main/kotlin/com/pida/scheduler/WeekdayNotificationScheduler.kt`
around lines 21 - 22, The in-memory fields executionCountThisWeek and
lastExecutedWeekKey in WeekdayNotificationScheduler must be backed by shared
persistent storage or derived from persisted notification history so
multi-instance schedulers enforce the "max 2 executions per week" globally;
replace reads/writes of executionCountThisWeek and lastExecutedWeekKey with
atomic operations against a shared store (e.g., Redis counter with a week-scoped
key or a DB query that counts persisted notifications for the current ISO week)
and, before scheduling/executing, load the current week key and count from that
store (or compute from persisted notifications) and perform an atomic
check-and-increment to decide whether to run; update any other places that
reference these variables (including the logic currently around where
executionCountThisWeek and lastExecutedWeekKey are used) to use the new
shared-store API so state is consistent across nodes.
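The week-scoped key and atomic check-and-increment described above can be sketched as follows. The key format, the `WeeklyExecutionCounter` abstraction, and the function names are assumptions for illustration, not the project's API; in production the counter would be a Redis INCR (or a Redisson RAtomicLong) on the week key, with an expiry, rather than the in-memory stand-in used in the test.

```kotlin
import java.time.LocalDate
import java.time.temporal.WeekFields

// Hypothetical key format: one counter per ISO week, shared across nodes.
fun weekScopedKey(date: LocalDate): String {
    val weekFields = WeekFields.ISO
    val year = date.get(weekFields.weekBasedYear())
    val week = date.get(weekFields.weekOfWeekBasedYear())
    return "notification:weekday:%d-W%02d".format(year, week)
}

// Abstraction over the shared store; a Redis-backed implementation would
// issue an atomic INCR so concurrent nodes cannot both pass the limit.
interface WeeklyExecutionCounter {
    fun incrementAndGet(key: String): Long
}

// Atomic check-and-increment: execution is allowed only while the
// week's counter has not yet exceeded the limit.
fun shouldExecute(
    counter: WeeklyExecutionCounter,
    date: LocalDate,
    maxPerWeek: Long = 2,
): Boolean = counter.incrementAndGet(weekScopedKey(date)) <= maxPerWeek
```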
@Async
@EventListener
fun handleBloomingAddedEvent(event: BloomingAddedEvent) {
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find BloomingAddedEvent publication points
rg -n 'publishEvent.*BloomingAddedEvent|BloomingAddedEvent' --type=kt -C3
Length of output: 91
🏁 Script executed:
#!/bin/bash
# Look for event publishing patterns in the codebase
rg -n 'publishEvent|applicationEventPublisher\.publish' --type=kt -C3
Length of output: 91
🏁 Script executed:
#!/bin/bash
# Check for transactional context markers
rg -n '@Transactional|@Transaction' --type=kt -C2
Length of output: 91
🏁 Script executed:
#!/bin/bash
# Find all Kotlin files and search for BloomingAddedEvent
fd --extension=kt --type=f | head -20
Length of output: 1779
🏁 Script executed:
#!/bin/bash
# Search for BloomingAddedEvent publication across all files
rg -n 'publishEvent.*BloomingAddedEvent|BloomingAddedEvent' -C3
Length of output: 5283
🏁 Script executed:
#!/bin/bash
# Search for `@Transactional` annotations
rg -n '@Transactional' -C2
Length of output: 5026
🏁 Script executed:
#!/bin/bash
# Look for the file mentioned in the review
find . -name "BloomedSpotNotificationEventListener.kt" -exec cat {} \;
Length of output: 834
🏁 Script executed:
#!/bin/bash
# Read BloomingFacade to see if uploadBloomingStatus is transactional
cat -n pida-core/core-domain/src/main/kotlin/com/pida/blooming/BloomingFacade.kt | sed -n '60,85p'
Length of output: 973
🏁 Script executed:
#!/bin/bash
# Look for `@Transactional` on uploadBloomingStatus
rg -n 'uploadBloomingStatus|fun uploadBloomingStatus' -B5 pida-core/core-domain/src/main/kotlin/com/pida/blooming/BloomingFacade.kt
Length of output: 286
🏁 Script executed:
#!/bin/bash
# Check if BloomingService.add is transactional
rg -n 'class BloomingService|fun add' pida-core/core-domain/src/main/kotlin/com/pida/blooming/ -A3
Length of output: 2307
🏁 Script executed:
#!/bin/bash
# Find where uploadBloomingStatus is called
rg -n 'uploadBloomingStatus' -B3 -A3
Length of output: 1874
🏁 Script executed:
#!/bin/bash
# Check BloomingFacade class definition and method signatures
cat -n pida-core/core-domain/src/main/kotlin/com/pida/blooming/BloomingFacade.kt | head -80
Length of output: 3902
🏁 Script executed:
#!/bin/bash
# Check BloomingAppender.add method for `@Transactional`
cat -n pida-core/core-domain/src/main/kotlin/com/pida/blooming/BloomingAppender.kt
Length of output: 393
🏁 Script executed:
#!/bin/bash
# Check BloomingRepository implementation to see if add() is transactional
rg -n 'class.*BloomingRepository|fun add' pida-core/core-domain/src/main/kotlin/com/pida/blooming/ -A5
Length of output: 2421
🏁 Script executed:
#!/bin/bash
# Check if there's a decorator or transaction handler at the repository layer
find pida-storage -name "*BloomingRepository*" -exec cat {} \;
Length of output: 47
🏁 Script executed:
#!/bin/bash
# Check the controller to see transaction context
cat -n pida-core/core-api/src/main/kotlin/com/pida/presentation/v1/blooming/BloomingController.kt | head -50
Length of output: 2494
🏁 Script executed:
#!/bin/bash
# Search for BloomingRepository implementation in pida-storage
find pida-storage -type f -name "*.kt" | xargs grep -l "BloomingRepository"
Length of output: 161
🏁 Script executed:
#!/bin/bash
# Check what transaction management exists at the repository implementation level
rg -n 'class.*BloomingRepository' pida-storage -A10
Length of output: 47
🏁 Script executed:
#!/bin/bash
# Read the BloomingCoreRepository implementation
cat -n pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/blooming/BloomingCoreRepository.kt
Length of output: 3630
🏁 Script executed:
#!/bin/bash
# Find all EventListener usages in notification package
rg -n '@EventListener' pida-core/core-domain/src/main/kotlin/com/pida/notification/ -B2
Length of output: 825
Use @TransactionalEventListener(AFTER_COMMIT) for push notification handlers.
Both BloomedSpotNotificationEventListener and BloomedFirstVoteNotificationEventListener use plain @EventListener, which risks dispatch before transaction commit if the call chain is wrapped with @Transactional. This is a reliability hazard—notifications must be guaranteed to send only after dependent writes are committed.
Apply this fix to both listeners:
Suggested fix
+import org.springframework.transaction.event.TransactionPhase
+import org.springframework.transaction.event.TransactionalEventListener
...
  @Async
- @EventListener
+ @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
  fun handleBloomingAddedEvent(event: BloomingAddedEvent) {
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
@Async
@TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
fun handleBloomingAddedEvent(event: BloomingAddedEvent) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@pida-core/core-domain/src/main/kotlin/com/pida/notification/bloomedspot/BloomedSpotNotificationEventListener.kt`
around lines 16 - 18, Replace the plain `@EventListener` on push-notification
handlers with a transactional event listener that fires after commit: for
BloomedSpotNotificationEventListener change the annotation on
handleBloomingAddedEvent from `@EventListener` to
`@TransactionalEventListener`(phase = TransactionPhase.AFTER_COMMIT) (and keep
`@Async`), and do the same in BloomedFirstVoteNotificationEventListener for its
handler (e.g., handleFirstVoteAddedEvent); also add the necessary import for
org.springframework.transaction.event.TransactionalEventListener and
org.springframework.transaction.event.TransactionPhase so the listener only
executes after the surrounding transaction has committed.
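The semantics the fix relies on can be sketched without Spring: `@TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)` in effect buffers events published during a transaction and dispatches them to listeners only once the transaction commits, dropping them on rollback. The toy model below uses hypothetical names and is not Spring's implementation:

```kotlin
// Minimal model of after-commit event dispatch: events published inside a
// "transaction" are buffered and only reach listeners when commit() is called.
class AfterCommitPublisher {
    private val buffered = mutableListOf<Any>()
    private val listeners = mutableListOf<(Any) -> Unit>()

    fun subscribe(listener: (Any) -> Unit) {
        listeners.add(listener)
    }

    // Called while the transaction is open: the event is held, not dispatched.
    fun publish(event: Any) {
        buffered.add(event)
    }

    // Commit flushes buffered events to every listener, in publish order.
    fun commit() {
        buffered.forEach { event -> listeners.forEach { it(event) } }
        buffered.clear()
    }

    // Rollback discards buffered events; listeners never see them.
    fun rollback() {
        buffered.clear()
    }
}
```

This is why a plain `@EventListener` is risky for push notifications: it corresponds to dispatching at `publish` time, before the dependent writes are durable.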
val messages = buildNotificationMessages(eligibleUsers)

if (messages.isEmpty()) {
    logger.info("Region $region: No FCM messages to send (no valid FCM tokens)")
    return 0
}

logger.info("Region $region: ${messages.size} FCM messages prepared")

// 3. FCM 발송
fcmSender.sendAllAsync(messages)

// 4. 알림 이력 저장
storeNotificationRecords(eligibleUsers)

logger.info("Region $region: Notifications sent successfully to ${messages.size} users")

return messages.size
Store history only for users who had a push target.
storeNotificationRecords(eligibleUsers) persists notifications for users even when no FCM target exists for them. This can incorrectly exclude them from future sends.
🔧 Proposed fix
- val messages = buildNotificationMessages(eligibleUsers)
+ val userMessages = buildNotificationMessages(eligibleUsers)
- if (messages.isEmpty()) {
+ if (userMessages.isEmpty()) {
logger.info("Region $region: No FCM messages to send (no valid FCM tokens)")
return 0
}
- logger.info("Region $region: ${messages.size} FCM messages prepared")
+ logger.info("Region $region: ${userMessages.size} FCM messages prepared")
// 3. FCM 발송
- fcmSender.sendAllAsync(messages)
+ fcmSender.sendAllAsync(userMessages.values.toSet())
// 4. 알림 이력 저장
- storeNotificationRecords(eligibleUsers)
+ storeNotificationRecords(userMessages.keys.toList())
- logger.info("Region $region: Notifications sent successfully to ${messages.size} users")
+ logger.info("Region $region: Notifications sent successfully to ${userMessages.size} users")
- return messages.size
+ return userMessages.size
@@
- private fun buildNotificationMessages(users: List<EligibleUserWithRegion>): Set<NewFirebaseCloudMessage> =
+ private fun buildNotificationMessages(users: List<EligibleUserWithRegion>): Map<Long, NewFirebaseCloudMessage> =
users
.mapNotNull { user ->
userDeviceReader.readLastByUserId(user.userId)?.let { device ->
- witheredNotificationMessageBuilder.buildMessage(device.fcmToken)
+ user.userId to witheredNotificationMessageBuilder.buildMessage(device.fcmToken)
}
- }.toSet()
+ }.toMap()
@@
- private fun storeNotificationRecords(users: List<EligibleUserWithRegion>) {
+ private fun storeNotificationRecords(userIds: List<Long>) {
val commands =
- users.map { user ->
+ userIds.distinct().map { userId ->
CreateNotificationStoredCommand(
notificationStoredKey = "", // 서비스에서 재생성됨
- userId = user.userId,
+ userId = userId,
type = NotificationType.WITHERED_ALERT,
Also applies to: 127-141
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@pida-core/core-domain/src/main/kotlin/com/pida/notification/withered/WitheredNotificationService.kt`
around lines 84 - 101, The code currently calls
storeNotificationRecords(eligibleUsers) which persists history for all
eligibleUsers even if some had no push target; change it to only persist records
for users that actually had a message sent: after
buildNotificationMessages(eligibleUsers) compute the set/list of recipients from
messages (e.g., map message -> userId or filter eligibleUsers where an FCM
target exists) and pass that filtered collection to storeNotificationRecords
instead of eligibleUsers; keep the early return when messages.isEmpty(), and
ensure the same change is applied in the other block mentioned (lines 127-141)
so only actual push targets are recorded.
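The core of the proposed fix is a shape change: build a userId-to-message map (dropping users with no resolvable token), send the values, and persist history for the keys only. The sketch below uses simplified stand-in types (`Message`, `buildUserMessages`) rather than the project's classes:

```kotlin
// Simplified stand-in for NewFirebaseCloudMessage.
data class Message(val token: String)

// Build messages keyed by userId; users without an FCM token are dropped,
// so the map's keys are exactly the users eligible for history records.
fun buildUserMessages(
    userIds: List<Long>,
    tokenByUserId: Map<Long, String>,
): Map<Long, Message> =
    userIds
        .mapNotNull { userId -> tokenByUserId[userId]?.let { userId to Message(it) } }
        .toMap()
```

Sending would then use `messages.values` and history storage `messages.keys`, which can never disagree.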
private val logger by logger()

companion object {
    private const val ACTIVE_USER_DAYS = 7L
Active-user window is inconsistent with the PR targeting rule.
For non-forecast notifications, the objective specifies users active in the last 30 days, but this reader uses 7 days (Line 25), which under-targets recipients.
🔧 Proposed fix
- private const val ACTIVE_USER_DAYS = 7L
+ private const val ACTIVE_USER_DAYS = 30L
Also applies to: 34-37
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@pida-core/core-domain/src/main/kotlin/com/pida/notification/withered/WitheredNotificationUserReader.kt`
at line 25, The constant ACTIVE_USER_DAYS in WitheredNotificationUserReader is
set to 7L but should match the PR targeting rule of 30 days; update
ACTIVE_USER_DAYS to 30L and adjust any logic that uses it (e.g., the query
construction and filtering code where ACTIVE_USER_DAYS is referenced around
lines 34-37) so the reader selects users active in the last 30 days; ensure any
unit types (Long) and tests/validators using WitheredNotificationUserReader are
updated accordingly.
val district = districtRepository.findNearestDistrict(latitude, longitude)

if (district == null) {
    logger.warn("No district found for coordinates: ($latitude, $longitude)")
Do not log raw coordinates at WARN level.
Line 33 writes precise user location to logs, which is sensitive personal data and increases privacy/compliance risk.
🔒 Privacy-safe logging
- logger.warn("No district found for coordinates: ($latitude, $longitude)")
+ logger.warn("No district found for provided coordinates")
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
logger.warn("No district found for provided coordinates")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@pida-core/core-domain/src/main/kotlin/com/pida/notification/withered/WitheredRegionResolver.kt`
at line 33, The warning currently logs precise user coordinates in
WitheredRegionResolver (logger.warn("No district found for coordinates:
($latitude, $longitude)")), which is sensitive; change the log to avoid raw
coordinates by replacing them with a privacy-safe representation (e.g.,
redacted/masked coordinates, a coarse geohash/rounded lat/lon, or only the
request/trace id) and include contextual identifiers like district lookup
attempt or regionKey instead; update the message in WitheredRegionResolver to
emit a generic warning (e.g., "No district found for provided location" plus
safe identifier) and ensure any helper used for masking/rounding is called where
latitude/longitude are referenced.
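If the log still needs a coarse location rather than a fully generic message, one option is rounding before logging. The helper below is a sketch; the name and the one-decimal precision (roughly 11 km at the equator) are assumptions, not project conventions:

```kotlin
import kotlin.math.round

// Hypothetical helper: round a coordinate to `decimals` places so logs carry
// only a coarse location, not the user's precise position.
fun maskCoordinate(value: Double, decimals: Int = 1): Double {
    var factor = 1.0
    repeat(decimals) { factor *= 10.0 }
    return round(value * factor) / factor
}
```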
.../db-core/src/main/kotlin/com/pida/storage/db/core/user/location/UserLocationJpaRepository.kt
…lock and batch device lookup
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
pida-core/core-domain/src/main/kotlin/com/pida/notification/weekday/WeekdayNotificationService.kt (1)
50-59: ⚠️ Potential issue | 🟠 Major
Persist weekday history only for actual push targets.
Line 59 stores history for all eligibleUsers, but Lines 67-75 only build messages for users with a resolvable device. This can mark non-targeted users as already notified.
🔧 Suggested fix
- val messages = buildNotificationMessages(eligibleUsers)
+ val userMessages = buildNotificationMessages(eligibleUsers)
- if (messages.isEmpty()) {
+ if (userMessages.isEmpty()) {
      return
  }
  // Step 3: FCM 발송
- fcmSender.sendAllAsync(messages)
+ fcmSender.sendAllAsync(userMessages.values.toSet())
  // Step 4: 알림 이력 저장
- storeNotificationRecords(eligibleUsers)
+ storeNotificationRecords(userMessages.keys.toList())
@@
- private fun buildNotificationMessages(users: List<EligibleUser>): Set<NewFirebaseCloudMessage> =
+ private fun buildNotificationMessages(users: List<EligibleUser>): Map<Long, NewFirebaseCloudMessage> =
      userDeviceReader.readLastByUserIds(users.map { it.userId }).let { latestDevicesByUserId ->
          users
              .mapNotNull { user ->
                  latestDevicesByUserId[user.userId]?.let { device ->
-                     weekdayNotificationMessageBuilder.buildMessage(device.fcmToken)
+                     user.userId to weekdayNotificationMessageBuilder.buildMessage(device.fcmToken)
                  }
-             }.toSet()
+             }.toMap()
      }
- private fun storeNotificationRecords(users: List<EligibleUser>) {
+ private fun storeNotificationRecords(userIds: List<Long>) {
      val commands =
-         users.map { user ->
+         userIds.distinct().map { userId ->
              CreateNotificationStoredCommand(
@@
-                 userId = user.userId,
+                 userId = userId,
Also applies to: 67-75, 77-91
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pida-core/core-domain/src/main/kotlin/com/pida/notification/weekday/WeekdayNotificationService.kt` around lines 50 - 59, The code currently calls storeNotificationRecords(eligibleUsers) which persists history for all eligibleUsers even though buildNotificationMessages(eligibleUsers) and the message-send via fcmSender.sendAllAsync(messages) only target users with resolvable devices; change the flow to compute the actual push targets from buildNotificationMessages (or its internal mapping) and pass only that filtered list to storeNotificationRecords so only users who received (or were attempted to receive) a push are recorded; locate and update calls around buildNotificationMessages, fcmSender.sendAllAsync, and storeNotificationRecords to use the messages-derived user list (or return both messages and target users from buildNotificationMessages) rather than eligibleUsers.
♻️ Duplicate comments (3)
pida-core/core-api/src/main/kotlin/com/pida/scheduler/EveningNotificationScheduler.kt (1)
47-58: ⚠️ Potential issue | 🟠 Major
Guard weekday scheduler call so rain forecast always gets a chance to run.
Line 48 can throw and abort executeInOrder, preventing Lines 53-58 from running.
🔧 Proposed fix
  if (isWeekday(today.dayOfWeek)) {
-     weekdayNotificationScheduler.executeWeekdayNotification(today)
+     runSchedulerSafely(
+         logger = logger,
+         failureMessage = "Failed to execute weekday notification scheduler",
+     ) {
+         weekdayNotificationScheduler.executeWeekdayNotification(today)
+     }
  } else {
      logger.info("Skipping weekday notification flow on weekend: ${today.dayOfWeek}")
  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pida-core/core-api/src/main/kotlin/com/pida/scheduler/EveningNotificationScheduler.kt` around lines 47 - 58, The call to weekdayNotificationScheduler.executeWeekdayNotification in EveningNotificationScheduler can throw and abort the surrounding executeInOrder flow, preventing rainForecastNotificationService.sendRainForecastNotifications from running; wrap the weekday scheduler invocation in a safe-guard (e.g., try/catch or runCatching) so any exception is caught and logged (use the existing logger) and does not propagate, or call it inside runSchedulerSafely similar to how runSchedulerSafely is used for rainForecastNotificationService.sendRainForecastNotifications; reference: EveningNotificationScheduler, isWeekday(...), weekdayNotificationScheduler.executeWeekdayNotification, runSchedulerSafely, rainForecastNotificationService.sendRainForecastNotifications.
pida-core/core-api/src/main/kotlin/com/pida/scheduler/NotificationExecutionLock.kt (1)
25-55: ⚠️ Potential issue | 🟠 Major
Use watchdog-enabled lock acquisition instead of fixed leaseTime.
Line 54 uses tryLock(wait, lease, unit) with LEASE_SECONDS (Line 25). This can auto-release the lock mid-execution and allow duplicate evening runs on another node.
🔧 Proposed fix
- private const val LEASE_SECONDS = 3600L
@@
- lock.tryLock(WAIT_SECONDS, LEASE_SECONDS, TimeUnit.SECONDS)
+ lock.tryLock(WAIT_SECONDS, TimeUnit.SECONDS)
#!/bin/bash
# Verify fixed-lease lock usages in scheduler lock wrappers
rg -nP 'tryLock\s*\(\s*[^,]+,\s*[^,]+,\s*TimeUnit\.[A-Z]+\s*\)' pida-core/core-api/src/main/kotlin/com/pida/scheduler/NotificationExecutionLock.kt
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pida-core/core-api/src/main/kotlin/com/pida/scheduler/NotificationExecutionLock.kt` around lines 25 - 55, The current runWithEveningLock uses lock.tryLock(WAIT_SECONDS, LEASE_SECONDS, TimeUnit.SECONDS) which sets a fixed lease and can auto-release mid-action; change to the watchdog-enabled overload lock.tryLock(WAIT_SECONDS, TimeUnit.SECONDS) so Redisson's watchdog will auto-extend the lock for the duration of the process; update the call in runWithEveningLock, remove/stop using LEASE_SECONDS for this lock (and delete the constant if no longer used elsewhere), and keep existing unlock/error handling around the same lock variable to ensure proper release and logging (symbols: runWithEveningLock, tryLock, LEASE_SECONDS, redissonClient.getLock, ExecutionResult).
pida-core/core-domain/src/main/kotlin/com/pida/notification/bloomed/BloomedNotificationService.kt (1)
70-81: ⚠️ Potential issue | 🟠 Major
Record BLOOMED history only for users who actually receive a push.
Line 79 persists eligibleUsers, while Lines 95-106 only create messages for users with a device token. This can falsely exclude users from later notifications.
🔧 Suggested fix
- val messages = buildNotificationMessages(eligibleUsers, region)
+ val userMessages = buildNotificationMessages(eligibleUsers, region)
- if (messages.isEmpty()) {
+ if (userMessages.isEmpty()) {
      return 0
  }
  // 3. FCM 발송
- fcmSender.sendAllAsync(messages)
+ fcmSender.sendAllAsync(userMessages.values.toSet())
  // 4. 알림 이력 저장
- storeNotificationRecords(eligibleUsers, region)
+ storeNotificationRecords(userMessages.keys.toList(), region)
- return messages.size
+ return userMessages.size
@@
-     ): Set<NewFirebaseCloudMessage> =
+     ): Map<Long, NewFirebaseCloudMessage> =
      userDeviceReader.readLastByUserIds(users.map { it.userId }).let { latestDevicesByUserId ->
          users
              .mapNotNull { user ->
                  latestDevicesByUserId[user.userId]?.let { device ->
-                     bloomedNotificationMessageBuilder.buildMessage(device.fcmToken, region)
+                     user.userId to bloomedNotificationMessageBuilder.buildMessage(device.fcmToken, region)
                  }
-             }.toSet()
+             }.toMap()
      }
@@
-     users: List<EligibleUserWithRegion>,
+     userIds: List<Long>,
      region: Region,
  ) {
      val commands =
-         users.map { user ->
+         userIds.distinct().map { userId ->
              CreateNotificationStoredCommand(
@@
-             userId = user.userId,
+             userId = userId,
Also applies to: 95-106, 114-129
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pida-core/core-domain/src/main/kotlin/com/pida/notification/bloomed/BloomedNotificationService.kt` around lines 70 - 81, storeNotificationRecords is currently called with eligibleUsers but messages are only built for users with device tokens, causing history to be saved for users who did not receive pushes; update BloomedNotificationService to persist records only for users actually included in messages (e.g., pass the built messages or filter eligibleUsers to those with a device token before calling storeNotificationRecords). Locate buildNotificationMessages, fcmSender.sendAllAsync(messages) and storeNotificationRecords and change the record-saving call to use the same filtered set used to construct/send messages (this also applies to the other places around the message-building blocks at the commented ranges).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/district/DistrictCustomRepository.kt`:
- Around line 21-24: The findNearestDistrict(latitude: Double, longitude:
Double) function accepts coordinates without validation; add guard clauses at
the start of DistrictCustomRepository.findNearestDistrict to ensure latitude is
within -90..90 and longitude within -180..180 and fail fast (e.g., throw
IllegalArgumentException with a clear message) or return null per project
convention; ensure the checks run before any DB/query logic so invalid inputs
cannot produce incorrect results.
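The guard clauses requested for findNearestDistrict amount to a fail-fast range check. A minimal sketch, assuming the fail-fast convention (the function name mirrors the review's intent but is illustrative):

```kotlin
// Hypothetical guard: reject out-of-range coordinates before any query runs.
// Kotlin's require() throws IllegalArgumentException with the given message.
fun requireValidCoordinates(latitude: Double, longitude: Double) {
    require(latitude in -90.0..90.0) { "latitude must be within [-90, 90]: $latitude" }
    require(longitude in -180.0..180.0) { "longitude must be within [-180, 180]: $longitude" }
}
```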
---
Outside diff comments:
In
`@pida-core/core-domain/src/main/kotlin/com/pida/notification/weekday/WeekdayNotificationService.kt`:
- Around line 50-59: The code currently calls
storeNotificationRecords(eligibleUsers) which persists history for all
eligibleUsers even though buildNotificationMessages(eligibleUsers) and the
message-send via fcmSender.sendAllAsync(messages) only target users with
resolvable devices; change the flow to compute the actual push targets from
buildNotificationMessages (or its internal mapping) and pass only that filtered
list to storeNotificationRecords so only users who received (or were attempted
to receive) a push are recorded; locate and update calls around
buildNotificationMessages, fcmSender.sendAllAsync, and storeNotificationRecords
to use the messages-derived user list (or return both messages and target users
from buildNotificationMessages) rather than eligibleUsers.
---
Duplicate comments:
In
`@pida-core/core-api/src/main/kotlin/com/pida/scheduler/EveningNotificationScheduler.kt`:
- Around line 47-58: The call to
weekdayNotificationScheduler.executeWeekdayNotification in
EveningNotificationScheduler can throw and abort the surrounding executeInOrder
flow, preventing rainForecastNotificationService.sendRainForecastNotifications
from running; wrap the weekday scheduler invocation in a safe-guard (e.g.,
try/catch or runCatching) so any exception is caught and logged (use the
existing logger) and does not propagate, or call it inside runSchedulerSafely
similar to how runSchedulerSafely is used for
rainForecastNotificationService.sendRainForecastNotifications; reference:
EveningNotificationScheduler, isWeekday(...),
weekdayNotificationScheduler.executeWeekdayNotification, runSchedulerSafely,
rainForecastNotificationService.sendRainForecastNotifications.
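The isolation pattern this prompt asks for — each step wrapped so one failure cannot abort the steps after it — can be sketched generically. The function and step names below are illustrative, not the project's API:

```kotlin
// Run each named step inside runCatching so an exception in one step is
// reported via onFailure but never prevents later steps from executing.
fun executeInOrder(
    steps: List<Pair<String, () -> Unit>>,
    onFailure: (String, Throwable) -> Unit,
) {
    steps.forEach { (name, step) ->
        runCatching(step).onFailure { onFailure(name, it) }
    }
}
```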
In
`@pida-core/core-api/src/main/kotlin/com/pida/scheduler/NotificationExecutionLock.kt`:
- Around lines 25-55: runWithEveningLock currently calls
lock.tryLock(WAIT_SECONDS, LEASE_SECONDS, TimeUnit.SECONDS), which sets a fixed
lease and can auto-release the lock mid-action. Switch to the watchdog-enabled
overload lock.tryLock(WAIT_SECONDS, TimeUnit.SECONDS) so Redisson's watchdog
auto-extends the lock for the duration of the process. Update the call in
runWithEveningLock, stop using LEASE_SECONDS for this lock (and delete the
constant if it is no longer used elsewhere), and keep the existing unlock/error
handling around the same lock variable to ensure proper release and logging
(symbols: runWithEveningLock, tryLock, LEASE_SECONDS, redissonClient.getLock,
ExecutionResult).
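The watchdog behavior itself is Redisson-specific (tryLock(wait, unit) with no lease argument arms the watchdog; tryLock(wait, lease, unit) does not), but the acquire/finally-release shape around it can be sketched with a JDK ReentrantLock as a stand-in for the Redisson RLock:

```kotlin
import java.util.concurrent.TimeUnit
import java.util.concurrent.locks.ReentrantLock

// Stand-in sketch: in the real code the lock would come from
// redissonClient.getLock(...), and tryLock(WAIT_SECONDS, TimeUnit.SECONDS)
// (no lease argument) lets Redisson's watchdog keep extending the hold
// while this process is alive.
fun <T> runWithEveningLock(lock: ReentrantLock, waitSeconds: Long, action: () -> T): T? {
    if (!lock.tryLock(waitSeconds, TimeUnit.SECONDS)) {
        return null // another instance holds the lock; skip this run
    }
    return try {
        action()
    } finally {
        lock.unlock() // always release, even if action throws
    }
}
```

The try/finally around action() is what keeps the existing unlock/error handling intact once the lease argument is dropped.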
In
`@pida-core/core-domain/src/main/kotlin/com/pida/notification/bloomed/BloomedNotificationService.kt`:
- Around lines 70-81: storeNotificationRecords is currently called with
eligibleUsers, but messages are only built for users with device tokens, so
history is saved for users who never received a push. Update
BloomedNotificationService to persist records only for users actually included
in messages (e.g., pass the built messages, or filter eligibleUsers down to
those with a device token before calling storeNotificationRecords). Locate
buildNotificationMessages, fcmSender.sendAllAsync(messages), and
storeNotificationRecords, and change the record-saving call to use the same
filtered set used to construct and send the messages (this also applies to the
other message-building blocks at the commented ranges).
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)
- pida-core/core-api/src/main/kotlin/com/pida/scheduler/EveningNotificationScheduler.kt
- pida-core/core-api/src/main/kotlin/com/pida/scheduler/NotificationExecutionLock.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/bloomed/BloomedNotificationService.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/bloomedspot/BloomedSpotNotificationService.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/rain/RainForecastNotificationService.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/weekday/WeekdayNotificationService.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/weekend/WeekendNotificationService.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/withered/WitheredNotificationService.kt
- pida-core/core-domain/src/main/kotlin/com/pida/user/device/UserDeviceReader.kt
- pida-core/core-domain/src/main/kotlin/com/pida/user/device/UserDeviceRepository.kt
- pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/district/DistrictCustomRepository.kt
- pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/user/device/UserDeviceCoreRepository.kt
- pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/user/device/UserDeviceJpaRepository.kt
🚧 Files skipped from review as they are similar to previous changes (3)
- pida-core/core-domain/src/main/kotlin/com/pida/notification/rain/RainForecastNotificationService.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/withered/WitheredNotificationService.kt
- pida-core/core-domain/src/main/kotlin/com/pida/notification/weekend/WeekendNotificationService.kt
🌱 Related issue
📌 Work details and notes
Compared to develop, this extends the notification domain, schedulers, and event-driven push features.
- Extended NotificationType: WITHERED_ALERT, BLOOMED_ALERT, BLOOMED_SPOT_ALERT, RAIN_FORECAST_ALERT
- Withered alert (WITHERED) when the regional ratio is 30% or higher
- Bloomed alert (BLOOMED) when the regional ratio is 80% or higher
- Rain forecast alert (RAIN_FORECAST_ALERT)
- Radius-based notification (BLOOMED_SPOT_ALERT) when a BLOOMED vote event occurs
- Introduced EveningNotificationScheduler (daily at 18:00)
- Introduced NotificationExecutionLock (Redisson distributed lock) to prevent concurrent-execution races
- Added aggregation methods to NotificationStoredRepository (excluding a type / all for the current day / by type + parameters)
- Added a radius query (ST_DWithin) to UserLocationRepository
📝 Notes
📌 Checklist
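For reference, the ST_DWithin radius lookup added to UserLocationRepository filters rows whose geographic distance from a center point is within N meters. An in-memory equivalent of that predicate can be sketched with the haversine formula (a spherical-Earth approximation; PostGIS geography uses a spheroid, so results differ slightly, and this function is purely illustrative, not part of the PR):

```kotlin
import kotlin.math.asin
import kotlin.math.cos
import kotlin.math.pow
import kotlin.math.sin
import kotlin.math.sqrt

// In-memory equivalent of ST_DWithin(point, center, radiusMeters):
// great-circle (haversine) distance on a spherical Earth, in meters.
fun withinRadius(
    lat1: Double, lon1: Double,
    lat2: Double, lon2: Double,
    radiusMeters: Double,
): Boolean {
    val earthRadiusMeters = 6_371_000.0
    val dLat = Math.toRadians(lat2 - lat1)
    val dLon = Math.toRadians(lon2 - lon1)
    val a = sin(dLat / 2).pow(2) +
        cos(Math.toRadians(lat1)) * cos(Math.toRadians(lat2)) * sin(dLon / 2).pow(2)
    val distance = 2 * earthRadiusMeters * asin(sqrt(a))
    return distance <= radiusMeters
}
```

Doing this filter in the database (as the PR does) avoids loading all locations into memory and lets PostGIS use a spatial index.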
Summary by CodeRabbit
New Features
Testing
Chores