[Feature] Airkorea external client, Weekend&Weekday push notification #86
📝 Walkthrough

Adds an Air Korea Feign client and service to fetch air quality, integrates air-quality checks into new weekend/weekday notification pipelines, introduces schedulers and eligibility/filtering components, extends repositories for batch/spatial queries, and updates configuration and the build to include the airquality client.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Scheduler as WeekendScheduler
    participant UserReader as WeekendNotificationUserReader
    participant LocChecker as WeekendNotificationLocationChecker
    participant AQChecker as WeekendNotificationAirQualityChecker
    participant Builder as WeekendNotificationMessageBuilder
    participant Fcm as FcmSender
    participant Store as NotificationService

    Scheduler->>UserReader: findActiveUsersWithLocation()
    UserReader-->>Scheduler: List<EligibleUser>

    loop per user
        Scheduler->>LocChecker: hasNearbyBloomingSpots(lat,lng)
        LocChecker-->>Scheduler: true/false
        alt true
            Scheduler->>AQChecker: hasGoodAirQuality(lat,lng)
            AQChecker-->>Scheduler: true/false
            alt true
                Scheduler->>Builder: buildMessage(fcmToken)
                Builder-->>Scheduler: NewFirebaseCloudMessage
            end
        end
    end

    Scheduler->>Fcm: sendAllAsync(messages)
    Fcm-->>Scheduler: results
    Scheduler->>Store: appendAll(notificationRecords)
    Store-->>Scheduler: ok
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 13
🤖 Fix all issues with AI agents
In `pida-clients/airquality-client/src/main/kotlin/com/pida/client/airquality/AirKoreaClient.kt`:
- Around line 44-47: The catch block currently logs any Exception and always
throws a new ErrorException, which double-logs and loses original stack context;
update the handler to follow the getNearbyStation pattern: if (e is
ErrorException) rethrow e, otherwise log the error with logger.error("Failed to
fetch air quality data", e) and then throw
ErrorException(ErrorType.AIR_QUALITY_API_CALL_FAILED). Keep references to
ErrorException, ErrorType.AIR_QUALITY_API_CALL_FAILED, logger.error and mirror
the getNearbyStation approach so existing ErrorException instances propagate
unchanged.
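As a sketch of the pattern the prompt describes: rethrow domain errors unchanged, log unexpected ones once, then wrap. `ErrorType`, `ErrorException`, and `fetchAirQuality` below are hypothetical stand-ins for the real client types, not the project's actual code.

```kotlin
// Hypothetical stand-ins mirroring the names in the prompt above.
enum class ErrorType { AIR_QUALITY_API_CALL_FAILED }

class ErrorException(val errorType: ErrorType) : RuntimeException(errorType.name)

fun fetchAirQuality(call: () -> String): String =
    try {
        call()
    } catch (e: Exception) {
        // Domain errors propagate unchanged, preserving the original stack context.
        if (e is ErrorException) throw e
        // Log exactly once (logger.error in the real client), then wrap.
        System.err.println("Failed to fetch air quality data: ${e.message}")
        throw ErrorException(ErrorType.AIR_QUALITY_API_CALL_FAILED)
    }
```

This avoids the double-log-and-rewrap problem: an `ErrorException` thrown deeper in the call chain reaches the caller as the same instance.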
In `pida-clients/airquality-client/src/main/kotlin/com/pida/client/airquality/AirQualityServiceImpl.kt`:
- Around line 48-53: The current mapping in AirQualityServiceImpl returns
AirQuality with pm10 and pm25 defaulting to 0 which masks missing/unparseable
data; update the mapping to preserve nulls instead: change the AirQuality domain
model fields pm10 and pm25 to nullable Int? and set them from
item.pm10Value?.toIntOrNull() and item.pm25Value?.toIntOrNull() (keep
parseDataTime and stationName as-is), then update downstream consumers such as
WeekendNotificationAirQualityChecker to explicitly handle nulls (e.g., treat
null as "data unavailable" or fail eligibility) or alternatively throw a
descriptive exception when values are missing if you prefer failing fast.
- Around line 77-138: The convertWgs84ToTm function declares latitudeOfOrigin
but never uses it, causing a systematic northing error; either remove
latitudeOfOrigin to eliminate confusion or update the meridional arc calculation
(m) to compute relative to the origin by substituting lat with (lat -
Math.toRadians(latitudeOfOrigin)) where m is computed; locate the m computation
inside convertWgs84ToTm and apply the offset (or delete the latitudeOfOrigin
declaration and any related comment) so the code and intent are consistent.
In `pida-core/core-api/src/main/kotlin/com/pida/config/SecurityConfig.kt`:
- Line 139: The /test/** endpoint is currently whitelisted in SecurityConfig via
permitAll() which can expose test notification endpoints in production; restrict
these endpoints by adding `@Profile`({"dev","local"}) to the
NotificationTestController class (referencing NotificationTestController, `@Tag`,
`@RestController`, `@RequestMapping`("/test")) so the controller is only active in
non-prod profiles, or alternatively remove "/test/**" from the permitAll() list
in SecurityConfig and create a separate security config class annotated with
`@Profile`({"dev","local"}) that explicitly permits only the test endpoints;
implement one of these two fixes to ensure test endpoints are not reachable in
production.
In `pida-core/core-api/src/main/kotlin/com/pida/presentation/v1/notification/NotificationTestController.kt`:
- Around line 12-18: The NotificationTestController currently exposes
unauthenticated test endpoints; restrict them for non-production by adding a
profile restriction or enforcing admin auth. Locate the
NotificationTestController class and either annotate it with a profile exclusion
like `@Profile`("!prod") to disable the whole controller in production or add an
authentication/authorization requirement (e.g., require ADMIN role) to its
endpoints; alternatively remove/disable the "/test/**" allowance from your
security config (or add rate limiting) so weekendNotificationService and
weekdayNotificationService cannot be invoked unauthenticated in prod.
In `pida-core/core-api/src/main/kotlin/com/pida/scheduler/WeekdayNotificationScheduler.kt`:
- Around line 101-104: The current Friday branch in WeekdayNotificationScheduler
calls executeIfEligible(day) immediately which can produce back-to-back
notifications when it becomes the second weekly execution; change the behavior
so the second notification is not sent synchronously on the same tick: either
schedule the second run with a short delay (create a helper like
scheduleDelayedExecution(day, delayMillis) and invoke it instead of
executeIfEligible when executionCountThisWeek == MAX_EXECUTIONS_PER_WEEK - 1 and
day == DayOfWeek.FRIDAY) or skip the second notification entirely for that week
(add a guard around executeIfEligible to require a minimum interval since
lastNotificationTimestamp). Update references to MAX_EXECUTIONS_PER_WEEK and
executionCountThisWeek accordingly and ensure the new scheduling/guard logic
lives with WeekdayNotificationScheduler so tests for executeIfEligible behavior
reflect the change.
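The "minimum interval since lastNotificationTimestamp" guard suggested above can be sketched as a small self-contained class; `NotificationIntervalGuard` and `tryMarkSent` are illustrative names, not the scheduler's actual API.

```kotlin
import java.time.Duration
import java.time.Instant

// Illustrative sketch: refuse a send if the previous one was too recent.
class NotificationIntervalGuard(private val minInterval: Duration) {
    private var lastNotificationTimestamp: Instant? = null

    /** Returns true (and records the send) only if enough time has passed. */
    fun tryMarkSent(now: Instant): Boolean {
        val last = lastNotificationTimestamp
        if (last != null && Duration.between(last, now) < minInterval) return false
        lastNotificationTimestamp = now
        return true
    }
}
```

Placing the guard inside `executeIfEligible` would prevent the Friday tick from firing back-to-back with an earlier send in the same week.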
- Around line 22-23: WeekdayNotificationScheduler currently uses mutable fields
executionCountThisWeek and lastExecutedWeek without synchronization, which is
unsafe for concurrent scheduled runs; make these fields thread-safe by replacing
executionCountThisWeek with an AtomicInteger (or AtomicLong) and
lastExecutedWeek with an AtomicInteger/AtomicReference<Integer>, and update
usages in methods that read/modify them (e.g., increment, compare-and-set) to
use atomic operations; alternatively, for multi-instance deployments persist
these values to a shared store (DB/Redis) and read/update them transactionally
instead of relying on in-memory fields.
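One way to make the weekly counter atomic, as the prompt suggests: hold the (week, count) pair in a single `AtomicReference` so the week rollover and the increment happen in one compare-and-set, with no torn reads between two separate atomics. `WeeklyExecutionCounter` is an illustrative sketch, not the scheduler's real fields, and it only solves the single-instance case; multi-instance deployments still need a shared store.

```kotlin
import java.util.concurrent.atomic.AtomicReference

// Sketch: weekKey should combine year and week (e.g. year * 100 + week).
class WeeklyExecutionCounter(private val maxPerWeek: Int) {
    // Single atomic cell holding (weekKey, countThisWeek).
    private val state = AtomicReference(-1 to 0)

    /** Atomically claims one execution slot for the given week; false if exhausted. */
    fun tryAcquire(weekKey: Int): Boolean {
        while (true) {
            val cur = state.get()
            // A different weekKey means a new week: the count starts from zero.
            val count = if (cur.first == weekKey) cur.second else 0
            if (count >= maxPerWeek) return false
            if (state.compareAndSet(cur, weekKey to count + 1)) return true
        }
    }
}
```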
In `pida-core/core-api/src/main/kotlin/com/pida/scheduler/WeekendNotificationScheduler.kt`:
- Around line 23-24: The in-memory variable lastExecutedWeek in
WeekendNotificationScheduler is unsafe for restarts and multi-instance
deployments; replace it with a persisted or distributed flag by reading/writing
a durable store (e.g., a DB table row or a Redis key) inside the scheduled job
so the scheduler atomically checks whether the week notification has already
been sent before sending and then updates the persisted value; update
WeekendNotificationScheduler to use a repository/service (e.g.,
WeekNotificationStateRepository or DistributedLockService) to load the last
executed week at job start, perform an atomic compare-and-set or transactional
update when sending notifications, and fall back to acquiring a distributed lock
if needed to prevent duplicate sends across instances.
- Around line 47-63: The current executeIfNotThisWeek uses independent coin
flips per day and ALIGNED_WEEK_OF_YEAR for dedupe, causing a 75% weekly
execution probability and year-boundary collisions; change to pick a single
deterministic day-per-week and persist the week with year. Specifically, replace
the per-day Random.nextBoolean logic in executeIfNotThisWeek with a
deterministic selection using the week+year seed (e.g., hash of
LocalDate.now().get(ALIGNED_WEEK_OF_YEAR) plus year) to choose one DayOfWeek,
only call weekendNotificationService.sendWeekendNotifications when the current
day equals that chosen day, and update lastExecutedWeek to include the year
(store a YearWeek or LocalDate of the execution) so comparisons are unique
across years.
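The deterministic day-per-week selection described above can be sketched in a few lines; the helper names are illustrative, and this keeps the source's `ALIGNED_WEEK_OF_YEAR` convention (note that field is aligned to January 1, not ISO weeks, so a Saturday and Sunday near a block boundary can land in different aligned weeks).

```kotlin
import java.time.DayOfWeek
import java.time.LocalDate
import java.time.temporal.ChronoField

// Deterministic "one weekend day per week", seeded by year + week, so every
// instance and every tick in the same week computes the same answer.
fun chosenWeekendDay(date: LocalDate): DayOfWeek {
    val week = date.get(ChronoField.ALIGNED_WEEK_OF_YEAR)
    val seed = date.year * 100 + week // unique across year boundaries
    return if (seed % 2 == 0) DayOfWeek.SATURDAY else DayOfWeek.SUNDAY
}

fun shouldSendToday(today: LocalDate): Boolean =
    today.dayOfWeek == chosenWeekendDay(today)
```

Unlike two independent coin flips (which fire ~75% of weeks and can fire twice), this sends on exactly one weekend day per week.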
In `pida-core/core-api/src/main/resources/airkorea.yml`:
- Around line 1-3: Add the missing airkorea.api.base-url property with the
correct default endpoint used by AirKoreaApi (the Feign client). Update the YAML
to include airkorea.api.base-url:
${AIR_KOREA_API_BASE_URL:http://apis.data.go.kr} so the Feign `@FeignClient` can
read the default base URL while still allowing override via
AIR_KOREA_API_BASE_URL; ensure the key name matches exactly
"airkorea.api.base-url" referenced by AirKoreaApi.
In `pida-core/core-domain/src/main/kotlin/com/pida/notification/weekend/WeekendNotificationService.kt`:
- Around line 82-97: The current storeNotificationRecords function persists
notification entries immediately (notificationService.appendAll in
storeNotificationRecords) before FCM delivery completes, causing records for
messages that may fail to deliver; modify the flow so records are only
created/updated on successful FCM callbacks (or add and use a delivery status
field on CreateNotificationStoredCommand and update it from the FCM callback).
Concretely: remove or change the immediate notificationService.appendAll call in
storeNotificationRecords, instead create records after sendAllAsync succeeds for
each recipient or insert with a PENDING status and update to SENT/FAILED in the
FCM send callback handler (reference sendAllAsync, storeNotificationRecords,
CreateNotificationStoredCommand, and notificationService.appendAll/update
methods).
In `pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/flowerspot/FlowerSpotJpaRepository.kt`:
- Around line 55-76: The query in findWithinRadius uses pin_point::geography
which prevents index use; fix by adding a spatial index or persisted geography
column: either create a GiST index on the geometry column (pin_point) so
ST_DWithin/ST_Distance can use it, or add a persisted geography column (e.g.
pin_point_geog) populated from pin_point and create a GiST index on that
geography column, then update findWithinRadius to reference the indexed column
(pin_point or pin_point_geog) instead of casting at runtime; update
FlowerSpotEntity to include the persisted geography property if chosen and
ensure migrations create the column and index.
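A migration for either option might look like the following sketch; it assumes PostGIS and a table named `flower_spot` (the actual table name is not shown in the comment and must be checked against `FlowerSpotEntity`):

```sql
-- Option A: expression GiST index on the cast, so the existing
-- pin_point::geography predicates in findWithinRadius become indexable.
CREATE INDEX idx_flower_spot_pin_point_geog
    ON flower_spot USING GIST ((pin_point::geography));

-- Option B: a persisted geography column (pin_point_geog, per the comment)
-- kept in sync automatically, plus a GiST index on it.
ALTER TABLE flower_spot
    ADD COLUMN pin_point_geog geography
    GENERATED ALWAYS AS (pin_point::geography) STORED;
CREATE INDEX idx_flower_spot_pin_point_geog_col
    ON flower_spot USING GIST (pin_point_geog);
```

Option A needs no entity change; Option B requires the query (and `FlowerSpotEntity`, if mapped) to reference `pin_point_geog` instead of casting at runtime.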
In `pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/user/location/UserLocationJpaRepository.kt`:
- Around line 9-12: The generated IN(...) SQL can break when userIds is empty;
update the repository to guard against empty collections by adding a
short-circuit check around findByUserIdIn: implement a default method on
UserLocationJpaRepository (or change callers) that returns emptyList()
immediately when userIds.isEmpty(), otherwise delegates to the existing
findByUserIdIn(List<Long>) query; reference the repository interface name
UserLocationJpaRepository and the method findByUserIdIn to locate where to add
this guard.
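The guard itself is a one-line default method. The sketch below is self-contained, so `UserLocation` and `UserLocationLookup` are simplified stand-ins for the real entity and Spring Data repository interface:

```kotlin
// Stand-in for the real entity.
data class UserLocation(val userId: Long, val latitude: Double, val longitude: Double)

// Stand-in for the Spring Data repository interface.
interface UserLocationLookup {
    fun findByUserIdIn(userIds: List<Long>): List<UserLocation>

    // Default method short-circuits before the derived query would render
    // an invalid "IN ()" clause for an empty collection.
    fun findByUserIdInSafe(userIds: List<Long>): List<UserLocation> =
        if (userIds.isEmpty()) emptyList() else findByUserIdIn(userIds)
}
```

Callers switch to `findByUserIdInSafe`, and the derived-query method stays untouched.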
🧹 Nitpick comments (19)
pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/auth/AuthenticationHistoryCustomRepository.kt (2)
21-33: Consider pagination or batching for large user bases.

For push notification systems with many active users, this query could return a very large result set. If the user base grows significantly, consider:
- Adding pagination support with limit/offset
- Using a streaming/cursor-based approach
- Batching the results
For now, this implementation is acceptable if the expected result set size is manageable.
♻️ Optional: Add pagination support
```diff
- fun findActiveUserIdsSince(sinceDate: LocalDateTime): List<Long> {
+ fun findActiveUserIdsSince(sinceDate: LocalDateTime, limit: Int? = null, offset: Int? = null): List<Long> {
      val query = jpql {
          selectDistinct(path(AuthenticationHistoryEntity::userId))
              .from(entity(AuthenticationHistoryEntity::class))
              .whereAnd(
                  path(AuthenticationHistoryEntity::updatedAt).greaterThanOrEqualTo(sinceDate),
                  path(AuthenticationHistoryEntity::entityStatus).eq(AuthenticationEntityStatus.ACTIVE),
              )
      }
-     return entityManager.createQuery(query, jdslRenderContext).resultList
+     return entityManager.createQuery(query, jdslRenderContext).apply {
+         limit?.let { maxResults = it }
+         offset?.let { firstResult = it }
+     }.resultList
  }
```
26-29: Add a composite index on `updatedAt` and `entityStatus` for query optimization.

The query filters on both the `updatedAt` and `entityStatus` columns, but no indexes are defined on the `AuthenticationHistoryEntity` class. Consider adding a composite index via JPA (an `@Index` annotation on the `@Table`) or through a database migration to optimize this query as the authentication history table grows.

pida-core/core-domain/src/main/kotlin/com/pida/notification/weekend/WeekendNotificationMessageBuilder.kt (1)
24-29: Guard empty content and avoid repeated splitting.

Line 27 uses `MESSAGE_CONTENT.lines()[0]`, which will throw if the content is ever edited to empty, and `lines()` is computed twice. Consider pre-splitting with safe defaults.

♻️ Proposed refactor

```diff
- fun buildMessage(fcmToken: String): NewFirebaseCloudMessage =
-     NewFirebaseCloudMessage(
-         fcmToken = fcmToken,
-         title = MESSAGE_CONTENT.lines()[0], // first line as the title
-         body = MESSAGE_CONTENT.lines().drop(1).joinToString("\n"), // the rest as the body
-         destination = DESTINATION,
-     )
+ fun buildMessage(fcmToken: String): NewFirebaseCloudMessage {
+     val lines = MESSAGE_CONTENT.lines()
+     val title = lines.firstOrNull().orEmpty()
+     val body = if (lines.size > 1) lines.drop(1).joinToString("\n") else ""
+     return NewFirebaseCloudMessage(
+         fcmToken = fcmToken,
+         title = title,
+         body = body,
+         destination = DESTINATION,
+     )
+ }
```

pida-core/core-api/src/main/kotlin/com/pida/scheduler/WeatherScheduler.kt (1)
8-13: Documentation inconsistency with class rename.

The class was renamed from `WeatherTestScheduler` to `WeatherScheduler`, suggesting production use, but the KDoc (line 9) still describes it as "날씨 API 테스트를 위한 스케줄러" (a scheduler for weather API testing). Consider updating the documentation to reflect the intended purpose, or keep the "Test" suffix if this remains a test utility.

pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/notification/NotificationStoredCustomRepository.kt (1)
29-35: Consider making `UserNotificationCount` private.

This data class is only used internally to map JPQL query results to the returned `Map<Long, Long>`. Making it `private` would clarify that it's an implementation detail and not part of the repository's public API.

♻️ Suggested change

```diff
- /**
-  * Per-user notification count query result DTO
-  */
- data class UserNotificationCount(
+ /**
+  * Per-user notification count query result DTO (internal use)
+  */
+ private data class UserNotificationCount(
      val userId: Long,
      val count: Long,
  )
```

pida-core/core-api/src/main/kotlin/com/pida/scheduler/WeekdayNotificationScheduler.kt (1)
70-76: Year-boundary edge case with week tracking.

`ALIGNED_WEEK_OF_YEAR` can produce the same week number at year boundaries (e.g., late December and early January may both show week 1 or week 52/53). This could cause unexpected counter resets. Consider combining year and week for uniqueness:

Proposed fix

```diff
- val currentWeek = LocalDate.now().get(ChronoField.ALIGNED_WEEK_OF_YEAR)
+ val today = LocalDate.now()
+ val currentWeek = today.year * 100 + today.get(ChronoField.ALIGNED_WEEK_OF_YEAR)
```

pida-clients/airquality-client/src/main/kotlin/com/pida/client/airquality/AirKoreaFeignConfig.kt (1)
6-8: Consider specifying `basePackages` for `@EnableFeignClients`.

Without explicit `basePackages`, Feign will scan from the package of this configuration class. While this works, explicitly specifying the package improves clarity and prevents unintended client discovery if the package structure changes.

♻️ Suggested improvement

```diff
 @Configuration
-@EnableFeignClients
+@EnableFeignClients(basePackages = ["com.pida.client.airquality"])
 internal class AirKoreaFeignConfig
```

pida-core/core-api/src/main/kotlin/com/pida/presentation/v1/notification/NotificationTestController.kt (2)
24-33: Consider adding error handling and response enrichment.

The notification service calls are synchronous and could take significant time for large user bases. If an exception occurs mid-way, users receive a generic 500 error with no insight into partial progress.
Consider:
- Wrapping in try-catch to return a meaningful error response
- Adding a count of notifications sent in the response
- Making this async with a job ID for status polling (for longer-running operations)
51-54: Consider moving the response class to a shared location or top-level.
`NotificationTriggerResponse` is nested inside the controller. If other test endpoints need similar responses, consider extracting it to a shared DTO package.

pida-core/core-domain/src/main/kotlin/com/pida/notification/weekend/WeekendNotificationAirQualityChecker.kt (1)
33-33: Use the existing `isBadPm10()` method instead of duplicating threshold logic.

The `AirQuality` class already provides `isBadPm10()`, which encapsulates this comparison. Using the existing method improves maintainability and ensures consistency if the threshold changes.

♻️ Suggested change

```diff
- val isGood = airQuality.pm10 < AirQuality.PM10_BAD_THRESHOLD
+ val isGood = !airQuality.isBadPm10()
```

pida-core/core-domain/src/main/kotlin/com/pida/notification/weekend/WeekendNotificationEligibilityChecker.kt (2)
48-50: Add logging for consistency when no users have nearby blooming spots.

Step 1 logs when no active users are found (line 35), but Step 2 silently returns an empty list. Add a log statement for debugging and operational consistency.

♻️ Suggested change

```diff
 if (usersWithBloomingSpots.isEmpty()) {
+    logger.info("No users with nearby blooming spots found")
     return emptyList()
 }
```
52-64: Consider the performance impact of per-user air quality API calls.

Each user triggers a separate external API call to check air quality. For a large user base, this could:
- Cause rate limiting from the external API
- Significantly slow down the eligibility check process
- Risk timeouts during batch notification processing
Consider batching users by location clusters or caching air quality data by geographic region (e.g., station-level caching since nearby users likely share the same measurement station).
Also, lines 61-62 have redundant log messages; both log the same `eligibleUsers.size`.

♻️ Consolidate redundant log statements

```diff
- logger.info("${eligibleUsers.size} users passed air quality check")
- logger.info("Final eligible users: ${eligibleUsers.size}")
+ logger.info("Final eligible users: ${eligibleUsers.size} (passed air quality check)")
```

pida-core/core-domain/src/main/kotlin/com/pida/notification/weekend/WeekendNotificationLocationChecker.kt (1)
32-65: Avoid `runBlocking` in production code called repeatedly in a loop.

`runBlocking` blocks the calling thread and is generally discouraged, especially when called in a tight loop (from `WeekendNotificationEligibilityChecker.findEligibleUsers()`, which filters each user). This can cause thread pool exhaustion during batch processing with many users. Consider making this method `suspend` and propagating coroutines up to the caller, or restructure to batch the location checks.

♻️ Suggested change to make the method suspend

```diff
-    fun hasNearbyBloomingSpots(
+    suspend fun hasNearbyBloomingSpots(
         latitude: Double,
         longitude: Double,
-    ): Boolean =
-        runBlocking {
+    ): Boolean {
         // 1. Look up FlowerSpots within a 3 km radius
         val nearbySpots = flowerSpotRepository.findWithinRadius(
             latitude = latitude,
             longitude = longitude,
             radiusMeters = RADIUS_METERS,
         )

         if (nearbySpots.isEmpty()) {
             logger.debug("No flower spots found within ${RADIUS_METERS}m of ($latitude, $longitude)")
-            return@runBlocking false
+            return false
         }

         logger.debug("Found ${nearbySpots.size} flower spots within ${RADIUS_METERS}m")

         // 2. Check whether any of those spots are in bloom
         val spotIds = nearbySpots.map { it.id }
         val bloomedSpotIds = bloomingRepository.findBloomedSpotIdsByFlowerSpotIds(spotIds)
         val hasBloomingSpots = bloomedSpotIds.isNotEmpty()

         if (hasBloomingSpots) {
             logger.debug("Found ${bloomedSpotIds.size} blooming spots near ($latitude, $longitude)")
         } else {
             logger.debug("No blooming spots found near ($latitude, $longitude)")
         }

-        hasBloomingSpots
-    }
+        return hasBloomingSpots
+    }
```

This would require updating `WeekendNotificationEligibilityChecker.findEligibleUsers()` to use coroutines as well.

pida-clients/airquality-client/src/main/kotlin/com/pida/client/airquality/AirQualityServiceImpl.kt (1)
65-69: Parsing may fail silently on unexpected date formats.

If the Air Korea API returns an unexpected date format, `DateTimeParseException` will be caught by the outer handler and converted to a generic `AIR_QUALITY_API_CALL_FAILED`. This is acceptable but could obscure the root cause during debugging.

pida-core/core-domain/src/main/kotlin/com/pida/notification/weekend/WeekendNotificationService.kt (1)
34-61: Inconsistent implementation compared to `WeekdayNotificationService`.

This service differs from `WeekdayNotificationService` in several ways:
- Missing start logging ("Starting weekend notification process")
- Missing user count logging after finding eligible users
- No callback for `fcmSender.sendAllAsync` to track success/failure counts

Consider aligning the implementations for consistency and better observability.

♻️ Suggested alignment with WeekdayNotificationService

```diff
 @Async
 fun sendWeekendNotifications() {
     try {
+        logger.info("Starting weekend notification process")
+
         // Step 1: find eligible users
         val eligibleUsers = weekendNotificationEligibilityChecker.findEligibleUsers()
         if (eligibleUsers.isEmpty()) {
+            logger.info("No eligible users found for weekend notification")
             return
         }

+        logger.info("Found ${eligibleUsers.size} eligible users for weekend notification")
+
         // Step 2: build FCM messages
         val messages = buildNotificationMessages(eligibleUsers)
         if (messages.isEmpty()) {
+            logger.warn("No FCM tokens found for eligible users")
             return
         }

+        logger.info("Sending ${messages.size} FCM messages")
+
         // Step 3: send via FCM
-        fcmSender.sendAllAsync(messages)
+        fcmSender.sendAllAsync(messages) { results ->
+            val successCount = results.count { it.sent }
+            val failCount = results.count { !it.sent }
+            logger.info("Weekend notification FCM send completed: $successCount succeeded, $failCount failed")
+        }

         // Step 4: store notification records
         storeNotificationRecords(eligibleUsers)
```

pida-core/core-domain/src/main/kotlin/com/pida/notification/weekday/WeekdayNotificationService.kt (1)
9-9: Cross-package dependency: `EligibleUser` imported from the `weekend` package.

The weekday service depends on `com.pida.notification.weekend.EligibleUser`. This tight coupling suggests `EligibleUser` should be extracted to a shared/common package that both weekday and weekend modules can use. Consider moving `EligibleUser` to a shared location like `com.pida.notification.common` or `com.pida.notification`.

pida-core/core-domain/src/main/kotlin/com/pida/notification/weekday/WeekdayNotificationEligibilityChecker.kt (1)
19-24: Weekday checker depends on `Weekend*` components; naming confusion and tight coupling.

The weekday eligibility checker injects `WeekendNotificationUserReader`, `WeekendNotificationLocationChecker`, and `WeekendNotificationAirQualityChecker`. This creates confusing naming (weekend components used for weekday logic) and tight coupling between packages. Consider extracting shared logic into common components (e.g., `NotificationUserReader`, `NotificationLocationChecker`, `NotificationAirQualityChecker`) that both weekday and weekend checkers can use.

pida-clients/airquality-client/src/main/kotlin/com/pida/client/airquality/AirKoreaClient.kt (2)
16-17: Consider removing the empty default for `serviceKey` or adding startup validation.

An empty default (`:}`) means the application will start successfully even when the API key is not configured, but API calls will fail with potentially confusing errors from the external service. Consider either:
- Removing the default to fail fast at startup if the property is missing.
- Adding a `@PostConstruct` validation to check for a non-blank key.

Option 1: Remove default to fail fast

```diff
- @param:Value("\${airkorea.api.service-key:}")
+ @param:Value("\${airkorea.api.service-key}")
  private val serviceKey: String,
```
82-86: Consider caching `stations.first()` to avoid a redundant call.

`stations.first()` is evaluated twice. While minor, storing it in a local variable improves clarity and avoids the duplicate call.

♻️ Proposed refactor

```diff
 // Return the nearest station (first item)
- val nearestStation = stations.first().stationName
- logger.info("Found nearest station: $nearestStation (distance: ${stations.first().tm}m)")
-
- nearestStation
+ val nearest = stations.first()
+ logger.info("Found nearest station: ${nearest.stationName} (distance: ${nearest.tm}m)")
+ nearest.stationName
```
```kotlin
return AirQuality(
    pm10 = item.pm10Value?.toIntOrNull() ?: 0,
    pm25 = item.pm25Value?.toIntOrNull() ?: 0,
    measurementTime = parseDataTime(item.dataTime),
    stationName = item.stationName,
)
```
Defaulting PM values to 0 may cause incorrect eligibility filtering.
When pm10Value or pm25Value is null or unparseable, defaulting to 0 makes the air quality appear excellent. Downstream filters (like WeekendNotificationAirQualityChecker checking PM10 < 81) would pass users through when data is actually unavailable.
Consider using nullable Int? or throwing an error when air quality data is missing.
🐛 Suggested approach using nullable values
```diff
 return AirQuality(
-    pm10 = item.pm10Value?.toIntOrNull() ?: 0,
-    pm25 = item.pm25Value?.toIntOrNull() ?: 0,
+    pm10 = item.pm10Value?.toIntOrNull(),
+    pm25 = item.pm25Value?.toIntOrNull(),
     measurementTime = parseDataTime(item.dataTime),
     stationName = item.stationName,
 )
```

This requires updating the `AirQuality` domain model to use nullable types, but provides an accurate representation of missing data.
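The null-aware end-to-end behavior can be sketched as follows; this `AirQuality` is a simplified stand-in for the real domain model (the 81 threshold comes from the PM10 < 81 check cited above), and the "treat null as ineligible" policy is one of the two options the comment offers:

```kotlin
// Simplified stand-in for the domain model with nullable PM fields.
data class AirQuality(val pm10: Int?, val pm25: Int?) {
    companion object {
        const val PM10_BAD_THRESHOLD = 81
    }
}

// Unparseable or absent readings become null instead of a misleading 0.
fun parsePm(raw: String?): Int? = raw?.toIntOrNull()

// Missing data never passes the filter: null means "data unavailable".
fun hasGoodAirQuality(aq: AirQuality): Boolean =
    aq.pm10 != null && aq.pm10 < AirQuality.PM10_BAD_THRESHOLD
```

With the `?: 0` defaults, a station outage would have made every user "eligible"; with nulls, the same outage safely suppresses the notification.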
```kotlin
    "/api/v1/flower-spot",
    "/api/v1/auth",
    "/api/v1/blooming/{spotId}/details",
    "/test/**",
```
🧩 Analysis chain
🏁 Scripts executed (against Team-PIDA/Pida-Server):

```shell
# First, find and read the SecurityConfig.kt file
fd -n "SecurityConfig.kt" --type f
# Check the context around line 139 in SecurityConfig.kt
cat -n pida-core/core-api/src/main/kotlin/com/pida/config/SecurityConfig.kt | sed -n '130,150p'
# Search for test endpoints and their configuration
rg -n "test|Test" pida-core/core-api/src/main/kotlin/com/pida/config/SecurityConfig.kt | head -20
# Look for profile-specific security configurations
fd -n "SecurityConfig\|*Security*Config" --type f | xargs rg -l "@Profile" 2>/dev/null
# Search for test controller endpoints
rg -n "NotificationTestController\|@RequestMapping.*test\|@GetMapping.*test\|@PostMapping.*test" --type=kotlin -A3 | head -30
# Get the full SecurityConfig.kt file to understand the complete context
wc -l pida-core/core-api/src/main/kotlin/com/pida/config/SecurityConfig.kt
# Read the entire SecurityConfig.kt file
cat -n pida-core/core-api/src/main/kotlin/com/pida/config/SecurityConfig.kt
# Search for test controllers in the codebase
find . -type f -name "*TestController*" -o -name "*Test.kt" | grep -i controller
# Look for any @Profile annotations in SecurityConfig
rg "@Profile|@ConditionalOnProperty" pida-core/core-api/src/main/kotlin/com/pida/config/SecurityConfig.kt
# Search for test path mappings in kotlin files
rg "/test" --type kotlin | grep -i "mapping\|route\|path" | head -20
# Read the NotificationTestController
cat -n pida-core/core-api/src/main/kotlin/com/pida/presentation/v1/notification/NotificationTestController.kt
# Check for application.yml or application-*.yml files to see deployment profiles
find . -name "application*.yml" -o -name "application*.yaml" -o -name "application*.properties" | head -20
# Look for any @Profile or environment-specific config
rg "@Profile" --type kotlin | head -20
# Check pom.xml or build.gradle for active profiles
rg "spring.profiles.active|activeProfiles" pida-core/core-api/
# Check for Docker/deployment configurations
find . -name "Dockerfile*" -o -name "docker-compose*" -o -name "*.yml" | grep -E "(docker|kube|deploy)" | head -10
# Check root level docker/deployment files
ls -la | grep -i docker
# Check if there's a build.gradle or pom.xml that sets profiles
fd "pom.xml|build.gradle" --type f | head -5
# Look for any Spring profile configuration in gradle/maven files
rg "profiles|spring.profiles" pida-core/ --type xml --type gradle | head -20
# Check production Dockerfile
cat -n ./docker/DockerfileProd
# Check production deployment CI/CD config
cat -n ./.github/workflows/production_build_deploy.yml
# Check develop deployment for comparison
cat -n ./.github/workflows/develop_build_deploy.yml | head -100
# Check for profile-specific application configs (application-prod.yml, etc.)
find . -path "*/resources/*" -name "application*.yml" -o -name "application*.yaml" -o -name "application*.properties"
# Check if there's any conditional logic that excludes test endpoints in prod profile
rg "application-prod|application-production" --type kotlin --type yaml
# Check if test controller itself has any profile restrictions
rg -A5 -B5 "NotificationTestController" pida-core/core-api/src/main/kotlin/com/pida/presentation/v1/notification/NotificationTestController.kt
```
Restrict test endpoints to non-production profiles via code-level configuration.
The /test/** path is exposed in permitAll() without profile restrictions. While production deployment sets PROFILE=prod via Docker build arguments, this relies on deployment configuration rather than code-level safety. The test endpoints trigger actual notification services without authentication.
Add @Profile annotation to the NotificationTestController to ensure test endpoints cannot be accidentally exposed if deployment profiles are misconfigured:
@Profile("dev", "local")
@Tag(name = "🧪 Test", description = "테스트용 API")
@RestController
@RequestMapping("/test")
class NotificationTestController(...)

Alternatively, remove /test/** from SecurityConfig.kt and create a separate @Profile("dev", "local") security configuration class that permits only these endpoints.
🤖 Prompt for AI Agents
In `@pida-core/core-api/src/main/kotlin/com/pida/config/SecurityConfig.kt` at line
139, The /test/** endpoint is currently whitelisted in SecurityConfig via
permitAll() which can expose test notification endpoints in production; restrict
these endpoints by adding `@Profile`({"dev","local"}) to the
NotificationTestController class (referencing NotificationTestController, `@Tag`,
`@RestController`, `@RequestMapping`("/test")) so the controller is only active in
non-prod profiles, or alternatively remove "/test/**" from the permitAll() list
in SecurityConfig and create a separate security config class annotated with
`@Profile`({"dev","local"}) that explicitly permits only the test endpoints;
implement one of these two fixes to ensure test endpoints are not reachable in
production.
@Tag(name = "🧪 Test", description = "테스트용 API")
@RestController
@RequestMapping("/test")
class NotificationTestController(
    private val weekendNotificationService: WeekendNotificationService,
    private val weekdayNotificationService: WeekdayNotificationService,
) {
Ensure test endpoints are disabled or protected in production.
This test controller allows unauthenticated triggering of notification batch jobs (per the security config allowing /test/**). In production, this could be abused to spam users with notifications or cause resource exhaustion.
Consider one of:
- Restrict this controller to specific profiles (e.g., @Profile("!prod"))
- Require admin authentication for these endpoints
- Add rate limiting
🔒 Suggested profile restriction

+import org.springframework.context.annotation.Profile
+
 @Tag(name = "🧪 Test", description = "테스트용 API")
 @RestController
 @RequestMapping("/test")
+@Profile("!prod")
 class NotificationTestController(

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

@Tag(name = "🧪 Test", description = "테스트용 API")
@RestController
@RequestMapping("/test")
class NotificationTestController(
    private val weekendNotificationService: WeekendNotificationService,
    private val weekdayNotificationService: WeekdayNotificationService,
) {

After:

import org.springframework.context.annotation.Profile

@Tag(name = "🧪 Test", description = "테스트용 API")
@RestController
@RequestMapping("/test")
@Profile("!prod")
class NotificationTestController(
    private val weekendNotificationService: WeekendNotificationService,
    private val weekdayNotificationService: WeekdayNotificationService,
) {
🤖 Prompt for AI Agents
In
`@pida-core/core-api/src/main/kotlin/com/pida/presentation/v1/notification/NotificationTestController.kt`
around lines 12 - 18, The NotificationTestController currently exposes
unauthenticated test endpoints; restrict them for non-production by adding a
profile restriction or enforcing admin auth. Locate the
NotificationTestController class and either annotate it with a profile exclusion
like `@Profile`("!prod") to disable the whole controller in production or add an
authentication/authorization requirement (e.g., require ADMIN role) to its
endpoints; alternatively remove/disable the "/test/**" allowance from your
security config (or add rate limiting) so weekendNotificationService and
weekdayNotificationService cannot be invoked unauthenticated in prod.
pida-core/core-api/src/main/kotlin/com/pida/scheduler/WeekendNotificationScheduler.kt
airkorea:
  api:
    service-key: ${AIR_KOREA_API_SERVICE_KEY:}
🧩 Analysis chain
🏁 Script executed:
rg -n "base-url|baseUrl" --type=kotlin -C3

Repository: Team-PIDA/Pida-Server
Length of output: 1709

🏁 Script executed:
fd "AirKorea" --type=f

Length of output: 436
Add missing base-url configuration with correct default value.
The AirKoreaApi uses airkorea.api.base-url property via @FeignClient. The configuration file should include this property with the correct default endpoint.
Required addition:

 airkorea:
   api:
+    base-url: ${AIR_KOREA_API_BASE_URL:http://apis.data.go.kr/B090041/openapi/service/RltmMsrstnInfoInqireSvc}
     service-key: ${AIR_KOREA_API_SERVICE_KEY:}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

airkorea:
  api:
    service-key: ${AIR_KOREA_API_SERVICE_KEY:}

After:

airkorea:
  api:
    base-url: ${AIR_KOREA_API_BASE_URL:http://apis.data.go.kr/B090041/openapi/service/RltmMsrstnInfoInqireSvc}
    service-key: ${AIR_KOREA_API_SERVICE_KEY:}
🤖 Prompt for AI Agents
In `@pida-core/core-api/src/main/resources/airkorea.yml` around lines 1 - 3, Add
the missing airkorea.api.base-url property with the correct default endpoint
used by AirKoreaApi (the Feign client). Update the YAML to include
airkorea.api.base-url: ${AIR_KOREA_API_BASE_URL:http://apis.data.go.kr} so the
Feign `@FeignClient` can read the default base URL while still allowing override
via AIR_KOREA_API_BASE_URL; ensure the key name matches exactly
"airkorea.api.base-url" referenced by AirKoreaApi.
private fun storeNotificationRecords(users: List<EligibleUser>) {
    val commands =
        users.map { user ->
            CreateNotificationStoredCommand(
                notificationStoredKey = "", // 서비스에서 재생성됨
                userId = user.userId,
                type = NotificationType.WEEKEND_HEALING,
                parameterValue = "",
                topic = "weekend_healing",
                contents = weekendNotificationMessageBuilder.getMessageContent(),
                readStatus = ReadStatus.UNREAD,
            )
        }

    notificationService.appendAll(commands)
}
Notification records stored before confirming FCM delivery.
Records are persisted immediately after calling sendAllAsync, but the actual send happens asynchronously. If FCM delivery fails, users will have notification records for messages they never received.
Consider storing records in the FCM callback (for successfully sent messages only), or adding a status field to track delivery state.
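The PENDING-then-update flow suggested above can be sketched as follows; DeliveryStatus, NotificationRecord, and sendAllWithStatus are illustrative stand-ins for this review, not the project's actual types or its sendAllAsync API:

```kotlin
import java.util.concurrent.CompletableFuture

// Hypothetical delivery state tracked alongside each stored record.
enum class DeliveryStatus { PENDING, SENT, FAILED }

data class NotificationRecord(val userId: Long, var status: DeliveryStatus = DeliveryStatus.PENDING)

// Persist records as PENDING up front, then flip each one from the async send callback.
fun sendAllWithStatus(
    records: List<NotificationRecord>,
    send: (NotificationRecord) -> Boolean,
): CompletableFuture<Void> =
    CompletableFuture.runAsync {
        records.forEach { record ->
            record.status = if (send(record)) DeliveryStatus.SENT else DeliveryStatus.FAILED
        }
    }

fun main() {
    val records = listOf(NotificationRecord(1), NotificationRecord(2))
    // Pretend user 2's push fails; after join() the statuses reflect actual delivery.
    sendAllWithStatus(records) { it.userId != 2L }.join()
    println(records.map { it.status })
}
```

With this shape, a record never claims a message was delivered before the FCM result is known, and FAILED rows can later be retried or cleaned up.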
🤖 Prompt for AI Agents
In
`@pida-core/core-domain/src/main/kotlin/com/pida/notification/weekend/WeekendNotificationService.kt`
around lines 82 - 97, The current storeNotificationRecords function persists
notification entries immediately (notificationService.appendAll in
storeNotificationRecords) before FCM delivery completes, causing records for
messages that may fail to deliver; modify the flow so records are only
created/updated on successful FCM callbacks (or add and use a delivery status
field on CreateNotificationStoredCommand and update it from the FCM callback).
Concretely: remove or change the immediate notificationService.appendAll call in
storeNotificationRecords, instead create records after sendAllAsync succeeds for
each recipient or insert with a PENDING status and update to SENT/FAILED in the
FCM send callback handler (reference sendAllAsync, storeNotificationRecords,
CreateNotificationStoredCommand, and notificationService.appendAll/update
methods).
@Query(
    """
    SELECT *
    FROM t_flower_spot
    WHERE ST_DWithin(
        pin_point::geography,
        ST_SetSRID(ST_MakePoint(:lng, :lat), 4326)::geography,
        :radiusMeters
    )
    AND deleted_at IS NULL
    ORDER BY ST_Distance(
        pin_point::geography,
        ST_SetSRID(ST_MakePoint(:lng, :lat), 4326)::geography
    )
    """,
    nativeQuery = true,
)
fun findWithinRadius(
    @Param("lat") lat: Double,
    @Param("lng") lng: Double,
    @Param("radiusMeters") radiusMeters: Double,
): List<FlowerSpotEntity>
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for spatial indexes, migrations, or references to t_flower_spot/pin_point
rg -n -C3 't_flower_spot|pin_point|gist|spgist|geography' --type sql --type kotlin

Repository: Team-PIDA/Pida-Server
Length of output: 13905

🏁 Script executed:
#!/bin/bash
# Look for migration files and their location
find . -type f \( -name "*migration*" -o -name "*migrate*" -o -name "*.sql" \) | head -20

Length of output: 82

🏁 Script executed:
#!/bin/bash
# Search more broadly for index creation patterns
rg -n 'CREATE.*INDEX|@Index|@Column.*geography' --type sql --type kotlin

Length of output: 47

🏁 Script executed:
#!/bin/bash
# Read the docker init.sql file
cat ./docker/init.sql

Length of output: 104

🏁 Script executed:
#!/bin/bash
# Search for Flyway migration files (typical pattern: V*.sql)
find . -type f -name "V*.sql" -o -name "*.sql" | grep -E "(migration|flyway|db)" | head -30

Length of output: 47

🏁 Script executed:
#!/bin/bash
# Search for database migration directories
find . -type d \( -name "db" -o -name "migration*" -o -name "flyway*" \) 2>/dev/null

Length of output: 123

🏁 Script executed:
#!/bin/bash
# Explore the database directory structure
find ./pida-storage/db-core -type f | head -40

Length of output: 3652

🏁 Script executed:
#!/bin/bash
# Look for Liquibase or Flyway migration files
find . -type f \( -name "*.xml" -o -name "*.yaml" -o -name "*.yml" \) | grep -iE "(liquibase|flyway|migration|db)" | head -20

Length of output: 118

🏁 Script executed:
#!/bin/bash
# Search in resources directories for SQL or migration configs
find ./pida-storage -type d -name resources -exec find {} -type f \; | head -40

Length of output: 168

🏁 Script executed:
#!/bin/bash
# Read the db-core.yml configuration file
cat ./pida-storage/db-core/src/main/resources/db-core.yml

Length of output: 1962

🏁 Script executed:
#!/bin/bash
# Search for SQL files in resources, particularly migration/changelog files
find ./pida-storage/db-core/src/main/resources -type f

Length of output: 118

🏁 Script executed:
#!/bin/bash
# Read the FlowerSpotEntity to see how pin_point column is defined
cat -n ./pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/flowerspot/FlowerSpotEntity.kt

Length of output: 1953
Add a spatial index on pin_point or create a persisted geography column.
The pin_point column is defined as geometry(Point, 4326) (line 28-29 in FlowerSpotEntity.kt) but the query casts it to geography on every execution (pin_point::geography at lines 60, 66). This cast prevents index usage, forcing full table scans. Create either a GiST index on the geometry column or a persisted geography column with its own index to support the ST_DWithin and ST_Distance operations efficiently.
🤖 Prompt for AI Agents
In
`@pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/flowerspot/FlowerSpotJpaRepository.kt`
around lines 55 - 76, The query in findWithinRadius uses pin_point::geography
which prevents index use; fix by adding a spatial index or persisted geography
column: either create a GiST index on the geometry column (pin_point) so
ST_DWithin/ST_Distance can use it, or add a persisted geography column (e.g.
pin_point_geog) populated from pin_point and create a GiST index on that
geography column, then update findWithinRadius to reference the indexed column
(pin_point or pin_point_geog) instead of casting at runtime; update
FlowerSpotEntity to include the persisted geography property if chosen and
ensure migrations create the column and index.
fun findByUserId(userId: Long): UserLocationEntity?

fun findByUserIdIn(userIds: List<Long>): List<UserLocationEntity>
}
🧩 Analysis chain
🏁 Script executed:
cd pida-storage/db-core && find . -name "UserLocationJpaRepository.kt" -type f

Repository: Team-PIDA/Pida-Server
Length of output: 150

🌐 Web query:
Spring Data JPA empty collection IN clause behavior List

💡 Result:
In JPQL/HQL, binding a collection-valued parameter to an IN/NOT IN predicate is not allowed to be empty (an empty list would effectively become ... IN (), which is invalid SQL). Hibernate's docs explicitly state that a collection_valued_input_parameter "must not be empty; it must contain at least one value." [1]

What you'll see in Spring Data JPA:
- For repository methods like findByXIn(List<?> xs) or @Query("... x in :xs"), passing an empty List commonly fails at runtime with an SQL/JPQL error rather than returning "no rows". A typical solution is to short-circuit in Java and return Collections.emptyList() when the input list is empty. [2]
- For NOT IN, the situation is similarly problematic; Spring Data JPA does not "fix" empty-list NOT IN for you (see the Spring Data JPA issue closed as "external-project / not planned"). [3]

Common fixes:
- Guard in a service/repository default method: if (ids == null || ids.isEmpty()) return List.of(); return repo.findByIdIn(ids);
- Build the predicate dynamically (Criteria API / Specification): only add the IN predicate when the list is non-empty.
- JPQL with explicit empty handling (SpEL workaround; use carefully, behavior varies by provider), e.g. patterns that replace empty with null and add an OR branch (examples discussed in the JPA/Spring community). [2]

Sources: Hibernate docs on non-empty collection parameters [1]; Spring Data JPA empty-collection IN failures + recommended guard/workarounds [2]; Spring Data JPA issue showing empty NOT IN is not handled automatically [3].

Citations:
- 1: https://docs.hibernate.org/orm/5.0/userguide/en-US/html/ch13.html?utm_source=openai
- 2: https://stackoverflow.com/questions/48071990/spring-data-jpa-how-to-return-empty-result-when-repository-method-parameter-of
- 3: @Query with "not in" not work with empty List parameter spring-projects/spring-data-jpa#3078

🏁 Script executed:
cat -n pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/user/location/UserLocationJpaRepository.kt

Length of output: 588
Guard against empty userIds list in findByUserIdIn.
Spring Data JPA does not gracefully handle empty collections in IN clauses. Passing an empty list will generate invalid SQL (IN ()) and raise a runtime JPQL/SQL error instead of returning an empty result set. Short-circuit this method by checking if the list is empty upstream and returning emptyList() directly, or apply Criteria API for dynamic predicate building to avoid the issue entirely.
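The default-method guard can be sketched as follows; the interface shape below is illustrative (the real UserLocationJpaRepository returns List<UserLocationEntity>, simplified here to strings), and the fake backend only mimics how the JPA provider fails on an empty collection:

```kotlin
// Hypothetical stand-in for the repository interface.
interface UserLocationLookup {
    fun findByUserIdIn(userIds: List<Long>): List<String>

    // Default method short-circuits before JPA would render an invalid `IN ()`.
    fun findByUserIdInSafe(userIds: List<Long>): List<String> =
        if (userIds.isEmpty()) emptyList() else findByUserIdIn(userIds)
}

// Fake backend that mimics the provider blowing up on an empty collection.
object FakeLookup : UserLocationLookup {
    override fun findByUserIdIn(userIds: List<Long>): List<String> {
        require(userIds.isNotEmpty()) { "IN () is invalid SQL" }
        return userIds.map { "loc-$it" }
    }
}

fun main() {
    println(FakeLookup.findByUserIdInSafe(emptyList()))    // no query issued
    println(FakeLookup.findByUserIdInSafe(listOf(1L, 2L)))
}
```

Callers then use only the safe variant, so the empty-list case never reaches the query builder.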
🤖 Prompt for AI Agents
In
`@pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/user/location/UserLocationJpaRepository.kt`
around lines 9 - 12, The generated IN(...) SQL can break when userIds is empty;
update the repository to guard against empty collections by adding a
short-circuit check around findByUserIdIn: implement a default method on
UserLocationJpaRepository (or change callers) that returns emptyList()
immediately when userIds.isEmpty(), otherwise delegates to the existing
findByUserIdIn(List<Long>) query; reference the repository interface name
UserLocationJpaRepository and the method findByUserIdIn to locate where to add
this guard.
LeeHanEum
left a comment
Very cleanly written 👍
It would be good to trim the comments where you can!
Core domain keywords of our service, like latitude and longitude, are clear from the parameter names alone, so those parts don't need comments.
Personally, I also think leaving only the essential logs in the external API client is enough.
This was quite a large unit of work — great job!
data class NotificationTriggerResponse(
    val message: String,
    val triggeredAt: LocalDateTime,
)
Was it intentional to put this data class in the controller?
Yes! That TestController is primarily for testing, so I plan to delete the class itself later.
@@ -0,0 +1,3 @@
airkorea:
  api:
    service-key: ${AIR_KOREA_API_SERVICE_KEY:}
…/airquality/AirKoreaClient.kt Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
…otificationScheduler.kt Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In
`@pida-clients/airquality-client/src/main/kotlin/com/pida/client/airquality/AirKoreaClient.kt`:
- Around line 84-87: The code currently assumes the first element from
getNearbyMsrstnList is nearest (using stations.first()), which is unsafe; change
the selection to compute the station with minimum distance (e.g., use
stations.minByOrNull { it.tm } / minBy { it.tm }) and assign that to
nearestStation (and handle null if stations is empty), then log using that
selected station and its tm value instead of stations.first().
🧹 Nitpick comments (2)

pida-core/core-domain/src/main/kotlin/com/pida/flowerspot/FlowerSpotRepository.kt (1)

19-31: Document parameter constraints + ordering tie-breaker

Consider tightening the contract (valid coordinate ranges, radius must be positive) and define tie-breaker behavior for equal distances to keep implementations consistent and avoid flaky ordering in consumers/tests.

Proposed KDoc refinement:

 /**
  * 주어진 좌표 기준 반경 내의 FlowerSpot 목록 조회
  *
- * @param latitude 위도
- * @param longitude 경도
- * @param radiusMeters 반경 (미터 단위)
- * @return FlowerSpot 목록 (거리 순으로 정렬)
+ * @param latitude 위도 (-90.0..90.0)
+ * @param longitude 경도 (-180.0..180.0)
+ * @param radiusMeters 반경 (미터 단위, 양수)
+ * @return FlowerSpot 목록 (거리 순으로 정렬, 동일 거리일 경우 고정된 tie-breaker 기준 사용)
+ * @throws IllegalArgumentException 유효하지 않은 좌표 또는 반경이 전달된 경우
  */

pida-clients/airquality-client/src/main/kotlin/com/pida/client/airquality/AirKoreaClient.kt (1)
15-20: Fail fast if airkorea.api.service-key is blank.

With the default empty value, calls will repeatedly fail at runtime; a startup-time guard makes misconfiguration explicit.

♻️ Proposed fix:

 class AirKoreaClient internal constructor(
     @param:Value("\${airkorea.api.service-key:}") private val serviceKey: String,
     private val airKoreaApi: AirKoreaApi,
 ) {
     private val logger by logger()
+
+    init {
+        require(serviceKey.isNotBlank()) { "airkorea.api.service-key must be configured" }
+    }
// 가장 가까운 측정소 반환 (첫 번째 항목)
val nearestStation = stations.first().stationName
logger.info("Found nearest station: $nearestStation (distance: ${stations.first().tm}m)")
🧩 Analysis chain
🌐 Web query:
AirKorea getNearbyMsrstnList API response ordering distance
💡 Result:
For B552584 / MsrstnInfoInqireSvc / getNearbyMsrstnList, the response is intended to be a “nearby station list” based on the provided TM coordinates (근접측정소 목록 조회). (data.go.kr)
However, the Open Data Portal page doesn’t explicitly guarantee the returned items order (e.g., “sorted by distance ascending”)—so you should not rely on the array order as a contract. (data.go.kr)
In practice, many implementations sort the returned items themselves using the distance-like field (commonly treated as tm) and then pick the first element as the nearest station. (biolkj28.tistory.com)
So: assume “not guaranteed,” and explicitly sort by the distance field (tm) ascending if you need the nearest station deterministically. (biolkj28.tistory.com)
Citations:
- 1: https://www.data.go.kr/data/15073877/openapi.do
- 2: https://www.data.go.kr/data/15073877/openapi.do
- 3: https://biolkj28.tistory.com/10?utm_source=openai
- 4: https://biolkj28.tistory.com/10?utm_source=openai
🏁 Script executed:
# Find and examine the AirKoreaClient file to understand the context
find . -name "AirKoreaClient.kt" -type f

Repository: Team-PIDA/Pida-Server
Length of output: 158

🏁 Script executed:
# Search for AirKorea API integration and data models
fd -e kt -e java | xargs rg -l "getNearbyMsrstnList\|AirKorea" | head -20

Length of output: 47

🏁 Script executed:
# Look for any API response models or documentation comments
rg "tm|distance" -t kt -t java -i | head -30

Length of output: 91

🏁 Script executed:
# Read the AirKoreaClient.kt file to see the full context
cat -n "./pida-clients/airquality-client/src/main/kotlin/com/pida/client/airquality/AirKoreaClient.kt" | head -100

Length of output: 3847

🏁 Script executed:
# Search for data models or Station class definitions
find . -name "*.kt" -o -name "*.java" | xargs grep -l "stationName\|class.*Station" | head -10

Length of output: 630

🏁 Script executed:
# Look for tm field definition/usage in data classes
grep -r "tm" --include="*.kt" pida-clients/airquality-client/ | head -20

Length of output: 2890

🏁 Script executed:
# Read the AirKoreaResponse.kt file to see the data class definition
cat -n "./pida-clients/airquality-client/src/main/kotlin/com/pida/client/airquality/AirKoreaResponse.kt"

Length of output: 2615
Use explicit distance-based selection instead of assuming API ordering.
The AirKorea API documentation does not guarantee that getNearbyMsrstnList returns stations ordered by distance. Assuming the first item is the nearest station (lines 84-87) is unsafe. Explicitly select by minimum distance:
🔍 Defensive distance-based selection
- // 가장 가까운 측정소 반환 (첫 번째 항목)
- val nearestStation = stations.first().stationName
- logger.info("Found nearest station: $nearestStation (distance: ${stations.first().tm}m)")
-
- nearestStation
+ // 가장 가까운 측정소 반환 (거리 최소값 기준)
+ val nearest = stations.minByOrNull { it.tm }
+     ?: throw ErrorException(ErrorType.AIR_QUALITY_STATION_NOT_FOUND)
+ val nearestStation = nearest.stationName
+ logger.info("Found nearest station: $nearestStation (distance: ${nearest.tm}m)")
+
+ nearestStation

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

// 가장 가까운 측정소 반환 (첫 번째 항목)
val nearestStation = stations.first().stationName
logger.info("Found nearest station: $nearestStation (distance: ${stations.first().tm}m)")

After:

// 가장 가까운 측정소 반환 (거리 최소값 기준)
val nearest = stations.minByOrNull { it.tm }
    ?: throw ErrorException(ErrorType.AIR_QUALITY_STATION_NOT_FOUND)
val nearestStation = nearest.stationName
logger.info("Found nearest station: $nearestStation (distance: ${nearest.tm}m)")

nearestStation
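The minimum-distance selection above can be demonstrated in isolation; the Station class here only mirrors the shape of the response item for illustration, and the Double type for tm is an assumption (the real API field may be a string that needs parsing):

```kotlin
// Illustrative stand-in for the API response item.
data class Station(val stationName: String, val tm: Double)

// Select by minimum distance instead of trusting the API's ordering.
fun nearestStationName(stations: List<Station>): String? =
    stations.minByOrNull { it.tm }?.stationName // null when the list is empty

fun main() {
    val stations = listOf(Station("종로구", 1.2), Station("중구", 0.4), Station("용산구", 2.1))
    println(nearestStationName(stations))    // nearest by tm, regardless of list order
    println(nearestStationName(emptyList())) // null — caller decides how to fail
}
```

Returning null for an empty list keeps the error-handling decision (e.g., throwing ErrorException) at the call site, matching the suggested diff.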
🤖 Prompt for AI Agents
In
`@pida-clients/airquality-client/src/main/kotlin/com/pida/client/airquality/AirKoreaClient.kt`
around lines 84 - 87, The code currently assumes the first element from
getNearbyMsrstnList is nearest (using stations.first()), which is unsafe; change
the selection to compute the station with minimum distance (e.g., use
stations.minByOrNull { it.tm } / minBy { it.tm }) and assign that to
nearestStation (and handle null if stations is empty), then log using that
selected station and its tm value instead of stations.first().
🌱 Related issue
📌 Work details and notes
📝 References
📌 Checklist
Summary by CodeRabbit
New Features
Improvements