
[Feature] Integrate with Kakao Map API to implement search landmark #85

Merged
LeeHanEum merged 13 commits into develop from feature/searchby-landmark
Jan 27, 2026

Conversation

@LeeHanEum LeeHanEum (Collaborator) commented Jan 26, 2026

🌱 Related issue

  • close #

📌 Work done and notes

  • Integrated the Kakao Map API
  • Persist search history audit data
  • Implemented landmark and cherry-blossom-road search
  • Cache landmark data
  • Log external API calls in the client
  • Added @LeeHanEum as a code owner
  • Moved the GeoJson and Region packages
  • Improved landmark search performance using tsvector (responses around 100 ms on a cache miss)

📝 Notes

  • Remaining work
  • Pagination and a fallback API still need to be added
  • Responses should be sorted in the order: administrative district, subway station, landmark.

📌 Checklist

  • Did you add reviewers?
  • Does the description sufficiently explain the changes?

Summary by CodeRabbit

  • New Features

    • Kakao Map integration to power landmark keyword search.
    • Public search API to find landmarks and flower spots by query.
    • Automatic search history recording for authenticated and anonymous users.
    • Enhanced landmark management and search (caching, full-text name search, region mapping).
  • Chores

    • Updated repository ownership metadata.


@LeeHanEum LeeHanEum self-assigned this Jan 26, 2026
@LeeHanEum LeeHanEum requested a review from char-yb as a code owner January 26, 2026 04:50
@coderabbitai coderabbitai bot (Contributor) commented Jan 26, 2026

📝 Walkthrough

Adds Kakao map integration and client, introduces landmark domain (search, caching, persistence, events), moves GeoJson/Region to a shared support package, adds search history persistence and async recording, and exposes a /flower-spot/search API. Also updates CODEOWNERS.

Changes

Cohort / File(s) / Summary

  • Code Ownership (.github/CODEOWNERS): Added @LeeHanEum as an additional code owner for all files.
  • Kakao map client & config (pida-clients/map-client/src/main/kotlin/com/pida/client/map/..., pida-clients/map-client/src/main/resources/map.yml): Replaced the old KaKaoMapApi with a new KakaoMapApi (two endpoints); added KakaoMapClient (a LandmarkSearchClient implementation), response models (KakaoRegionResponse), and KakaoMapProperties with config binding enabled; added a map.yml API key entry.
  • Landmark domain & services (pida-core/core-domain/src/main/kotlin/com/pida/landmark/*): New landmark types and interfaces: NewLandmark, LandmarkRepository, LandmarkSearchClient, LandmarkFinder (caching), LandmarkService (event handler), LandmarkAppender, and LandmarkFetchEvent.
  • FlowerSpot facade & controller (pida-core/core-domain/.../flowerspot/*, pida-core/core-api/.../flowerspot/*): Added FlowerSpotFacade.search(...) with event publishing, a /flower-spot/search controller endpoint, and response DTOs FlowerSpotSearchResponse / PlaceSearchResponse.
  • Geo support, shared types (pida-core/core-domain/src/main/kotlin/com/pida/support/geo/*): Moved/added GeoJson, introduced a shared Region enum and a String.toRegion() mapping, and updated imports across modules to com.pida.support.geo.
  • Search history domain & persistence (pida-core/core-domain/src/main/kotlin/com/pida/history/*, pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/history/*): New domain models SearchHistory/NewSearchHistory, SearchHistoryAppender, SearchHistoryRepository, an async SearchHistoryService (event listener), the JPA entity SearchHistoryEntity, SearchHistoryJpaRepository, and the core repository implementation.
  • Storage, landmark & flowerspot (pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/landmark/*, .../flowerspot/*): Added/updated JPA repositories and core repositories for landmarks and flower spots (search by name/street, saveAll with geometry, update of the name TSV); entity package moves and a new nameTsv column on LandmarkEntity.
  • Config / Feign config & imports (multiple files): Added @EnableConfigurationProperties(KakaoMapProperties::class) in MapFeignConfig and many import adjustments to use com.pida.support.geo.

Sequence Diagram(s)

sequenceDiagram
  participant Client as Client (HTTP)
  participant Controller as FlowerSpotController
  participant Facade as FlowerSpotFacade
  participant Cache as LandmarkFinder
  participant External as KakaoMapApi
  participant EventPub as ApplicationEventPublisher
  participant LandmarkSvc as LandmarkService
  participant HistorySvc as SearchHistoryService
  participant DB as Database

  Client->>Controller: GET /flower-spot/search?query
  Controller->>Facade: search(query, user)
  Facade->>EventPub: publishSearchEvent(query, user)
  Facade->>Cache: findAll / searchByName
  alt cached enough
    Cache-->>Facade: landmarks
  else not enough
    Facade->>External: searchKeyword(query, Authorization)
    External-->>Facade: NewLandmark[]
    Facade->>EventPub: publishLandmarkFetchEvent(query, fetched)
  end
  Facade-->>Controller: FlowerSpotSearchResponse
  Note right of EventPub: async listeners (after commit)
  EventPub->>LandmarkSvc: LandmarkFetchEvent
  LandmarkSvc->>DB: persist new landmarks (via LandmarkAppender/CoreRepo)
  EventPub->>HistorySvc: FlowerSpotSearchEvent
  HistorySvc->>DB: persist SearchHistoryEntity

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes


Suggested reviewers

  • char-yb

"🐰 I hopped through code and found the trail,
Kakao maps and regions in my tail,
Events recorded, caches warmed anew,
Landmarks planted, history saved too,
Blooming searches — a hop, skip, and a sail!" 🌸

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 3.45%, below the required 80.00% threshold. Resolution: write docstrings for the functions that are missing them.

✅ Passed checks (2 passed)
  • Title check (✅ Passed): The title '[Feature] Integrate with Kakao Map API to implement search landmark' clearly and directly describes the main change in this changeset: integrating the Kakao Map API to enable landmark search.
  • Description check (✅ Passed): Skipped because CodeRabbit's high-level summary is enabled.



@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 10

🤖 Fix all issues with AI agents
In
`@pida-clients/map-client/src/main/kotlin/com/pida/client/map/KakaoMapClient.kt`:
- Line 27: Replace the info-level log that prints the raw query in
KakaoMapClient (the line using logger.info("Kakao Map search API requested:
query='$query', found=${response.documents.size}")) with a safer debug-level log
and avoid storing raw PII: compute either a truncated version (e.g., first N
chars + "...") or a non-reversible hash of query and log that value along with
response.documents.size using logger.debug; ensure the original info-level entry
is removed or downgraded so raw query strings are never written to info logs.
- Around line 16-36: The parsing of coordinates in searchByKeyword is unsafe:
document.x.toDouble() and document.y.toDouble() can throw NumberFormatException;
update the mapping so you first safely parse coordinates with toDoubleOrNull(),
filter out documents where x or y are null (or otherwise invalid), and only
create NewLandmark for entries with valid numeric x/y, e.g., parse document.x
and document.y with toDoubleOrNull() inside the response.documents.map (or use
mapNotNull) and skip or log malformed entries so NewLandmark(...) receives
non-null Double coordinates.
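The safe-parsing fix above can be sketched in plain Kotlin. `KakaoDocument` and `NewLandmark` below are illustrative stand-ins for the project's actual types, not the real DTOs:

```kotlin
// Hypothetical stand-ins for the real response/domain types.
data class KakaoDocument(val placeName: String, val x: String, val y: String)

data class NewLandmark(val name: String, val longitude: Double, val latitude: Double)

// mapNotNull skips documents whose coordinates fail to parse,
// so NewLandmark always receives valid, non-null Doubles.
fun toLandmarks(documents: List<KakaoDocument>): List<NewLandmark> =
    documents.mapNotNull { doc ->
        val x = doc.x.toDoubleOrNull() ?: return@mapNotNull null
        val y = doc.y.toDoubleOrNull() ?: return@mapNotNull null
        NewLandmark(doc.placeName, x, y)
    }
```

Unlike `toDouble()`, `toDoubleOrNull()` never throws on malformed input, so one bad document no longer fails the whole response mapping.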

In
`@pida-core/core-api/src/main/kotlin/com/pida/presentation/v1/flowerspot/response/FlowerSpotSearchResponse.kt`:
- Around line 17-21: The `@ArraySchema` on the flowerSpots property references the
wrong DTO type: change the Schema(implementation = FlowerSpotResponseDto::class)
used on the flowerSpots field to Schema(implementation =
PlaceSearchResponse::class) so the annotation matches the actual property type
List<PlaceSearchResponse> (mirror how the landmarks field is annotated); update
the Schema implementation class in the flowerSpots annotation and run the
OpenAPI generation to verify documentation now shows PlaceSearchResponse.

In
`@pida-core/core-domain/src/main/kotlin/com/pida/flowerspot/FlowerSpotFacade.kt`:
- Around line 84-89: The external API call path in FlowerSpotFacade (inside the
else branch that calls landmarkSearchClient.searchByKeyword and
publishLandmarkFetchEvent) lacks exception handling; wrap the call to
landmarkSearchClient.searchByKeyword in a try/catch, add a logger to
FlowerSpotFacade, and on exception log the error with context and return the
fallback list (the existing flowerSpots mapping) so failures in the Kakao API do
not propagate to callers; still publish the fetch event only on success and do
not rethrow the exception.
- Around line 69-70: The use of `@Transactional` on the suspend function search()
risks losing transaction context across coroutine suspension; remove the
annotation from search() and either (a) move any needed transactional work into
a non-suspending helper (e.g., create a blocking method or a separate
non-suspend function like saveXWithinTransaction() called from search) annotated
with `@Transactional`, or (b) wrap the transactional operations with a reactive/
coroutine-friendly TransactionalOperator.executeAndAwait() call; specifically
ensure publishSearchEvent() and external call
landmarkSearchClient.searchByKeyword() remain outside the transaction if they
don’t need atomicity, and refactor code paths in FlowerSpotFacade.search() to
call the new non-suspend transaction boundary or use TransactionalOperator for
DB writes.
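The fallback behavior requested for the external call can be sketched without Spring; `searchWithFallback` and the lambda-based client below are illustrative stand-ins, not the facade's real API:

```kotlin
// Hypothetical exception type standing in for a Feign/client failure.
class ExternalSearchException(message: String) : RuntimeException(message)

// Wraps the external call: on failure, log (println here as a stand-in for a
// real logger) and return the fallback list instead of rethrowing to callers.
fun searchWithFallback(
    query: String,
    fallback: List<String>,
    externalSearch: (String) -> List<String>,
): List<String> =
    try {
        externalSearch(query)
    } catch (e: Exception) {
        println("external search failed: ${e.message}")
        fallback // Kakao API failures do not propagate to callers
    }
```

Publishing the fetch event would stay inside the `try` branch, so it only fires on success.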

In
`@pida-core/core-domain/src/main/kotlin/com/pida/flowerspot/FlowerSpotService.kt`:
- Around line 24-26: The searchFlowerSpots service currently forwards the raw
query into flowerSpotFinder.searchByStreetName which can allow blank/whitespace
queries to reach repository methods like findByStreetNameContaining("") and
trigger full-table scans; update searchFlowerSpots (and/or a small helper inside
FlowerSpotService) to trim the incoming query, check isEmpty after trimming, and
immediately return an empty List<FlowerSpot> for blank input, otherwise call
flowerSpotFinder.searchByStreetName with the trimmed query so downstream
finder/repository never receive empty strings.
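The blank-input guard can be sketched as a pure function; the lambda below is a hypothetical stand-in for `flowerSpotFinder.searchByStreetName`:

```kotlin
// Trim the query, short-circuit on blank input, and only then delegate,
// so the repository never receives "" (which would match every row).
fun searchFlowerSpots(
    query: String,
    searchByStreetName: (String) -> List<String>,
): List<String> {
    val trimmed = query.trim()
    if (trimmed.isEmpty()) return emptyList()
    return searchByStreetName(trimmed)
}
```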

In
`@pida-core/core-domain/src/main/kotlin/com/pida/history/SearchHistoryService.kt`:
- Around line 18-25: The log in handleSearchEvent currently includes PII
(userId) and uses an unnecessary return; change the logger.info call to avoid
logging event.userId (e.g., log only query and an authentication flag like
isAuthenticated = event.userId != null or omit user info) and remove the
redundant return before calling recordSearchHistory(query = event.query, userId
= event.userId) so the function simply calls recordSearchHistory without
returning its Unit; update references in handleSearchEvent and keep
recordSearchHistory signature unchanged.
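The PII-safe logging advice (here and in the KakaoMapClient comment above) can be sketched with two small helpers; the names and the 12-character fingerprint length are illustrative choices, not project conventions:

```kotlin
import java.security.MessageDigest

// Truncate a query so only a short, non-identifying prefix is ever logged.
fun truncateForLog(query: String, maxChars: Int = 8): String =
    if (query.length <= maxChars) query else query.take(maxChars) + "..."

// Non-reversible SHA-256 fingerprint of the query; a short prefix of the hex
// digest is enough to correlate repeated searches without storing raw PII.
fun hashForLog(query: String): String =
    MessageDigest.getInstance("SHA-256")
        .digest(query.toByteArray())
        .joinToString("") { "%02x".format(it) }
        .take(12)
```

Either value can then be passed to `logger.debug` in place of the raw query string.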

In `@pida-core/core-domain/src/main/kotlin/com/pida/landmark/LandmarkService.kt`:
- Around line 41-49: The addLandmarks function currently calls
landmarkFinder.existsByName for each NewLandmark, causing N+1 queries; change it
to batch-check names by adding a new method like
landmarkFinder.findExistingNames(names: List<String>): Set<String> (and
corresponding repository implementation) then in addLandmarks (function
addLandmarks) return early on empty input, collect landmarks.map { it.name },
call findExistingNames once to get existingNames, filter landmarks with
filterNot { it.name in existingNames }, and finally call
landmarkAppender.addAll(newLandmarks) if not empty.
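The batched de-duplication can be sketched as follows; `NewLandmark` and the `findExistingNames` lambda are hypothetical stand-ins for the real domain type and the proposed finder method:

```kotlin
// Hypothetical stand-in for the real domain type.
data class NewLandmark(val name: String)

// One batched name lookup replaces the per-item existsByName calls (N+1).
fun filterNewLandmarks(
    landmarks: List<NewLandmark>,
    findExistingNames: (List<String>) -> Set<String>,
): List<NewLandmark> {
    if (landmarks.isEmpty()) return emptyList() // early return: no query at all
    val existingNames = findExistingNames(landmarks.map { it.name })
    return landmarks.filterNot { it.name in existingNames }
}
```

The result would then be passed to `landmarkAppender.addAll` only when non-empty.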

In `@pida-core/core-domain/src/main/kotlin/com/pida/support/geo/Region.kt`:
- Around line 23-42: The toRegion() extension currently maps unknown strings to
Region.SEOUL; change this to surface unexpected inputs by adding an explicit
Region.UNKNOWN constant to the Region enum (or choose to throw
IllegalArgumentException) and update the toRegion() function to return
Region.UNKNOWN (or throw) in the else branch instead of defaulting to SEOUL;
update any callers that rely on the old default to handle Region.UNKNOWN (or
handle the exception) appropriately, and add unit tests for toRegion() covering
unknown/empty inputs.
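A minimal sketch of the UNKNOWN-variant option, with the enum values shortened for illustration (the real enum lives in com.pida.support.geo and covers more regions):

```kotlin
// Illustrative subset of the Region enum with an explicit UNKNOWN constant.
enum class Region { SEOUL, BUSAN, UNKNOWN }

// Unexpected input surfaces as UNKNOWN instead of silently defaulting to SEOUL.
fun String.toRegion(): Region =
    when (this.trim()) {
        "서울특별시", "서울" -> Region.SEOUL
        "부산광역시", "부산" -> Region.BUSAN
        else -> Region.UNKNOWN
    }
```

Callers can then decide per call site whether UNKNOWN is an error or simply filtered out.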

In
`@pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/landmark/LandmarkCoreRepository.kt`:
- Around line 31-42: The saveAll method builds LandmarkEntity objects and calls
landmarkJpaRepository.saveAll(entities) but then unnecessarily calls .map {
it.toLandmark() } whose result is discarded; remove the trailing .map {
it.toLandmark() } so saveAll simply calls
landmarkJpaRepository.saveAll(entities) (keep creation of LandmarkEntity and
GEOMETRY_FACTORY.createPoint usage intact) to avoid computing unused mapping in
saveAll.
🧹 Nitpick comments (12)
pida-clients/map-client/src/main/kotlin/com/pida/client/map/KakaoMapProperties.kt (1)

5-7: Add validation to fail fast on empty REST API key, and include the validation starter dependency.

An empty restApiKey will otherwise slip through and fail at runtime. Add @Validated + @NotBlank for startup validation, but this requires first adding the validation starter to build.gradle.kts.

♻️ Proposed changes

1. Update pida-clients/map-client/build.gradle.kts:

dependencies {
    implementation(libs.bundles.openfeign)
+   implementation(libs.spring_boot_starter_validation)

    implementation(project(":pida-core:core-domain"))
}

2. Update KakaoMapProperties.kt:

+import jakarta.validation.constraints.NotBlank
+import org.springframework.validation.annotation.Validated
+
+@Validated
 @ConfigurationProperties(prefix = "kakao")
 data class KakaoMapProperties(
-    val restApiKey: String,
+    @field:NotBlank
+    val restApiKey: String,
 )
pida-clients/map-client/src/main/resources/map.yml (1)

20-21: Consider documenting the new required env var.

KAKAO_REST_API_KEY is now required for startup; a quick note in local/dev setup docs (or a sample env file) will reduce onboarding friction.

pida-core/core-domain/src/main/kotlin/com/pida/flowerspot/FlowerSpotFinder.kt (1)

51-52: Inconsistent use of suspend modifier.

All other query methods in this class (readAll, readAllByRegion, readBy, findByCondition, readAllByLocation, readAllByLocationAndRegion) are suspend functions, but searchByStreetName is synchronous. This inconsistency could block the coroutine dispatcher thread when called from a coroutine context.

Consider making this method a suspend function to maintain consistency with the rest of the class:

♻️ Suggested fix
-    fun searchByStreetName(streetName: String): List<FlowerSpot> = flowerSpotRepository.findByStreetNameContaining(streetName)
+    suspend fun searchByStreetName(streetName: String): List<FlowerSpot> = flowerSpotRepository.findByStreetNameContaining(streetName)

This would also require updating FlowerSpotRepository.findByStreetNameContaining to be a suspend function.

pida-core/core-domain/src/main/kotlin/com/pida/flowerspot/FlowerSpotRepository.kt (1)

18-19: Inconsistent suspend modifier in repository interface.

All other methods in this interface are suspend functions, but findByStreetNameContaining is synchronous. This breaks the consistent async pattern established in the repository.

♻️ Suggested fix
-    fun findByStreetNameContaining(streetName: String): List<FlowerSpot>
+    suspend fun findByStreetNameContaining(streetName: String): List<FlowerSpot>
pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/landmark/LandmarkJpaRepository.kt (1)

8-8: Consider indexing strategy for name search.

The findByNameContainingAndDeletedAtIsNull method generates a LIKE '%name%' query which cannot leverage standard B-tree indexes. If the landmark table grows large and search performance degrades, consider:

  • A trigram index (PostgreSQL pg_trgm extension) for efficient substring matching
  • Full-text search if more sophisticated search is needed

This is not blocking for initial implementation but worth monitoring.

pida-core/core-api/src/main/kotlin/com/pida/presentation/v1/flowerspot/FlowerSpotController.kt (1)

59-63: Add @NotBlank validation to the query parameter to prevent unnecessary downstream operations.

Empty or blank search queries would trigger operations across all layers: event publishing, database queries, cache lookups, and external API calls. This can be prevented at the controller level.

Suggested change
+import jakarta.validation.constraints.NotBlank
+
 suspend fun searchFlowerSpot(
-    @RequestParam @Parameter(name = "query", description = "검색 키워드") query: String,
+    @RequestParam @NotBlank @Parameter(name = "query", description = "검색 키워드") query: String,
     @Parameter(hidden = true, required = false) user: User?,
 ): FlowerSpotSearchResponse {
pida-core/core-domain/src/main/kotlin/com/pida/history/SearchHistoryService.kt (1)

40-42: Consider inlining this delegation method.

The add method simply delegates to searchHistoryAppender.add without any additional logic. Consider calling searchHistoryAppender.add(searchHistory) directly from recordSearchHistory to reduce unnecessary indirection.

pida-clients/map-client/src/main/kotlin/com/pida/client/map/KakaoMapApi.kt (3)

23-36: Remove unnecessary consumes attribute for GET requests.

GET requests don't have a request body, so consumes is not applicable and may cause confusion. The @RequestParam annotations correctly handle query parameters.

Suggested fix
    @RequestMapping(
         method = [RequestMethod.GET],
         value = ["/v2/local/search/keyword"],
-        consumes = ["application/x-www-form-urlencoded;charset=utf-8"],
         produces = [MediaType.APPLICATION_JSON_VALUE],
     )
     fun searchKeyword(

46-56: Same issue: remove consumes for this GET endpoint.

Apply the same fix to convertToRegion.

Suggested fix
    @RequestMapping(
         method = [RequestMethod.GET],
         value = ["/v2/local/geo/coord2regioncode"],
-        consumes = ["application/x-www-form-urlencoded;charset=utf-8"],
         produces = [MediaType.APPLICATION_JSON_VALUE],
     )

10-11: Consider externalizing the base URL to configuration.

Hardcoding the URL works, but externalizing it to application properties (e.g., ${kakao.api.base-url}) would allow easier testing with mock servers or different environments.

pida-core/core-domain/src/main/kotlin/com/pida/landmark/LandmarkFinder.kt (1)

27-34: Consider sanitizing the query in the cache key.

Using the raw query directly in the cache key ("$SEARCH_KEY:$query") could cause issues with special characters (e.g., :, spaces, unicode). Consider normalizing or encoding the query for a safer cache key.

Example
suspend fun searchByName(query: String): List<Landmark> {
    val sanitizedKey = "$SEARCH_KEY:${query.trim().lowercase()}"
    return cacheAdvice.invoke(
        ttl = 180L,
        key = sanitizedKey,
        typeReference = object : TypeReference<List<Landmark>>() {},
    ) {
        landmarkRepository.findByNameContaining(query)
    }
}
pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/landmark/LandmarkCoreRepository.kt (1)

15-16: Remove unused tx dependency.

The TransactionTemplates dependency is injected but never used in this class.

Suggested fix
 @Repository
 class LandmarkCoreRepository(
     private val landmarkJpaRepository: LandmarkJpaRepository,
-    private val tx: TransactionTemplates,
 ) : LandmarkRepository {

@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In
`@pida-core/core-domain/src/main/kotlin/com/pida/flowerspot/FlowerSpotFacade.kt`:
- Around line 81-93: The returned landmarks are not ordered as required
(administrative district → subway station → landmark); update the code in
FlowerSpotFacade (around cachedLandmarks handling and the fetched path using
landmarkSearchClient.searchByKeyword) to sort landmarks using a comparator by
category/type before constructing the FlowerSpotSearchResult: for the cached
branch sort cachedLandmarks, and for the fetched branch map { it.toLandmark() }
then sort that list; ensure publishLandmarkFetchEvent is still called in both
branches and the sorted list is passed into FlowerSpotSearchResult.
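The requested ordering (administrative district, then subway station, then landmark) can be sketched with a category comparator; `Landmark` and `LandmarkCategory` below are illustrative stand-ins for the real domain types:

```kotlin
// Hypothetical category type; the real ordering key may live on the entity.
enum class LandmarkCategory { ADMINISTRATIVE_DISTRICT, SUBWAY_STATION, LANDMARK }

data class Landmark(val name: String, val category: LandmarkCategory)

// Comparator encoding the required response order.
val categoryOrder = compareBy<Landmark> {
    when (it.category) {
        LandmarkCategory.ADMINISTRATIVE_DISTRICT -> 0
        LandmarkCategory.SUBWAY_STATION -> 1
        LandmarkCategory.LANDMARK -> 2
    }
}

fun sortForResponse(landmarks: List<Landmark>): List<Landmark> =
    landmarks.sortedWith(categoryOrder)
```

`sortedWith` is stable, so landmarks within the same category keep their original (e.g. relevance) order.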

In
`@pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/landmark/LandmarkJpaRepository.kt`:
- Around line 10-19: The native query in LandmarkJpaRepository's
findByNameContainingAndDeletedAtIsNull uses to_tsquery which fails on raw user
input; replace to_tsquery('simple', :query) with plainto_tsquery('simple',
:query) in the `@Query` string so plain-text user queries (including special
characters) are safely parsed while keeping the same parameter name and return
type.
🧹 Nitpick comments (1)
pida-core/core-domain/src/main/kotlin/com/pida/landmark/LandmarkFinder.kt (1)

27-33: Normalize/trim query before building cache keys.
Trimming reduces cache churn from accidental whitespace and stabilizes keys.

♻️ Proposed diff
-    suspend fun searchByName(query: String): List<Landmark> =
-        cacheAdvice.invoke(
-            ttl = 0L,
-            key = "$SEARCH_KEY:$query",
-            typeReference = object : TypeReference<List<Landmark>>() {},
-        ) {
-            landmarkRepository.findByNameContaining(query)
-        }
+    suspend fun searchByName(query: String): List<Landmark> {
+        val normalizedQuery = query.trim()
+        return cacheAdvice.invoke(
+            ttl = 0L,
+            key = "$SEARCH_KEY:$normalizedQuery",
+            typeReference = object : TypeReference<List<Landmark>>() {},
+        ) {
+            landmarkRepository.findByNameContaining(normalizedQuery)
+        }
+    }

@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@pida-core/core-domain/src/main/kotlin/com/pida/landmark/LandmarkFinder.kt`:
- Around line 27-34: Normalize and validate the incoming query inside
searchByName before using it for caching or searching: trim whitespace and
convert to a canonical case (e.g., lowerCase) into a local variable, guard
against blank/empty queries (return an empty List<Landmark> immediately) to
avoid caching a key like "landmark:search:" and only pass the normalized query
into cacheAdvice.invoke (key using SEARCH_KEY and the normalized query) and into
landmarkRepository.findByNameContaining so both cache keys and repository calls
use the same sanitized value.

In
`@pida-storage/db-core/src/main/kotlin/com/pida/storage/db/core/landmark/LandmarkJpaRepository.kt`:
- Around line 23-33: The updateNameTsv() repository method is annotated with
`@Modifying` but lacks `@Transactional`, risking TransactionRequiredException if
called outside the current TransactionTemplate context; add the Spring
`@Transactional` annotation to the LandmarkJpaRepository.updateNameTsv() method so
the modifying native query always executes within a transaction and make sure
the import is from org.springframework.transaction.annotation.Transactional.

@@ -1 +1,2 @@
 * @char-yb
+* @LeeHanEum
Member commented:

I'd missed this one, thank you! 👍

Comment on lines +72 to +94
suspend fun search(
    query: String,
    user: User?,
): FlowerSpotSearchResult {
    publishSearchEvent(query, user)

    val flowerSpots = flowerSpotService.searchFlowerSpots(query)
    val cachedLandmarks = landmarkService.searchLandmarks(query)

    val landmarks =
        if (hasEnough(cachedLandmarks)) {
            // If enough landmarks are cached, respond immediately and reconcile asynchronously
            publishLandmarkFetchEvent(query, null)
            cachedLandmarks
        } else {
            // If not enough landmarks are cached, wait for the external API call
            val fetched = landmarkSearchClient.searchByKeyword(query)
            publishLandmarkFetchEvent(query, fetched)
            fetched.map { it.toLandmark() }
        }

    return FlowerSpotSearchResult(landmarks, flowerSpots)
}
Member commented:

Are the events published via eventPublisher.publishEvent in this method fired sequentially?
And if not, could you confirm whether they even need to be sequential?

Collaborator (Author) commented:

Good point!
Saving the search history does not always have to precede the landmark-reconciliation event that calls the external API.
They should be fine running non-sequentially!

@char-yb char-yb (Member) left a comment:

LGTM!
Thanks for all the hard work!

@LeeHanEum LeeHanEum merged commit e6b0c7a into develop Jan 27, 2026
2 checks passed
