CASSSIDECAR-226 Adding endpoint for verifying files post data copy during live migration #309
nvharikrishna wants to merge 4 commits into apache:trunk
Conversation
```java
String fullURI = seed != null
                 ? String.format("%s?%s=%s&%s=%d", requestURI, DIGEST_ALGORITHM_PARAM, digestAlgorithm, SEED_PARAM, seed)
```
One learning from the RestoreJob work is that a custom seed provides no benefit for data integrity validation; it only adds code complexity. I would drop the custom seed support to simplify the implementation and use the fixed seed 0, which also makes the client-server communication simpler.
I don't feel strongly about removing the seed support, but it seems ideal to do so.
I agree with the points about code complexity and simplifying the code. I can remove the seed support for live migration.
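With the seed dropped, the digest request URI collapses to a single form. A minimal sketch of the simplified construction (`DIGEST_ALGORITHM_PARAM` and the endpoint path used here are illustrative assumptions, not necessarily the sidecar's actual values):

```java
// Sketch: digest request URI without the custom seed parameter.
// The server side would always use the fixed seed 0 for XXHash32.
public class DigestUriExample
{
    static final String DIGEST_ALGORITHM_PARAM = "digestAlgorithm"; // assumed parameter name

    static String digestUri(String requestURI, String digestAlgorithm)
    {
        // No SEED_PARAM branch needed anymore: one format string for all requests
        return String.format("%s?%s=%s", requestURI, DIGEST_ALGORITHM_PARAM, digestAlgorithm);
    }

    public static void main(String[] args)
    {
        // prints /api/v1/live-migration/file-digest?digestAlgorithm=xxhash32
        System.out.println(digestUri("/api/v1/live-migration/file-digest", "xxhash32"));
    }
}
```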
Force-pushed from 9e9da17 to 39cb915.
```java
if (request.maxConcurrency() > liveMigrationConfiguration.maxConcurrentFileRequests())
{
    throw new IllegalArgumentException("Invalid maxConcurrency " + request.maxConcurrency() +
                                       ". It cannot be greater than " +
                                       liveMigrationConfiguration.maxConcurrentFileRequests());
}
```
FilesVerificationTaskManager handles maxConcurrency differently. Can you address the inconsistency or the duplication? One validation seems sufficient.
```java
if (request.maxConcurrency() > maxPossibleConcurrency)
{
    return Future.failedFuture(
    new LiveMigrationInvalidRequestException("max concurrency can not be more than " + maxPossibleConcurrency));
}
```
De-duplicated the checks.
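The de-duplicated check could live in a single shared helper that both the handler and the task manager call. A sketch with a hypothetical `validateMaxConcurrency` method (the method name and plain-exception signature are assumptions, not the PR's actual shape):

```java
// Sketch: one shared validation instead of two divergent checks.
public class ConcurrencyValidation
{
    // Hypothetical helper; the real code validates against
    // liveMigrationConfiguration.maxConcurrentFileRequests().
    static void validateMaxConcurrency(int requested, int maxAllowed)
    {
        if (requested > maxAllowed)
        {
            throw new IllegalArgumentException("Invalid maxConcurrency " + requested +
                                               ". It cannot be greater than " + maxAllowed);
        }
    }

    public static void main(String[] args)
    {
        validateMaxConcurrency(4, 8); // within the limit: no exception
        try
        {
            validateMaxConcurrency(16, 8); // over the limit
        }
        catch (IllegalArgumentException e)
        {
            System.out.println(e.getMessage());
        }
    }
}
```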
```java
{
    LOGGER.error("Cannot start a new files verification task for host {} " +
                 "while another live migration task is in progress.", host);
    context.fail(wrapHttpException(FORBIDDEN, throwable.getMessage(), throwable));
```
Should the status code be 409 Conflict instead of 403 Forbidden?
Forbidden typically means the caller lacks permission to perform an action, which is not the case here.
CONFLICT makes more sense. Updated it.
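The suggested mapping can be sketched as a small status-selection helper (hypothetical names; this assumes the "task already in progress" condition surfaces as an `IllegalStateException`, which may not match the PR's actual exception type):

```java
// Sketch: map the in-progress condition to 409 Conflict, not 403 Forbidden.
public class StatusMapping
{
    static final int CONFLICT = 409;
    static final int INTERNAL_SERVER_ERROR = 500;

    static int statusFor(Throwable cause)
    {
        // An in-progress live-migration task is a state conflict on the
        // resource, not a permissions problem, so 409 fits better than 403.
        return cause instanceof IllegalStateException ? CONFLICT : INTERNAL_SERVER_ERROR;
    }

    public static void main(String[] args)
    {
        // prints 409
        System.out.println(statusFor(new IllegalStateException("task in progress")));
    }
}
```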
```java
 * executed asynchronously to validate file integrity between source and destination nodes.
 */
@Singleton
public class FilesVerificationTaskManager
```
Should FilesVerificationTaskManager and DataCopyTaskManager have a common base class? There are several almost identical methods, e.g. getAllTasks(), getTask() and cancelTask().
I started by inheriting from LiveMigrationTaskManager, but had to switch to association. Since FilesVerificationTaskManager and DataCopyTaskManager are different types, Guice creates a separate instance for each, meaning each would get its own currentTasks map (the instance-to-task map in LiveMigrationTaskManager). That breaks the invariant that only one task of any type can be active per instance at a time (line 44). I don't want to use a static mutable map just to make inheritance work, so I used association.
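The association approach described above can be sketched as one shared state object that both managers delegate to, so there is a single map regardless of how many manager types Guice instantiates. All names and the simplified value type here are assumptions for illustration:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch: a single shared holder for the instance-to-task map. Binding this
// as a singleton and injecting it into both FilesVerificationTaskManager and
// DataCopyTaskManager preserves the "one live-migration task per instance"
// invariant without a static mutable map.
public class SharedTaskState
{
    private final ConcurrentMap<Integer, Object> currentTasks = new ConcurrentHashMap<>();

    // Returns true if the task was registered; false if another task
    // (of any type) is already active for the instance.
    boolean tryRegister(int instanceId, Object task)
    {
        return currentTasks.putIfAbsent(instanceId, task) == null;
    }

    void complete(int instanceId)
    {
        currentTasks.remove(instanceId);
    }

    public static void main(String[] args)
    {
        SharedTaskState state = new SharedTaskState();
        System.out.println(state.tryRegister(1, "dataCopy"));     // true
        System.out.println(state.tryRegister(1, "verification")); // false: task in progress
        state.complete(1);
        System.out.println(state.tryRegister(1, "verification")); // true
    }
}
```

`putIfAbsent` makes the register step atomic, which also avoids a check-then-act race between the two managers.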
```java
{
    return Collections.emptyList();
}
return Collections.singletonList(currentTasks.get(localInstance.id()));
```
localInstance.id() could potentially be removed at this step due to a race condition. Instead, let's get the value at line#99 and return based on whether the value is null or not.
Fetching the task only once and returning the value.
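The fetch-once pattern can be sketched with a plain `ConcurrentMap` (simplified types, not the sidecar's actual signatures): read the map a single time, so a concurrent removal between a `containsKey` check and a `get` can never produce a list containing null.

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch: single read of the shared map instead of check-then-get.
public class FetchOnce
{
    static <K, V> List<V> taskFor(ConcurrentMap<K, V> currentTasks, K id)
    {
        V task = currentTasks.get(id); // one read: a race-free snapshot
        return task == null ? Collections.emptyList() : Collections.singletonList(task);
    }

    public static void main(String[] args)
    {
        ConcurrentMap<String, String> tasks = new ConcurrentHashMap<>();
        System.out.println(taskFor(tasks, "i1")); // []
        tasks.put("i1", "verification");
        System.out.println(taskFor(tasks, "i1")); // [verification]
    }
}
```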
...main/java/org/apache/cassandra/sidecar/livemigration/LiveMigrationFilesVerificationTask.java (outdated, resolved)
...main/java/org/apache/cassandra/sidecar/livemigration/LiveMigrationFilesVerificationTask.java (outdated, resolved)
```java
if (digestAlgorithm.equalsIgnoreCase(MD5Digest.MD5_ALGORITHM))
{
    return Future.succeededFuture(new MD5Digest(digestResponse.digest));
}
else if (digestAlgorithm.equalsIgnoreCase(XXHash32Digest.XXHASH_32_ALGORITHM))
{
    return Future.succeededFuture(new XXHash32Digest(digestResponse.digest));
}
```
Should it be in DigestAlgorithmFactory?
Felt that Digest and DigestResponse are not closely related to DigestAlgorithm, so I did not place it in DigestAlgorithmFactory. Moved it to DigestResponse.
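The DigestResponse-side mapping can be sketched with simplified stand-in types (the real `MD5Digest`/`XXHash32Digest` live in the sidecar codebase and carry more than a string; everything below is illustrative):

```java
// Sketch: algorithm-name-to-Digest mapping, as moved into DigestResponse.
public class DigestResponseExample
{
    // Simplified stand-ins for the sidecar's digest types (assumed shapes)
    interface Digest {}

    static final class MD5Digest implements Digest
    {
        final String digest;
        MD5Digest(String digest) { this.digest = digest; }
    }

    static final class XXHash32Digest implements Digest
    {
        final String digest;
        XXHash32Digest(String digest) { this.digest = digest; }
    }

    static Digest toDigest(String algorithm, String digest)
    {
        if ("md5".equalsIgnoreCase(algorithm))
        {
            return new MD5Digest(digest);
        }
        if ("xxhash32".equalsIgnoreCase(algorithm))
        {
            return new XXHash32Digest(digest);
        }
        // Unknown algorithms are rejected rather than silently ignored
        throw new IllegalArgumentException("Unsupported digest algorithm: " + algorithm);
    }

    public static void main(String[] args)
    {
        System.out.println(toDigest("MD5", "abc").getClass().getSimpleName()); // MD5Digest
    }
}
```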
...r/src/main/java/org/apache/cassandra/sidecar/livemigration/FilesVerificationTaskManager.java (outdated, resolved)
...r/src/main/java/org/apache/cassandra/sidecar/livemigration/FilesVerificationTaskManager.java (outdated, resolved)
```java
Future<String> verifyDigest(InstanceFileInfo fileInfo)
{
    return getSourceFileDigest(fileInfo)
           .compose(digest -> {
               String path = localPath(fileInfo.fileUrl, instanceMetadata).toAbsolutePath().toString();
               return digestVerifierFactory.verifier(MultiMap.caseInsensitiveMultiMap().addAll(digest.headers()))
                                           .verify(path)
                                           .compose(verified -> Future.succeededFuture(path))
                                           .recover(cause -> Future.failedFuture(
                                           new DigestMismatchException(path, fileInfo.fileUrl, cause)));
           })
           .onSuccess(filePath -> LOGGER.debug("{} Verified file {}", logPrefix, fileInfo.fileUrl))
           .onFailure(cause -> LOGGER.error("{} Failed to verify file {}", logPrefix, fileInfo.fileUrl, cause));
}

private Future<Digest> getSourceFileDigest(InstanceFileInfo fileInfo)
{
    return Future.fromCompletionStage(sidecarClient.liveMigrationFileDigestAsync(new SidecarInstanceImpl(source, port),
                                                                                 fileInfo.fileUrl,
                                                                                 request.digestAlgorithm()))
           .compose(this::toDigest);
}
```
It makes one HTTP request to the source per file to get the digest. According to LiveMigrationConcurrencyLimitHandler, TOO_MANY_REQUESTS can be thrown. There is no retry implemented to handle it, due to SingleInstanceSelectionPolicy + the default retry policy. I think you want to add a custom retry policy for the applicable requests.
Besides having no retry and failing silently, one request per file already seems to guarantee slowness. Maybe we should revisit this design decision later.
After giving it some more thought, I feel the 429 status code (TOO_MANY_REQUESTS) is not appropriate and 503 (SERVICE_UNAVAILABLE) fits better: maxConcurrentFileRequests (used by LiveMigrationConcurrencyLimitHandler) is a server-side concurrency cap shared across all clients, and an individual client hitting this limit hasn't done anything wrong; the server is simply busy with other requests. 503 describes that situation.
SidecarClientProvider uses ExponentialBackoffRetryPolicy as the default retry policy, which does retry. If we suspect the default retry policy can be overridden, I can explicitly instantiate an ExponentialBackoffRetryPolicy and use it.
What do you think?
5xx status codes are considered server-side errors (unrelated to client activities). 4xx status codes are errors triggered by the client. IMO, 429 makes sense in this scenario: the error happens only when enough concurrent requests are issued by clients.
The default retry policy does not react to the 429 status code, so no retries are performed.
The concurrency limit used here (sidecarConfiguration.liveMigrationConfiguration().maxConcurrentFileRequests()) is global across all clients, not per-client. A client sending its very first request can get rejected because other clients have exhausted the pool.
429 means "the user has sent too many requests" — implying the specific client is at fault, which I feel misleading here. 503 could also mean "the server is unable to handle the request due to a temporary overload" - which I think describes the situation in this case.
If this were a per-client throttle, 429 would be correct. Since maxConcurrentFileRequests is a global limit protecting the source node, I feel 503 is more accurate.
Happy to change it to 429 if you feel differently though! Open to any other suggestions as well.
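Whichever status code is chosen, the retry side of the discussion can be sketched as a minimal exponential-backoff loop that treats 429 and 503 as retryable and fails fast on everything else. This is an illustration of the idea only, not the sidecar client's ExponentialBackoffRetryPolicy; the method name and signature are assumptions:

```java
import java.util.function.IntSupplier;

// Sketch: retry a request on 429/503 with exponential backoff.
public class BackoffRetry
{
    static int requestWithRetry(IntSupplier request, int maxAttempts, long baseDelayMillis)
    throws InterruptedException
    {
        int status = 0;
        for (int attempt = 0; attempt < maxAttempts; attempt++)
        {
            status = request.getAsInt();
            if (status != 429 && status != 503)
            {
                return status; // success, or an error we should not retry
            }
            Thread.sleep(baseDelayMillis << attempt); // exponential backoff: base * 2^attempt
        }
        return status; // exhausted attempts: surface the last retryable status
    }

    public static void main(String[] args) throws InterruptedException
    {
        // Simulated source node: busy twice, then succeeds
        int[] responses = { 503, 429, 200 };
        int[] i = { 0 };
        System.out.println(requestWithRetry(() -> responses[i[0]++], 5, 1L)); // 200
    }
}
```

A production policy would also add jitter and a delay cap, which the sketch omits for brevity.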
Force-pushed from 39cb915 to 639c95a.
CASSSIDECAR-226 Adding an endpoint for verifying files between source and destination post data copy.
This implementation uses a two-task approach (data copy + file verification) rather than inline digest verification during data copy (as originally proposed in CEP-40). This design choice is motivated by:
Here are the endpoint details:
Sample files verification task submission request:
It supports the XXHash32 algorithm too, and `seed` as an additional input in the payload.
Sample response:
```json
{
  "taskId": "b8e4f3d2-5c6b-5d9e-0f2g-3b4c5d6e7f8g",
  "statusUrl": "/api/v1/live-migration/files-verification-tasks/b8e4f3d2-5c6b-5d9e-0f2g-3b4c5d6e7f8g"
}
```
Fetching files verification task status. Sample response:
```json
{
  "id": "b8e4f3d2-5c6b-5d9e-0f2g-3b4c5d6e7f8g",
  "digestAlgorithm": "md5",
  "seed": null,
  "state": "COMPLETED",
  "source": "localhost1",
  "port": 9043,
  "filesNotFoundAtSource": 0,
  "filesNotFoundAtDestination": 0,
  "metadataMatched": 379,
  "metadataMismatches": 0,
  "digestMismatches": 0,
  "digestVerificationFailures": 0,
  "filesMatched": 323
}
```
Also made additional changes to ensure that only one of the data copy task or the file verification task can be executed at any point in time.