Conversation
Add a test that checks that "tcp server mode" works. Server meaning that this plugin acts as the TCP server and the langserver connects as TCP client.
Conflicts: tests/server.py
✅ Deploy Preview for sublime-lsp ready!
It's okay if #2739 is merged before this. I'll take care of the merge conflicts.
Also write the rest of the requests in terms of `Session.request`.
Would be useful to split it into smaller chunks, if possible. For example it would likely be possible to make… (lots of assumptions on my side, but that's what I feel). I was actually looking into that before, as I wanted to move `start_async` into a dedicated thread so that it doesn't block other plugins (kind of the opposite goal of yours, but also kind of similar, as I guess with asyncio it will also run on a dedicated thread). See #2863. It will be hard to review it properly with a big dump of code that refactors most of the code base.
```python
self._new_session = None
sublime.set_timeout_async(self._dequeue_listener_async)
self._window.status_message(message)

async def start(self, config: ClientConfig, listener: AbstractViewListener) -> None:
```
Not sure if relevant, and my understanding of asyncio is also poor at the moment, but not too long ago I added an `lsp_check_applicable` command that can be used by plugins to start/stop the session (or attach/detach) for a given view (used by LSP-copilot on changes to the .copilotignore file):
Lines 258 to 262 in 59c9d86
This uses start_async and also a few additional lines that were required to start a new session properly.
That code probably needs to be adapted to this rewrite as well.
- Add sublime.set_timeout executor wrapper
- Make all request handlers `async`
- Define a CancellableInflightStreamingRequest class that enables `async for` syntax
- Start inheriting DocumentSyncListener from sublime_aio.ViewEventListener (this one doesn't work yet)

The state is fairly broken at this point.
```python
# before:
self._on_sheet_opened(view.sheet(), uri, r).then(resolve)

# after:
sheet = view.sheet()
if sheet and (view := sheet.view()):
    view.settings().set('lsp_uri', uri)  # Preserve original URI given by the language server
    if r:
        center_selection(view, r)
    return view
return None
```
Probably wrong conflict resolution?
Sorry, could you point out what’s wrong? I remember spending time on this exact conflict and thinking this was the right resolution. But I’ve been wrong many times in the past.
It's not calling _on_sheet_opened anymore. The logic was moved there.
```python
self.workspace_diagnostics_pending_responses[identifier] = inflight_request = self.request(
    Request.workspaceDiagnostic(params),
    partial_results=True
)
try:
    async for partial_response in inflight_request:
        for diagnostic_report in partial_response['items']:
            uri = normalize_uri(diagnostic_report['uri'])
            version = diagnostic_report['version']
            # Skip if outdated
            if isinstance(version, int) and (session_buffer := self.get_session_buffer_for_uri(uri)) and \
                    version < session_buffer.last_synced_version:
                continue
            self.diagnostics_result_ids[(uri, identifier)] = diagnostic_report.get('resultId')
            if is_workspace_full_document_diagnostic_report(diagnostic_report):
                self.handle_diagnostics(uri, identifier, version, diagnostic_report['items'])
    await inflight_request
    self.workspace_diagnostics_pending_responses[identifier] = None
except ResponseException as e:
    if e.error['code'] == LSPErrorCodes.ServerCancelled:
        data = e.error.get('data')
        if is_diagnostic_server_cancellation_data(data) and data['retriggerRequest']:
            # Retrigger the request after a short delay, but don't reset the pending response variable for this
            # moment, to prevent new requests of this type in the meanwhile. The delay is used in order to prevent
            # infinite cycles of cancel -> retrigger, in case the server is busy.
            sublime.set_timeout_async(
                lambda: self._do_workspace_diagnostics_async(identifier),
                WORKSPACE_DIAGNOSTICS_RETRIGGER_DELAY
            )
```
Do we need to create such specific code to handle partial results?
The original logic seemed so much simpler: we just provided a handler for partial results and the server either used it or not.
With this new approach we need to write a quite different handler whenever partial results are possible, and in practice they are possible in many more cases (that we currently don't handle).
Is it not possible to provide a callback for handling results (basically like before) but also await the request? Would the awaited request block the thread, or would it work fine to hang there for potentially a long time?
Good questions. This code, handling diagnostics, is currently broken. I'm still working out the details for partial results.
The latest iteration on this is the CancellableInflightStreamingRequest class I wrote in core/sessions.py for partial results. It is returned from a new Session.stream request:
```python
async for partial_result in session.stream(Request("foobar")):
    # do something with partial_result
```

The (broken) code that you highlighted here also does a final `await inflight_request`, but I thought that was not intuitive. For partial requests, only a single `async for` loop should be necessary.
> The original logic seemed so much simpler, we just provide a handler for partial result and server either uses it or not.
I don't agree, respectfully. Having the logic split over various callback functions is a worse experience than having the logic be in a single function.
However, setting an on_partial_result callback will still be needed for server requests that do not stream their partial results in a `list[R]` fashion. The fact that the final partial result is an empty list allows the `async for` logic to detect when to end the loop. Moreover, I plan to keep the on_partial_result callback logic available.
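To illustrate the empty-batch sentinel described above, here is a minimal, self-contained sketch of an async-iterable streaming request. The queue-based implementation and the `StreamingRequest`/`feed` names are my assumptions; the real class is the CancellableInflightStreamingRequest in core/sessions.py:

```python
import asyncio


class StreamingRequest:
    """Async-iterable over partial result batches; an empty batch ends the stream."""

    def __init__(self) -> None:
        self._queue: asyncio.Queue = asyncio.Queue()

    def feed(self, items: list) -> None:
        # Called whenever a partial result (e.g. via $/progress) arrives.
        self._queue.put_nowait(items)

    def __aiter__(self) -> "StreamingRequest":
        return self

    async def __anext__(self) -> list:
        items = await self._queue.get()
        if not items:  # an empty batch signals the final response
            raise StopAsyncIteration
        return items


async def main() -> list:
    req = StreamingRequest()
    # Simulate the server sending two partial batches, then the final (empty) one.
    for batch in ([1, 2], [3], []):
        req.feed(batch)
    collected: list = []
    async for partial in req:
        collected.extend(partial)
    return collected


print(asyncio.run(main()))  # [1, 2, 3]
```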
```python
# before:
message = self._reader.readline().decode("utf-8", "replace")
if not message:
    continue

# after:
raw = await self._reader.readline()
if not raw:
    break
```
Why `break` instead of `continue`?
This code is pretty well battle-tested so changing the logic can be quite risky.
I tend to agree with you here, and I'll try to revert this. But my experience was that using `continue` here caused 100% CPU usage, suggesting an infinite loop. I'll reconsider this.
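For what it's worth, the 100% CPU usage is consistent with asyncio stream semantics: once EOF is reached, `StreamReader.readline()` returns `b""` immediately rather than blocking, so `continue` turns the read loop into a busy spin. A small sketch demonstrating the EOF behavior, using a directly-fed `StreamReader` instead of a real transport:

```python
import asyncio


async def demo() -> list:
    reader = asyncio.StreamReader()
    reader.feed_data(b"hello\n")
    reader.feed_eof()  # simulate the server closing the connection
    lines = []
    while True:
        raw = await reader.readline()
        if not raw:
            # At EOF, readline() returns b'' immediately and keeps doing so;
            # a `continue` here would spin this loop at 100% CPU forever.
            break
        lines.append(raw.decode())
    return lines


print(asyncio.run(demo()))  # ['hello\n']
```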
```python
async def parse_headers(reader: asyncio.StreamReader) -> dict[str, str] | None:
    headers: dict[str, str] = {}
    while True:
        line = await reader.readline()
        if not line:
            return None
        line = line.decode("ascii").strip()
        if not line:
            break
        key, value = line.split(":", 1)
        headers[key.strip().lower()] = value.strip()
    return headers
```
Can we not use http.client.parse_headers still? I feel that this implementation might be a bit naive compared to the original one.
Unfortunately no; there is no equivalent in asyncio. But JSON-RPC framing is simpler than HTTP anyway; as far as I know, only the Content-Length header matters. I do agree this manual header parser is less ideal than something from the stdlib.
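For reference, the manual parser from the diff can be exercised offline by feeding an `asyncio.StreamReader` directly; the `main` driver and the sample header bytes below are illustrative only:

```python
from __future__ import annotations

import asyncio


async def parse_headers(reader: asyncio.StreamReader) -> dict[str, str] | None:
    headers: dict[str, str] = {}
    while True:
        line = await reader.readline()
        if not line:
            return None  # EOF before the blank line: incomplete message
        line = line.decode("ascii").strip()
        if not line:
            break  # blank line terminates the header section
        key, value = line.split(":", 1)
        headers[key.strip().lower()] = value.strip()
    return headers


async def main() -> dict[str, str] | None:
    reader = asyncio.StreamReader()
    reader.feed_data(b"Content-Length: 52\r\nContent-Type: application/vscode-jsonrpc\r\n\r\n")
    return await parse_headers(reader)


print(asyncio.run(main()))
# {'content-length': '52', 'content-type': 'application/vscode-jsonrpc'}
```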
```python
from ..protocol import ConfigurationItem
from ..protocol import DocumentUri
from ..protocol import ExecuteCommandParams
from ..protocol import LSPAny
from .core.constants import ST_STORAGE_PATH
from .core.logging import exception_log
from .core.logging import debug
from .core.protocol import Notification
from .core.protocol import Request
```
All of these added imports can be removed (`debug` is unused, the rest are duplicates).
An idea: keep the original method names with the `_async` suffix while working on this task, and only rename in a dedicated PR after asyncio is actually merged. Why? Because renaming introduces a significant amount of changes.
Agreed, removing the `_async` suffixes was a mistake. I'll revert that.
Force-pushed from d716ebb to 2649a6f.
This is for now a draft PR where I'm trying to introduce asyncio and replace callback-based code with `async` functions. I'll keep a todo list here for the things that I believe need to be done before marking this work as ready for review. You are free to comment, make suggestions, and dispute during the draft, of course. Lots of functionality does not work at all at the moment of opening this work as a draft.

The main driver for doing this is to decrease the thread usage of this plugin from O(n) to O(1) threads, where `n` is the number of language servers running. The secondary driver is syntax sugar:

- Make `WindowManager.start` `async`
- Make `WindowManager.start` work
- `WindowManager`: replace by async lock
- `sublime.set_timeout_async` → `sublime_aio.call_soon_threadsafe`
- `def f() -> Promise[T]: ...` → `async def f() -> T: ...`
- `Promise.then(lambda x: ...)` → `x = await f()`
- `session.send_request_async(R(), lambda x: ...)` → `x = await session.request(R())` in a `try ... except ResponseException:` block
- `async for partial_result in session.stream(R()):` (caveat: only works for `list[...]`-style responses)
- `sublime_aio.run_coroutine(f())` for `Promise` objects
- `Promise.then` → `await promise`
- `Promise.all` → `asyncio.gather`
- `sublime.set_timeout_async(f, timeout_ms=1000)` → `await asyncio.sleep(1)`
- `threading.Lock`, or write very complicated queueing logic → `asyncio.Lock`
- `sublime_aio.run_coroutine`

The Plan
Make "most" code run on the sublime_aio thread
Most code is doing bookkeeping. This type of code used to run on the Sublime "async" thread. It should run on the asyncio loop thread.
Previously, the code attempted to make most code run on the ST async thread. We never really enforced this. We tried to make it clear that a function/method should be running on the ST async thread by suffixing it with `_async`.

If you have an `async def` coroutine function, then such a coroutine function is forced to run on the asyncio loop thread. So enforcement becomes automatic.

Make compute-intensive functions run on the Sublime "async" thread
The only compute-intensive code we deal with are parsing and emitting JSON. Only the JSON parser/emitter should run on the ST async thread.
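A hedged sketch of this idea, using `loop.run_in_executor` as a stand-in for pushing CPU-bound JSON work off the event loop (the actual plugin would dispatch to the ST async thread rather than the default thread pool used here):

```python
import asyncio
import json


async def parse_payload(raw: bytes) -> dict:
    # Offload CPU-bound JSON parsing to an executor thread so the
    # asyncio event loop stays responsive while large payloads parse.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, json.loads, raw)


async def main() -> dict:
    return await parse_payload(b'{"jsonrpc": "2.0", "id": 1}')


print(asyncio.run(main()))  # {'jsonrpc': '2.0', 'id': 1}
```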
Bridging code for existing LSP-* plugins
We made sure that all AbstractPlugin- and LspPlugin-related (class)methods ran on the ST async thread. With this pull request I want to now make sure all these (class)methods run on the sublime_aio thread.
Certain methods may also be marked `async` for LspPlugin, most notably `on_pre_start` and perhaps `on_initialize`.

The `Promise` object can be awaited, so older AbstractPlugin/LspPlugin-related functionality returning promises from request handlers should work (although currently untested).

TODO: work this out further (there's a `Promise.__await__` defined, so at least we can await promises).