chore: add tn_local extension with local storage schema#1339
Conversation
Caution: Review failed. The pull request is closed.
⛔ Files ignored due to path filters (1)
📒 Files selected for processing (8)
📝 Walkthrough

A new tn_local extension package is introduced to provide local database storage capabilities. The implementation includes a PostgreSQL schema wrapper, transaction management via pgxpool, a database operations layer, extension singleton initialization, and dependency version updates.
Sequence Diagrams

```mermaid
sequenceDiagram
    participant Init as Startup Init
    participant Ext as Extension
    participant LocalDB as LocalDB
    participant Pool as PoolDBWrapper
    participant PG as PostgreSQL
    Init->>Ext: GetExtension()
    Ext->>Ext: Create singleton (once)
    Init->>LocalDB: NewLocalDB(pool, logger)
    LocalDB->>Ext: configure(logger, pool, localDB)
    Init->>LocalDB: SetupSchema(ctx)
    LocalDB->>Pool: BeginTx(ctx)
    Pool->>PG: Start transaction
    PG-->>Pool: pgx.Tx
    Pool-->>LocalDB: sql.Tx
    LocalDB->>Pool: Execute(CREATE SCHEMA ext_tn_local...)
    Pool->>PG: Execute query
    PG-->>Pool: Result
    LocalDB->>Pool: Execute(CREATE TABLE streams...)
    LocalDB->>Pool: Execute(CREATE TABLE primitive_events...)
    LocalDB->>Pool: Execute(CREATE TABLE taxonomies...)
    LocalDB->>Pool: Commit(ctx)
    Pool->>PG: Commit transaction
    PG-->>Pool: Success
    LocalDB->>LocalDB: Log completion
```
```mermaid
sequenceDiagram
    participant Client as Application
    participant Wrapper as PoolDBWrapper
    participant Pool as pgxpool.Pool
    participant PG as PostgreSQL
    Client->>Wrapper: Execute(ctx, stmt, args)
    Wrapper->>Pool: Acquire(ctx)
    Pool-->>Wrapper: pgx.Conn
    Wrapper->>PG: Query(stmt, args)
    PG-->>Wrapper: Rows + TypeMap
    Wrapper->>Wrapper: Decode using oidToDataType
    Wrapper->>Wrapper: Build ResultSet (Columns, Rows, Status)
    Wrapper-->>Client: *sql.ResultSet
    Wrapper->>Pool: Release connection
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 5
🧹 Nitpick comments (3)
extensions/tn_local/test_init.go (1)
7-19: Keep this DB override out of production builds.

Because this lives in a normal `.go` file, `SetTestDB` becomes part of the runtime package API and any non-test code can bypass the real pool setup globally. If this is only a test seam, prefer a production stub where `getTestDB()` always returns `nil`, and move the setter behind a build tag or a `_test.go` file.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@extensions/tn_local/test_init.go` around lines 7 - 19, the test DB override (testOverrides, SetTestDB, getTestDB) is exposed in production; move the setter and test-only state behind a test-only build so non-test code cannot override the real pool. Create a production stub (in the same package) where getTestDB() always returns nil, and move SetTestDB and the testOverrides definition into a _test.go file or a file with a build tag like //go:build test so they are included only in test builds; ensure call sites use getTestDB() so production builds cannot inject a DB.
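As a sketch of the build-tag seam suggested above, the pattern looks like the following. The names mirror the review comment, but the `string` handle stands in for the real pool type, and in the actual package the `SetTestDB`/`testOverrides` parts would live in a file guarded by `//go:build test` while a production stub keeps `getTestDB()` returning `nil`:

```go
package main

import "fmt"

// --- would live in db_prod.go (no build tag): production stub ---
// func getTestDB() *string { return nil }

// --- would live in db_test_seam.go behind `//go:build test` ---
var testOverrides struct{ db *string }

// SetTestDB installs a test double; compiled only under the test tag,
// so production code can never inject a DB.
func SetTestDB(db *string) { testOverrides.db = db }

// getTestDB returns the override, or nil when none is set.
func getTestDB() *string { return testOverrides.db }

func main() {
	if getTestDB() == nil {
		fmt.Println("production path: no override")
	}
	fake := "fake-db"
	SetTestDB(&fake)
	fmt.Println("override:", *getTestDB())
}
```
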
extensions/tn_local/types.go (1)

4-8: Use typed constants for `stream_type` values.

`stream_type` is currently a raw string in request/response models. Adding a small enum-like type plus constants (`primitive`, `composed`) will reduce invalid value propagation and keep API/DB constraints aligned.

♻️ Suggested change

```diff
+type StreamType string
+
+const (
+	StreamTypePrimitive StreamType = "primitive"
+	StreamTypeComposed  StreamType = "composed"
+)
+
 type CreateStreamRequest struct {
 	DataProvider string `json:"data_provider"`
 	StreamID     string `json:"stream_id"`
-	StreamType   string `json:"stream_type"` // "primitive" or "composed"
+	StreamType   StreamType `json:"stream_type"`
 }
@@
 type StreamInfo struct {
 	DataProvider string `json:"data_provider"`
 	StreamID     string `json:"stream_id"`
-	StreamType   string `json:"stream_type"`
+	StreamType   StreamType `json:"stream_type"`
 	CreatedAt    int64 `json:"created_at"`
 }
```

Also applies to: 93-97
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@extensions/tn_local/types.go` around lines 4 - 8, create a new typed alias (e.g., type StreamType string) and declare constants for the allowed values (e.g., const StreamTypePrimitive StreamType = "primitive" and StreamTypeComposed StreamType = "composed"), then replace the raw string field types for stream_type in CreateStreamRequest and the other request/response structs noted in the file (the occurrences around the later model definitions) to use StreamType instead of string; update any JSON tags to remain `json:"stream_type"` and adjust any validation/usage sites to compare against the new constants (StreamTypePrimitive / StreamTypeComposed).

extensions/tn_local/pool_wrapper.go (1)
27-75: Extract shared execute/decode logic to avoid divergence.
`PoolDBWrapper.Execute` and `poolTxWrapper.Execute` duplicate the same result-building path. Please move this into a shared helper (accepting a query function + typemap source) to keep behavior consistent as this evolves.

Also applies to: 99-141
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@extensions/tn_local/pool_wrapper.go` around lines 27 - 75, PoolDBWrapper.Execute and poolTxWrapper.Execute duplicate the same result-building/decoding steps; extract that logic into a single helper (e.g., buildResultSetFromRows) that accepts a rows provider function (query func(ctx, stmt string, args ...any) (pgx.Rows, error) or similar) and a TypeMap provider (typemap func() pg.TypeMap) so callers (PoolDBWrapper.Execute and poolTxWrapper.Execute) only perform Acquire/Begin/Query and pass the rows+typemap sources into the helper; the helper should handle FieldDescriptions -> columns/oids, call pg.OidTypesMap using the provided typemap, iterate rows, call rows.Values(), pg.DecodeFromPG with oids and oidToDataType, check rows.Err(), and return *sql.ResultSet with CommandTag info so behavior remains identical for both wrappers.
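One possible shape for the shared helper, sketched with a minimal `Rows` interface standing in for `pgx.Rows` (the TypeMap-driven OID decoding of the real code is elided so the pattern is runnable standalone):

```go
package main

import "fmt"

// Rows is a tiny stand-in for pgx.Rows so both wrappers can share one
// result-building helper.
type Rows interface {
	Next() bool
	Values() ([]any, error)
	Columns() []string
	Err() error
}

type ResultSet struct {
	Columns []string
	Rows    [][]any
}

// buildResultSet is the single shared path: both the pool wrapper and the
// tx wrapper would only acquire a connection / begin a tx, run the query,
// and hand the resulting Rows to this helper.
func buildResultSet(rows Rows) (*ResultSet, error) {
	rs := &ResultSet{Columns: rows.Columns()}
	for rows.Next() {
		vals, err := rows.Values()
		if err != nil {
			return nil, err
		}
		rs.Rows = append(rs.Rows, vals)
	}
	if err := rows.Err(); err != nil {
		return nil, err
	}
	return rs, nil
}

// fakeRows is a test double standing in for a live pgx result.
type fakeRows struct {
	data [][]any
	i    int
}

func (f *fakeRows) Next() bool             { f.i++; return f.i <= len(f.data) }
func (f *fakeRows) Values() ([]any, error) { return f.data[f.i-1], nil }
func (f *fakeRows) Columns() []string      { return []string{"id", "name"} }
func (f *fakeRows) Err() error             { return nil }

func main() {
	rs, err := buildResultSet(&fakeRows{data: [][]any{{1, "a"}, {2, "b"}}})
	if err != nil {
		panic(err)
	}
	fmt.Println(rs.Columns, len(rs.Rows))
}
```
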
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@extensions/tn_local/extension.go`:
- Around line 18-37: The current use of sync.Once with a replaceable pointer
causes a race where SetExtension(ext) can be overwritten by GetExtension()'s
once.Do and existing callers can hold stale pointers; instead, remove the
pointer-replacement pattern and make a single shared Extension instance that is
created once and mutated in-place under a mutex: replace sync.Once/SetExtension
logic so GetExtension() always returns the same singleton object (create it
eagerly or on first access with once) and provide a guarded update method (e.g.,
Extension.UpdateFrom(ext) or a package-level SetEnabled/Configure function) that
acquires a sync.Mutex to modify fields of extensionInstance in place; update
callers (engineReadyHook/adminServerHook) to use the in-place updater rather
than swapping the extensionInstance pointer.
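A minimal sketch of the in-place singleton pattern the comment describes (field and method names are illustrative, not the extension's actual API):

```go
package main

import (
	"fmt"
	"sync"
)

type Extension struct {
	mu        sync.Mutex
	isEnabled bool
}

var (
	once              sync.Once
	extensionInstance *Extension
)

// GetExtension always returns the same instance; callers can never hold a
// stale pointer because the pointer itself is never replaced.
func GetExtension() *Extension {
	once.Do(func() { extensionInstance = &Extension{} })
	return extensionInstance
}

// SetEnabled mutates the singleton under the lock instead of swapping it.
func (e *Extension) SetEnabled(v bool) {
	e.mu.Lock()
	defer e.mu.Unlock()
	e.isEnabled = v
}

func (e *Extension) Enabled() bool {
	e.mu.Lock()
	defer e.mu.Unlock()
	return e.isEnabled
}

func main() {
	a := GetExtension()
	b := GetExtension()
	a.SetEnabled(true)
	fmt.Println(a == b, b.Enabled()) // same instance, update visible
}
```
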
In `@extensions/tn_local/handlers.go`:
- Around line 12-39: All exposed RPC handlers currently always return a "not
implemented" internal error which causes valid requests to hard-fail; update
each handler (CreateStream, InsertRecords, InsertTaxonomy, GetRecord, GetIndex,
ListStreams) to provide real implementations or remove/unregister their RPC
registration until implemented. For a fix: implement the actual logic in each
method (or delegate to existing helper functions/services used elsewhere in
Extension), ensure proper request validation, error handling and returned
response types match CreateStreamResponse, InsertRecordsResponse,
InsertTaxonomyResponse, GetRecordResponse, GetIndexResponse and
ListStreamsResponse, and write unit tests; alternatively, if you cannot
implement now, remove the registration of these methods from the RPC server
initialization so they are not exposed.
In `@extensions/tn_local/service.go`:
- Around line 15-29: The Methods() map currently exposes RPC names (e.g.,
"local.create_stream", "local.insert_records", etc.) backed by handlers in
extensions/tn_local/handlers.go that return jsonrpc.ErrorInternal("not
implemented"); remove or stop registering these method entries from
Extension.Methods() (or register them conditionally only when ext.isEnabled and
the handlers are implemented) so clients aren't advertised endpoints that will
hard-fail, and/or change Extension.Health() to report unavailable until
implementations exist (use ext.isEnabled gating) to avoid a healthy-looking
service surface that is not implemented.
In `@extensions/tn_local/tn_local.go`:
- Around line 40-54: If createIndependentConnectionPool(ctx, app.Service,
logger) returns a pool and you construct wrapper := NewPoolDBWrapper(pool)
before calling localDB.SetupSchema(ctx), ensure the underlying pool is closed
when SetupSchema fails to avoid leaking connections: after creating wrapper (or
pool) register a cleanup that closes the pool/wrapper on error (e.g., defer-like
behavior) so that if localDB.SetupSchema returns an error you call the
pool/wrapper Close method before returning the wrapped error; update the path
that returns fmt.Errorf("failed to setup local schema: %w", err) to first close
the created pool/wrapper (using the appropriate Close method on the pool or
NewPoolDBWrapper) to guarantee no connection leak.
- Around line 72-79: Replace the manual concatenated connStr and the hardcoded
"sslmode=disable" by calling pgxpool.ParseConfig with a minimal DSN and then
populate the parsed config fields from dbConfig: use
pgxpool.ParseConfig("postgres://localhost/postgres") (or similar), then set
cfg.ConnConfig.Host, cfg.ConnConfig.Port, cfg.ConnConfig.User,
cfg.ConnConfig.Password, cfg.ConnConfig.Database from dbConfig and map
dbConfig's SSL/TLS setting into the appropriate TLS/SSL fields on cfg (rather
than appending "sslmode" to a string); finally use the populated cfg (previously
poolConfig) when creating the pool and remove the unescaped password
concatenation logic.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 655e684a-e2ed-46be-8ba1-635bb55d01f4
⛔ Files ignored due to path filters (1)
`go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (13)
extensions/register.go
extensions/tn_local/constants.go
extensions/tn_local/db_ops.go
extensions/tn_local/extension.go
extensions/tn_local/handlers.go
extensions/tn_local/pool_wrapper.go
extensions/tn_local/schema.go
extensions/tn_local/service.go
extensions/tn_local/test_init.go
extensions/tn_local/tn_local.go
extensions/tn_local/tn_local_test.go
extensions/tn_local/types.go
go.mod
Actionable comments posted: 2
♻️ Duplicate comments (1)
extensions/tn_local/tn_local.go (1)
80-87: ⚠️ Potential issue | 🟠 Major

Stop constructing the pgx DSN with string concatenation.

This hardcodes `sslmode=disable`, leaves passwords unescaped, and appears to rely on `database=` rather than the documented `dbname` keyword. Build a URL or parsed config instead so TLS settings and special-character credentials work reliably.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@extensions/tn_local/tn_local.go` around lines 80 - 87, the current connStr is built by naive string concatenation (using "database=" and hardcoding "sslmode=disable") which mishandles special characters and TLS; instead construct a proper URL via net/url (use url.URL{Scheme: "postgres", User: url.UserPassword(dbConfig.User, dbConfig.Pass), Host: net.JoinHostPort(dbConfig.Host, dbConfig.Port), Path: "/" + dbConfig.DBName} and set any TLS/sslmode via the query string before calling pgxpool.ParseConfig), so pgxpool.ParseConfig receives a well-formed postgres URI (postgres://user:pass@host:port/dbname?sslmode=...) and you no longer concatenate passwords or use the undocumented "database=" keyword.
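A runnable sketch of the recommended URL construction, assuming illustrative `dbConfig` field names (the real config struct may differ):

```go
package main

import (
	"fmt"
	"net"
	"net/url"
)

type dbConfig struct {
	Host, Port, User, Pass, DBName, SSLMode string
}

// buildDSN escapes credentials and encodes sslmode in the query string,
// so special characters in the password cannot break the DSN.
func buildDSN(c dbConfig) string {
	u := url.URL{
		Scheme: "postgres",
		User:   url.UserPassword(c.User, c.Pass),
		Host:   net.JoinHostPort(c.Host, c.Port),
		Path:   "/" + c.DBName,
	}
	q := u.Query()
	q.Set("sslmode", c.SSLMode)
	u.RawQuery = q.Encode()
	return u.String()
}

func main() {
	dsn := buildDSN(dbConfig{
		Host: "localhost", Port: "5432",
		User: "kwild", Pass: "p@ss/word", // special chars get escaped
		DBName: "postgres", SSLMode: "disable",
	})
	fmt.Println(dsn)
	// The resulting URI can be handed to pgxpool.ParseConfig(dsn).
}
```
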
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@extensions/tn_local/tn_local_test.go`:
- Around line 95-103: The test must not copy the package-level sync.Once value;
remove the prevOnce := once and the assignment once = prevOnce. Keep backing
up/restoring extensionInstance (prev) as before, but reinitialize once to a
fresh sync.Once{} instead of copying it. Concretely, set once = sync.Once{} in
setup and in the t.Cleanup restore extensionInstance = prev and set once =
sync.Once{}; if you need to mark the restored Once as already done, call
once.Do(func(){}) on the new once rather than copying the old one.
In `@extensions/tn_local/tn_local.go`:
- Around line 30-33: The code has a data race on the extension singleton's
isEnabled field accessed by adminServerHook/Health and mutated in
engineReadyHook via configure; protect isEnabled by adding synchronization
(either a sync.Mutex/sync.RWMutex on the extension struct or use atomic.Bool)
and update configure to set the value with the writer (Lock/atomic.Store) and
update Health to read it with the reader (RLock/atomic.Load); ensure
GetExtension() still returns the singleton but all reads/writes of isEnabled go
through the chosen locking/atomic accessors.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 33ac6695-283d-4846-85a8-4f6a7f947395
📒 Files selected for processing (3)
extensions/tn_local/extension.go
extensions/tn_local/tn_local.go
extensions/tn_local/tn_local_test.go
extensions/tn_local/tn_local_test.go
Outdated
```go
// Reset for test isolation
prev := extensionInstance
prevOnce := once
extensionInstance = nil
once = sync.Once{}
t.Cleanup(func() {
	extensionInstance = prev
	once = prevOnce
})
```
🧩 Analysis chain
🌐 Web query:
In Go's sync package, can sync.Once be copied after first use, and why does govet report assignments of sync.Once as copylocks?
💡 Result:
No. sync.Once must not be copied after its first use (after any call that may observe/mutate its state, typically once.Do). The sync package docs state this explicitly: “A Once must not be copied after first use.” [1]
Why: the Once value contains internal synchronization state—currently a done flag plus a Mutex (and a noCopy marker) [2]. Copying it after use duplicates that internal state, so different copies may disagree about whether initialization has happened and/or interact with duplicated mutex state, which can lead to incorrect behavior (e.g., executing the function more than once or other synchronization failures).
Why govet reports sync.Once assignments as copylocks: the -copylocks analyzer flags any code paths that copy values containing locks or "do-not-copy" markers. It warns because copying "a value containing a lock, such as sync.Mutex or sync.WaitGroup, may cause both copies to malfunction" [3], and its detection includes:
- structs that contain lock-like fields recursively, and
- structs that contain `sync.noCopy` (special-cased) [4].

Since sync.Once contains both `_ noCopy` and `m Mutex` [2], assigning it (or passing/returning it by value) is exactly the pattern govet is designed to catch.
Sources
[1] sync package docs (type Once): “A Once must not be copied after first use.” (pkg.go.dev)
[2] sync.Once implementation (once.go): _ noCopy, done atomic.Bool, m Mutex. (go.dev)
[3] copylocks analyzer docstring: copying values containing locks may cause malfunction. (go.dev)
[4] copylocks detection logic: flags types implementing sync.Locker by pointer and special-cases sync.noCopy. (go.dev)
Citations:
- 1: https://pkg.go.dev/sync
- 2: https://go.dev/src/sync/once.go
- 3: https://go.dev/src/cmd/vendor/golang.org/x/tools/go/analysis/passes/copylock/copylock.go
- 4: https://go.dev/src/cmd/vendor/golang.org/x/tools/go/analysis/passes/copylock/copylock.go
Remove the sync.Once copy in test reset.
The test attempts to back up and restore the package-level `once` variable using `prevOnce := once` and `once = prevOnce`, which triggers the copylocks warning. `sync.Once` must not be copied after its first use: it contains internal state (a noCopy marker, a Mutex, and a done flag) that cannot be safely duplicated.
Instead, reinitialize `once` to a fresh `sync.Once{}` in the cleanup, and if restoring a previous instance, call `once.Do(func() {})` to set its done flag without re-executing initialization logic:
Suggested fix

```diff
 func TestExtensionSingleton(t *testing.T) {
 	// Reset for test isolation
 	prev := extensionInstance
-	prevOnce := once
 	extensionInstance = nil
 	once = sync.Once{}
 	t.Cleanup(func() {
 		extensionInstance = prev
-		once = prevOnce
+		once = sync.Once{}
+		if prev != nil {
+			once.Do(func() {})
+		}
 	})
```
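The `once.Do(func() {})` restore trick from the suggested fix can be checked in isolation; a fresh Once fired with a no-op will skip any later initializer:

```go
package main

import (
	"fmt"
	"sync"
)

// Instead of copying a used sync.Once (which govet's copylocks check
// forbids), create a fresh Once and fire it with a no-op so later Do
// calls are skipped.

// freshDoneOnce returns a brand-new Once whose done flag is already set.
func freshDoneOnce() *sync.Once {
	o := new(sync.Once)
	o.Do(func() {}) // burn the Once without running real init
	return o
}

func main() {
	o := freshDoneOnce()
	ran := false
	o.Do(func() { ran = true }) // skipped: the Once has already fired
	fmt.Println("initializer ran:", ran)
}
```
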
🧰 Tools
🪛 GitHub Check: lint
[failure] 102-102:
copylocks: assignment copies lock value to once: sync.Once contains sync.noCopy (govet)
[failure] 97-97:
copylocks: assignment copies lock value to prevOnce: sync.Once contains sync.noCopy (govet)
extensions/tn_local/tn_local.go
Outdated
```go
func adminServerHook(server *rpcserver.Server) error {
	ext := GetExtension()
	server.RegisterSvc(ext)
	return nil
```
🧩 Analysis chain
🌐 Web query:
In github.com/trufnetwork/kwil-db, are AdminServer hooks guaranteed to run only after EngineReady hooks, and can the admin JSON-RPC server accept requests before EngineReady hooks have finished?
💡 Result:
No.

- There is no guarantee that "AdminServer hooks" (i.e., anything that runs as part of bringing up the admin RPC server) run only after `EngineReady` hooks. The only ordering guarantee documented in-repo is that `EngineReadyHook` runs "on startup after the engine (and all of its extensions) have been initialized"; it does not state anything about the admin server lifecycle or any relative ordering with it. [1]
- Yes, the admin JSON-RPC server can accept requests before `EngineReady` hooks have finished, because the node starts the admin JSON-RPC server in its own goroutine as part of `server.Start(...)` (i.e., it is started concurrently with other subsystems, with no indication here that it waits on "engine ready" hook completion): it logs "starting admin json-rpc server" and immediately calls `s.jsonRPCAdminServer.Serve(groupCtx)` in an `errgroup.Go(...)`. [2]
Sources:
[1] extensions/hooks docs (EngineReadyHook description) (pkg.go.dev)
[2] app/node/node.go showing concurrent startup of the admin JSON-RPC server (github.com)
Citations:
- 1: https://pkg.go.dev/github.com/kwilteam/kwil-db/extensions/hooks
- 2: https://github.com/kwilteam/kwil-db/blob/v0.10.2/app/node/node.go
Add synchronization to prevent a data race on isEnabled.

The admin server can accept requests concurrently with EngineReady hooks. `adminServerHook()` registers the singleton at lines 30-33, and `engineReadyHook()` mutates it at lines 67-70 via `configure()`, which writes to `isEnabled` unsafely. `Health()` reads `isEnabled` without synchronization. Add a mutex or atomic bool to protect `isEnabled` from concurrent access.

This also applies to lines 67-70.
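A minimal sketch of the `atomic.Bool` variant (names are illustrative): the enabled flag can be flipped by the engine-ready hook while the admin server's health handler reads it, so both sides go through atomic accessors.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

type Extension struct {
	enabled atomic.Bool
}

// configure is called from the engine-ready hook; Store is safe against
// concurrent readers.
func (e *Extension) configure(enabled bool) { e.enabled.Store(enabled) }

// Health is served by the admin server; Load pairs with Store above.
func (e *Extension) Health() bool { return e.enabled.Load() }

func main() {
	ext := &Extension{}
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); ext.configure(true) }() // engineReadyHook
	go func() { defer wg.Done(); _ = ext.Health() }()    // admin request
	wg.Wait()
	fmt.Println("enabled:", ext.Health())
}
```
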
d27a30d to 4d95c05 (Compare)
resolves: https://github.com/truflation/website/issues/3476
Summary by CodeRabbit
Release Notes
New Features
Chores