1d demo #655
base: master
Conversation
This script simulates and visualizes pressure fields in a one-dimensional heterogeneous medium using k-Wave.
* Introduces `create_storage_variables` to define the creation of the sensor data storage structure in `kWaveSimulation`, which is passed to the simulator
* Updates `kWaveSimulation` to use `create_storage_variables` and adds a setting for nonlinearity
* Fixes shape handling in `ksource`
* Improves 1D simulation logic in `kspaceFirstOrder1D`
* Addresses minor issues in the filters and signals utilities
* Updates dependencies to include tqdm
Walkthrough

Adds a 1D k-space first-order solver, comprehensive storage and sensor-data helpers, integration points in kWaveSimulation, an example notebook/script and README, small utility/test tweaks, and a dependency addition. Introduces kWaveSimulation.is_nonlinear and new helper exports; no public removals.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant kWaveSimulation
    participant create_storage_variables
    participant kspace_first_order_1D
    participant extract_sensor_data
    participant Backend as GPU/CPU
    User->>kWaveSimulation: provide kgrid, medium, source, sensor, options
    kWaveSimulation->>kWaveSimulation: validate inputs, prepare state
    kWaveSimulation->>create_storage_variables: request storage, flags, record
    create_storage_variables-->>kWaveSimulation: return flags, record, sensor_data, Nt_info
    kWaveSimulation->>kspace_first_order_1D: invoke 1D solver with prepared state
    kspace_first_order_1D->>Backend: select NumPy or CuPy backend
    loop each time step
        kspace_first_order_1D->>kspace_first_order_1D: update fields (gradients, sources, PML)
        kspace_first_order_1D->>extract_sensor_data: extract/store sensor outputs
        extract_sensor_data-->>kspace_first_order_1D: return updated sensor_data
    end
    kspace_first_order_1D-->>User: return sensor_data and results
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Codecov Report

❌ Patch coverage is

@@ Coverage Diff @@
## master #655 +/- ##
==========================================
- Coverage 73.95% 68.61% -5.35%
==========================================
Files 50 52 +2
Lines 7000 7727 +727
Branches 1338 1568 +230
==========================================
+ Hits 5177 5302 +125
- Misses 1280 1847 +567
- Partials 543 578 +35
Flags with carried forward coverage won't be shown.
Actionable comments posted: 17
🤖 Fix all issues with AI agents
In @examples/ivp_1D_simulation/ivp_1D_simulation.ipynb:
- Around line 29-31: The notebook sentence "Now import dependencies, and party
of library which are required." contains a typo; update the cell text to read
"Now import dependencies, and parts of the library that are required." so it
uses "parts" and "that" for correct grammar and clarity; locate and edit the
markdown/source string in the cell where this sentence appears in
ivp_1D_simulation.ipynb.
- Around line 8-10: Update the markdown cell string "First install the pacakage
using the latests version." to correct the typos: change "pacakage" to "package"
and "latests" to "latest" so the sentence reads "First install the package using
the latest version."
- Around line 131-134: The inline comment for the mask array is incorrect: the
values in mask are in meters but the comment says [mm]; update the comment or
units to match the data. Locate the mask creation (variables named mask and
sensor.mask) and either change the comment from "[mm]" to "[m]" or convert the
numeric values to millimeters if you intend to keep "[mm]"; ensure sensor.mask
remains assigned to the corrected mask.
In @examples/ivp_1D_simulation/ivp_1D_simulation.py:
- Around line 62-65: The inline comment for the mask values is wrong: the array
mask = np.array([-10e-3, 10e-3]) is in meters, not millimeters; update the
comment to reflect the correct unit (e.g., change "[mm]" to "[m]" next to the
mask definition) so the documentation matches the values assigned to mask and
sensor.mask.
In @kwave/kspaceFirstOrder1D.py:
- Around line 505-506: The conditional uses the wrong attribute name: replace
the incorrect reference to options.source_ux with k_sim.source_ux; specifically
in the block that checks source ux (the lines with if (k_sim.source_ux is not
False and t_index < xp.shape(source.ux)[1]): if options.source_ux >= t_index:)
change the second condition to if k_sim.source_ux >= t_index so it references
the Simulation (k_sim) attribute instead of the nonexistent options.source_ux.
- Around line 77-84: The function dotdict_to_cpu currently calls isinstance(val,
cp.ndarray) which fails when cp is None; update dotdict_to_cpu to first guard
that cp is not None (or that cp has attribute ndarray) before using isinstance
with cp.ndarray, e.g. check "if cp is not None and isinstance(val, cp.ndarray):"
and otherwise skip the CuPy check (or fall back to checking numpy.ndarray) so no
TypeError occurs when CuPy isn't installed; keep the verbose print and setattr
behavior intact for the CuPy branch.
In @kwave/kWaveSimulation_helper/create_storage_variables.py:
- Around line 52-57: The barycentric coordinates are being computed using the
triangle vertex coordinates instead of the query interpolation_points; update
the bc calculation in create_storage_variables.py to subtract
tri.transform[simplex_indices, 2] from the interpolation_points for each simplex
and then multiply by tri.transform[simplex_indices, :2] (i.e., use
interpolation_points in the dot operand rather than tri.points[simplex_indices,
:]) so bc is computed from the query points; keep simplex_indices and the
returned tri.points[simplex_indices, :] as the vertices.
- Around line 29-33: The barycentric calculation uses the simplex vertices
instead of the query points; change the computation of bc to subtract
tri.transform[indices, 2] from interpolation_points (not tri.points[indices, :])
and multiply by tri.transform[indices, :2] accordingly so the matrix shapes
align (i.e., use interpolation_points - tri.transform[indices, 2] with the same
transpose arrangement used before). Update the expression that builds bc (which
currently references tri.points[indices, :]) to use interpolation_points so bc
contains barycentric coords for the query points.
- Around line 544-549: The code contains an unreachable and syntactically
incorrect block: the "if kgrid.dim == 1:" branch inside an else that only runs
when kgrid.dim != 1 should be removed, and the reshape call itself is wrong;
remove the entire unreachable if-block (including the sensor_x = np.reshape(...)
line) or, if the intent was to handle the kgrid.dim == 1 case, move that logic
out of the else and change the call to np.reshape(mask, (-1, 1)) so the first
argument is the array and the second is the shape; update the code around
create_storage_variables to either delete the unreachable branch or correctly
locate and fix the reshape using np.reshape(mask, (-1, 1)) where sensor_x is
set.
- Around line 85-86: The early return when flags.time_rev is not None returns
only flags causing unpack errors; update the branch in create_storage_variables
to return the same 4-tuple shape as the other path by constructing/initializing
record, sensor_data, and num_recorded_time_points (or appropriate placeholders)
and return (flags, record, sensor_data, num_recorded_time_points) instead of
just flags so callers can always unpack consistently.
In @kwave/kWaveSimulation_helper/extract_sensor_data.py:
- Around line 300-304: The logic for accumulating minimum corner pressures is
inverted: in the else branch where you currently call
np.maximum(sensor_data[cuboid_index].p_min, result), replace that with
np.minimum so the running p_min actually tracks the smallest values; keep the
file_index == 0 initialization (sensor_data[cuboid_index].p_min = result) and
only use np.minimum in the subsequent accumulation when flags.record_p_min is
true.
- Around line 537-547: The 3D branch inside the else (when file_index > 0)
updates ux_min and uy_min but omits updating uz_min; fix by adding a line
mirroring the ux/uy updates to set sensor_data.uz_min =
np.minimum(sensor_data.uz_min, np.sum(uz_sgz[record.tri] * record.bc, axis=1))
in the dim == 3 block (use the same record.tri and record.bc pattern and the
uz_sgz array).
- Around line 435-452: The np.interp calls for 1D Cartesian sensor masks use the
wrong argument order; change each np.interp(record.grid_x, p, record.sensor_x)
(and similar instances) to np.interp(record.sensor_x, record.grid_x, p) so that
x=query points is record.sensor_x, xp=data coordinates is record.grid_x, and
fp=data values is p; apply the same swap for all occurrences referenced (e.g.,
those setting sensor_data.p_max, sensor_data.p_min, and other 1D interpolations)
so subsequent np.minimum/np.maximum operations use correctly interpolated
arrays.
In @kwave/kWaveSimulation.py:
- Around line 210-214: The time_rev method has an infinite recursion because it
returns self.time_rev at the end; change that terminal return to a concrete
boolean (likely False) so the method returns False when no sensor or when sensor
is NotATransducer. Update the method time_rev to return False instead of
self.time_rev and keep the conditional branch that checks self.sensor,
isinstance(NotATransducer), options.simulation_type.is_elastic_simulation(), and
sensor.time_reversal_boundary_data to decide the True case.
In @tests/matlab_test_data_collectors/python_testers/getWin_test.py:
- Line 34: The logging call incorrectly passes multiple values to logging.log
without a format string; replace it with a properly formatted call such as
logging.info("N=%s type=%s param=%s rotation=%s symmetric=%s square=%s", N,
type_, param, rotation, symmetric, square) or use logging.log(logging.INFO,
"N=%s type=%s param=%s rotation=%s symmetric=%s square=%s", N, type_, param,
rotation, symmetric, square) so the variables (N, type_, param, rotation,
symmetric, square) are supplied as arguments to a single format string.
- Around line 36-44: The conditional uses N.all() > 1 which compares a boolean
to 1; change it to (N > 1).all() so the intent "all elements > 1" is correctly
checked for array inputs (locate the conditional around N, get_win call in
getWin_test.py). Also address the changed test coverage: decide whether inputs
with N <= 1 should still exercise get_win or instead be asserted to
raise/skip—either remove the conditional to run the test for those cases or add
explicit asserts/skips for N <= 1 so test behavior is intentional and
documented.
🧹 Nitpick comments (5)
kwave/ksource.py (1)
79-80: Consider squeezing both sides of the comparison for consistency.

The validation now compares `p0.shape` against `np.squeeze(kgrid.k).shape`, which removes singleton dimensions from the grid. For consistent comparison, consider also squeezing `p0` to ensure that singleton dimensions are handled uniformly on both sides.

♻️ Proposed enhancement

```diff
-        if self.p0.shape != np.squeeze(kgrid.k).shape:
+        if np.squeeze(self.p0).shape != np.squeeze(kgrid.k).shape:
             # throw an error if p0 is not the correct size
             raise ValueError("source.p0 must be the same size as the computational grid.")
```

examples/ivp_1D_simulation/ivp_1D_simulation.py (1)
75-76: Consider defaulting to CPU for broader compatibility.

The example hardcodes `is_gpu_simulation=True`. While the solver falls back to NumPy if CuPy is unavailable, defaulting to CPU (`is_gpu_simulation=False`) or detecting GPU availability would make this example more accessible to users without GPU support.

kwave/kWaveSimulation.py (1)
528-600: Consider consolidating duplicate storage variable creation logic.

The elastic and non-elastic branches (lines 531-563 and 564-600) are nearly identical, differing only in how `sensor_x` is computed. This could be refactored to reduce duplication.

♻️ Suggested consolidation

```python
# Compute sensor_x based on code type
if not self.blank_sensor:
    if is_elastic_code:
        sensor_x = self.sensor.mask[0, :]
    elif k_dim == 1:
        sensor_x = self.sensor_x
    else:
        sensor_x = self.sensor.mask[0, :]
else:
    sensor_x = None

record_old = copy.deepcopy(self.record)
values = dotdict({
    "sensor_x": sensor_x,
    "sensor_mask_index": self.sensor_mask_index,
    "record": record_old,
    "sensor_data_buffer_size": self.s_source_pos_index
})

if self.record.u_split_field:
    self.record_u_split_field = self.record.u_split_field

flags = dotdict({
    "use_sensor": self.use_sensor,
    "blank_sensor": self.blank_sensor,
    # ... rest of flags
})

flags, self.record, self.sensor_data, self.num_recorded_time_points = create_storage_variables(
    self.kgrid, self.sensor, opt, values, flags, self.record)
```

kwave/kspaceFirstOrder1D.py (2)
3-4: Remove duplicate import.

`SimulationExecutionOptions` is imported twice (lines 3 and 23).

♻️ Remove duplicate

```diff
 from typing import Union

-from kwave.options.simulation_execution_options import SimulationExecutionOptions
-
 try:
     import cupy as cp
 except ImportError:
```

Also applies to: 23-24
474-474: Prefer `range()` over `xp.arange()` for loop iteration.

Using `xp.arange()` creates an array (potentially on GPU with CuPy), which may cause unexpected behavior when iterating. Use Python's built-in `range()` for the loop index.

♻️ Proposed fix

```diff
     # start time loop
-    for t_index in tqdm(xp.arange(index_start, index_end, index_step, dtype=int)):
+    for t_index in tqdm(range(index_start, index_end, index_step)):
```
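The `np.squeeze` nitpick above hinges on singleton dimensions surviving on only one side of the comparison; a quick demonstration with made-up shapes:

```python
import numpy as np

p0 = np.ones((5, 1))
k = np.ones((1, 5, 1))

# Squeezing only one side still leaves a mismatch...
one_side = p0.shape == np.squeeze(k).shape               # (5, 1) vs (5,)

# ...while squeezing both sides compares the essential shapes
both_sides = np.squeeze(p0).shape == np.squeeze(k).shape  # (5,) vs (5,)
```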
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)

- examples/ivp_1D_simulation/ivp_1D_simulation.ipynb
- examples/ivp_1D_simulation/ivp_1D_simulation.py
- kwave/kWaveSimulation.py
- kwave/kWaveSimulation_helper/__init__.py
- kwave/kWaveSimulation_helper/create_absorption_variables.py
- kwave/kWaveSimulation_helper/create_storage_variables.py
- kwave/kWaveSimulation_helper/extract_sensor_data.py
- kwave/ksource.py
- kwave/kspaceFirstOrder1D.py
- kwave/utils/filters.py
- kwave/utils/signals.py
- pyproject.toml
- tests/matlab_test_data_collectors/python_testers/getWin_test.py
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: waltsims
Repo: waltsims/k-wave-python PR: 625
File: docs/get_started/first_simulation.rst:147-154
Timestamp: 2025-09-11T16:12:04.876Z
Learning: In k-Wave Python, kspaceFirstOrder2D function takes separate simulation_options and execution_options parameters, not a single options dict. Use SimulationOptions and SimulationExecutionOptions classes from kwave.options, with snake_case attributes like pml_inside instead of MATLAB-style PMLInside.
Applied to files:

- examples/ivp_1D_simulation/ivp_1D_simulation.py
- kwave/kspaceFirstOrder1D.py
- examples/ivp_1D_simulation/ivp_1D_simulation.ipynb
🧬 Code graph analysis (7)
tests/matlab_test_data_collectors/python_testers/getWin_test.py (2)
- kwave/utils/signals.py (1): get_win (58-324)
- tests/matlab_test_data_collectors/python_testers/utils/record_reader.py (1): expected_value_of (15-20)

kwave/kWaveSimulation_helper/extract_sensor_data.py (2)
- kwave/data.py (1): numpy (107-108)
- kwave/kWaveSimulation.py (1): compute_directivity (226-238)

examples/ivp_1D_simulation/ivp_1D_simulation.py (7)
- kwave/data.py (2): numpy (107-108), Vector (7-49)
- kwave/kgrid.py (5): kWaveGrid (14-701), Nx (157-161), dx (178-182), x_size (353-357), makeTime (455-495)
- kwave/kmedium.py (1): kWaveMedium (11-232)
- kwave/ksensor.py (1): kSensor (9-81)
- kwave/kspaceFirstOrder1D.py (1): kspace_first_order_1D (87-662)
- kwave/options/simulation_execution_options.py (1): SimulationExecutionOptions (13-308)
- kwave/options/simulation_options.py (1): SimulationOptions (46-348)

kwave/kWaveSimulation_helper/create_absorption_variables.py (1)
- kwave/kWaveSimulation.py (1): equation_of_state (156-167)

kwave/utils/signals.py (1)
- kwave/kgrid.py (1): size (297-301)

kwave/kWaveSimulation_helper/__init__.py (2)
- kwave/kWaveSimulation_helper/create_storage_variables.py (1): create_storage_variables (69-114)
- kwave/kWaveSimulation_helper/extract_sensor_data.py (1): extract_sensor_data (8-714)

kwave/kWaveSimulation.py (7)
- kwave/kWaveSimulation_helper/create_storage_variables.py (1): create_storage_variables (69-114)
- kwave/recorder.py (1): Recorder (9-143)
- kwave/kmedium.py (1): is_nonlinear (107-114)
- kwave/ksensor.py (2): mask (38-42), mask (45-46)
- kwave/ktransducer.py (1): mask (305-311)
- kwave/utils/dotdictionary.py (1): dotdict (4-57)
- kwave/options/simulation_options.py (1): is_axisymmetric (41-42)
🔇 Additional comments (13)
kwave/utils/filters.py (1)
591-593: LGTM! Shape compatibility fix for 1D arrays. The added check correctly handles the case where the window is 2D but the input array is 1D by squeezing the window to match the input shape. This ensures proper broadcasting during the FFT-based smoothing operation.
tests/matlab_test_data_collectors/python_testers/getWin_test.py (1)
16-16: LGTM! Logging statement correctly formatted. The logging statement properly uses format string syntax to log the iteration index.
kwave/kWaveSimulation_helper/create_absorption_variables.py (1)
17-18: Confirmed: Callers safely handle None return values for lossless equation of state.

The single caller (`kWaveSimulation.py`, line 1461) assigns the four return values to instance attributes that are already initialized to `None`. The absorption variables (`absorb_nabla1`, `absorb_nabla2`, `absorb_tau`, `absorb_eta`) are used only within equation-of-state-specific code paths (e.g., in `kspaceFirstOrder1D.py`), so returning `None` for the lossless case is safe since those code paths won't execute for lossless media.

pyproject.toml (1)
36-36: The tqdm version 4.67.1 is valid and secure. Version 4.67.1 is the latest release on PyPI and not yanked. While a prior vulnerability (CVE-2024-34062) affected tqdm, it was fixed in v4.66.3, so this version is not affected.
kwave/kWaveSimulation_helper/__init__.py (1)
1-13: LGTM! The new imports for `create_storage_variables` and `extract_sensor_data` follow the existing pattern, and `__all__` is correctly updated to expose these new utilities.
59-59: LGTM! The extended type hint to accept `Tuple[int]` aligns with the updated handling logic below.

145-154: LGTM! Using `np.asarray(N).size` handles scalar and array-like inputs uniformly, and the explicit `int(N)` cast ensures type consistency for downstream 1D window creation.

310-310: LGTM! Consistent use of `np.asarray(N).size` in the window enlargement branch.
1-6: LGTM on CuPy import fallback. The try/except pattern for optional CuPy import with `cp = None` fallback is appropriate.
147-154: LGTM! The new `is_nonlinear` property correctly delegates to the medium's nonlinearity check.

166-167: Good catch fixing the typo. "loseless" → "lossless" is a valid spelling correction.
kwave/kspaceFirstOrder1D.py (1)
87-92: LGTM on function signature. The function follows the established pattern with separate `simulation_options` and `execution_options` parameters. Based on learnings, this aligns with k-Wave Python conventions.
516-521: Unused buffer variable and unusual `del` on local parameter.

`sensor_data_buffer` is created but never used (suppressed with noqa). Additionally, `del sensor_data_buffer_size` removes the local parameter, which is an unusual pattern. This appears to be an incomplete implementation. Consider whether the buffer should be stored in `sensor_data` or returned, and whether the `del` statement should instead set a flag or attribute to False.
```python
if dim == 1:
    if file_index == 0:
        sensor_data.p_max = np.interp(record.grid_x, p, record.sensor_x)
    else:
        sensor_data.p_max = np.maximum(sensor_data.p_max, np.interp(record.grid_x, p, record.sensor_x))
else:
    if file_index == 0:
        sensor_data.p_max = np.sum(p[record.tri] * record.bc, axis=1)
    else:
        sensor_data.p_max = np.maximum(sensor_data.p_max, np.sum(p[record.tri] * record.bc, axis=1))

# store the minimum acoustic pressure
if flags.record_p_min:
    if dim == 1:
        if file_index == 0:
            sensor_data.p_min = np.interp(record.grid_x, p, record.sensor_x)
        else:
            sensor_data.p_min = np.minimum(sensor_data.p_min, np.interp(record.grid_x, p, record.sensor_x))
```
Incorrect np.interp argument order for 1D Cartesian sensor mask.
The np.interp signature is np.interp(x, xp, fp) where x = query points, xp = data x-coordinates (monotonic), fp = data values. The current calls appear to have inverted arguments, e.g., np.interp(record.grid_x, p, record.sensor_x) should likely be np.interp(record.sensor_x, record.grid_x, p).
This pattern repeats in multiple locations (lines 437, 439, 450, 452, 462, 470, 484, 499, 511, 526, 539, 552).
🐛 Example fix for record_p_max 1D case
```diff
 if flags.record_p_max:
     if dim == 1:
         if file_index == 0:
-            sensor_data.p_max = np.interp(record.grid_x, p, record.sensor_x)
+            sensor_data.p_max = np.interp(record.sensor_x, record.grid_x, np.real(p))
         else:
-            sensor_data.p_max = np.maximum(sensor_data.p_max, np.interp(record.grid_x, p, record.sensor_x))
+            sensor_data.p_max = np.maximum(sensor_data.p_max, np.interp(record.sensor_x, record.grid_x, np.real(p)))
```
+ sensor_data.p_max = np.maximum(sensor_data.p_max, np.interp(record.sensor_x, record.grid_x, np.real(p)))Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In @kwave/kWaveSimulation_helper/extract_sensor_data.py around lines 435 - 452,
The np.interp calls for 1D Cartesian sensor masks use the wrong argument order;
change each np.interp(record.grid_x, p, record.sensor_x) (and similar instances)
to np.interp(record.sensor_x, record.grid_x, p) so that x=query points is
record.sensor_x, xp=data coordinates is record.grid_x, and fp=data values is p;
apply the same swap for all occurrences referenced (e.g., those setting
sensor_data.p_max, sensor_data.p_min, and other 1D interpolations) so subsequent
np.minimum/np.maximum operations use correctly interpolated arrays.
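A quick check of the `np.interp(x, xp, fp)` contract; the data here is made up for illustration:

```python
import numpy as np

grid_x = np.linspace(0.0, 1.0, 5)   # data coordinates (must be monotonic)
p = grid_x ** 2                      # pressure samples at those coordinates
sensor_x = np.array([0.25, 0.75])    # Cartesian sensor query points

# Correct order: query points first, then data coordinates, then data
# values. Swapping xp and fp runs silently when the lengths happen to
# match, which is why the bug is easy to miss in review.
p_at_sensors = np.interp(sensor_x, grid_x, p)
```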
```diff
 def time_rev(self) -> bool:
     if self.sensor and not isinstance(self.sensor, NotATransducer):
         return not self.options.simulation_type.is_elastic_simulation() and self.sensor.time_reversal_boundary_data is not None
-    return self.userarg_time_rev
+    return self.time_rev
```
Critical: Infinite recursion in time_rev property.
The property returns self.time_rev on line 213, which will recursively call itself, causing a RecursionError. Based on the commented-out code paths, this should likely return False.
🐛 Proposed fix

```diff
 @property
 @deprecated(version="0.4.1", reason="Use TimeReversal class instead")
 def time_rev(self) -> bool:
     if self.sensor and not isinstance(self.sensor, NotATransducer):
         return not self.options.simulation_type.is_elastic_simulation() and self.sensor.time_reversal_boundary_data is not None
-    return self.time_rev
+    return False
```

🤖 Prompt for AI Agents
+ return False🤖 Prompt for AI Agents
In @kwave/kWaveSimulation.py around lines 210 - 214, The time_rev method has an
infinite recursion because it returns self.time_rev at the end; change that
terminal return to a concrete boolean (likely False) so the method returns False
when no sensor or when sensor is NotATransducer. Update the method time_rev to
return False instead of self.time_rev and keep the conditional branch that
checks self.sensor, isinstance(NotATransducer),
options.simulation_type.is_elastic_simulation(), and
sensor.time_reversal_boundary_data to decide the True case.
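A property that returns itself recurses unconditionally; a toy class (unrelated to the k-Wave code) makes the failure mode obvious:

```python
class Demo:
    @property
    def flag(self):
        # `self.flag` re-invokes this very property, so the access
        # never terminates and Python raises RecursionError
        return self.flag


try:
    Demo().flag
    recursed = False
except RecursionError:
    recursed = True
```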
Actionable comments posted: 6
🤖 Fix all issues with AI agents
In @kwave/kspaceFirstOrder1D.py:
- Around line 375-386: The hasattr check for the s_source_pos_index block is
wrong: it tests for 's_source_sig_index' but then accesses
k_sim.s_source_pos_index, so when s_source_pos_index exists and
s_source_sig_index does not the conversion is skipped; change the guard to check
hasattr(k_sim, 's_source_pos_index') and ensure the block sets
k_sim.s_source_pos_index = np.squeeze(k_sim.s_source_pos_index) - int(1) (mirror
the pattern used for u_source_pos_index and p_source_pos_index) so the 1→0
indexing conversion is applied correctly.
- Around line 354-363: When options.use_kspace is True, source_kappa may remain
undefined if the complex conditional on k_sim.source.p_mode/u_mode is false;
always initialize source_kappa in that branch (e.g., default to 1.0 or to the
same spectral taper as kappa) so later uses (lines referencing source_kappa
around the additive source handling) cannot raise UnboundLocalError. Locate the
block with symbols options.use_kspace, kappa, source_kappa, c_ref, kgrid.k,
kgrid.dt, and k_sim.source.* and set source_kappa unconditionally inside the
options.use_kspace branch (then override it only when the additive-source
condition is true).
- Around line 561-579: The bug is that source_mat = source_mat.fill(0) assigns
None (since ndarray.fill mutates in-place), so replace that assignment with an
in-place call or reinitialize the array; e.g. call source_mat.fill(0) without
assignment or use source_mat = xp.zeros_like(source_mat) before populating it;
ensure this change is made in the 'additive' case where source_mat,
k_sim.p_source_pos_index, and k_sim.source.p[k_sim.p_source_sig_index, t_index]
are used so subsequent FFT/ifft with source_kappa and xp will operate on a valid
array.
- Around line 653-664: sensor.record_start_index is 1-based while t_index is
0-based, causing an off-by-one skip; convert record_start_index to 0-based
before comparing and computing file_index. Replace uses of
sensor.record_start_index in the condition and file_index calculation (the block
gating on k_sim.use_sensor and t_index, and the file_index = t_index -
sensor.record_start_index) so they use a zero-based value (e.g.,
sensor.record_start_index - 1) or compute a local start_index0 =
sensor.record_start_index - 1 and use t_index >= start_index0 and file_index =
t_index - start_index0 when calling extract_sensor_data.
- Around line 504-510: The GPU path fails because scipy.fft is being called on
CuPy arrays and some medium.absorb_nabla1/2 arrays remain NumPy; pick a single
FFT backend at init (alias it as xp.fft or fft_backend) and replace direct
scipy.fft.ifft/fftn/fft calls (e.g., the ifft used in the ux_sgx update that
references ddx_k, ddx_k_shift_pos, kappa, p_k) with the selected backend's fft
functions so they operate on xp arrays, and ensure cp_fft import is removed or
used consistently; additionally, when using_gpu=True convert
medium.absorb_nabla1 and medium.absorb_nabla2 (and any similar medium arrays) to
xp arrays so multiplications with rho0 and duxdx use the same array type.
🧹 Nitpick comments (2)
kwave/kspaceFirstOrder1D.py (2)
1-31: Remove duplicate import and avoid split CuPy import patterns.

`SimulationExecutionOptions` is imported twice (Line 3 and Line 25). Also, you import `cupy` at module import time and dynamically in `get_array_module`, which makes GPU enablement harder to reason about.

Proposed cleanup

```diff
-from kwave.options.simulation_execution_options import SimulationExecutionOptions
-
 try:
     import cupy as cp
     from cupy import fft as cp_fft
 except ImportError:
     cp = None
     cp_fft = None
@@
-from kwave.options.simulation_execution_options import SimulationExecutionOptions
 from kwave.options.simulation_options import SimulationOptions
```
34-57: Make `get_array_module()` deterministic and consistent with the `cp` global.

Right now `get_array_module()` re-imports `cupy` and returns a local `cp`, while other helpers (`to_gpu`, `dotdict_to_cpu`) use the module-global `cp`. It's easy to end up with `using_gpu=True` but helpers referencing a different `cp` symbol.

One way to align behavior

```diff
 def get_array_module(verbose: bool = False):
 @@
-    # Try to import cupy and check for a GPU
+    # If module import already failed, don't try to use it
+    if cp is None:
+        return np, False
+
+    # Check for a visible GPU
     try:
-        import cupy as cp
-        # this returns the number of visible GPUs
-        ngpus = cp.cuda.runtime.getDeviceCount()
+        ngpus = cp.cuda.runtime.getDeviceCount()
         if ngpus > 0:
             return cp, True
     except Exception:
         pass
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)

- examples/ivp_1D_simulation/README.md
- kwave/kspaceFirstOrder1D.py
✅ Files skipped from review due to trivial changes (1)
- examples/ivp_1D_simulation/README.md
🔇 Additional comments (3)
kwave/kspaceFirstOrder1D.py (3)
89-95: API shape matches repo conventions (separate options objects).

Signature uses `simulation_options: SimulationOptions` and `execution_options: SimulationExecutionOptions` (not a single options dict), consistent with existing k-Wave Python patterns. Based on learnings, this is the right direction.

490-504: No issue. The repository explicitly requires Python 3.10+ as declared in `pyproject.toml` (`requires-python = ">=3.10"`), so the `match`/`case` syntax is fully supported.

511-514: Verify the condition `if options.source_ux >= t_index:` at line 513.

The comparison direction appears backwards for a "start at index" gate. Typically, time gates check `t_index >= start_index` or `start_index <= t_index`. If `options.source_ux` represents a start index, the condition should be `t_index >= options.source_ux`. Additionally, verify the type and semantics of `options.source_ux` to ensure the `>=` comparison is appropriate.
Corrects the use of `np.minimum` for `p_min` in `extract_sensor_data`, adds the missing `uz_min` assignment for 3D, and fixes several issues in `kspaceFirstOrder1D`: uses `xp.fft` instead of `scipy.fft` for compatibility, corrects `source_mat` zeroing, and fixes a conditional for `source_ux`. Also improves logging in `getWin_test.py` and clarifies units in `ivp_1D_simulation.py`. Adds a dimension check for the `ky_norm` calculation in `create_storage_variables`.
Actionable comments posted: 16
🤖 Fix all issues with AI agents
In @kwave/kspaceFirstOrder1D.py:
- Around line 79-86: The function dotdict_to_cpu currently calls isinstance(val,
cp.ndarray) which will raise if cp is None; guard the check by first confirming
cp is available (e.g., if cp is not None and hasattr(cp, "ndarray")) or use
getattr(cp, "ndarray", None) as the second arg to isinstance, then only attempt
val.get() when the cp ndarray type exists; ensure the code still handles numpy
arrays unchanged and return obj as before.
- Around line 354-360: The condition incorrectly checks hasattr on options
instead of on k_sim so the k-space source correction never triggers; update the
if to check k_sim for source attributes (e.g., use hasattr(k_sim, 'source_p') /
hasattr(k_sim, 'source_ux') and hasattr(k_sim.source, 'p_mode') /
hasattr(k_sim.source, 'u_mode')) and ensure proper grouping of the OR/AND logic;
when either k_sim.source.p_mode == 'additive' or k_sim.source.u_mode ==
'additive' set source_kappa = scipy.fft.ifftshift(xp.cos(...)) as before.
In @kwave/kWaveSimulation_helper/create_storage_variables.py:
- Around line 517-521: The local variable sensor_data_buffer is created but
never used; either remove the placeholder creation (delete the np.zeros line and
the noqa comment and the else branch that deletes sensor_data_buffer_size), or
actually store/use the buffer (e.g., assign it to a persistent attribute like
sensor.sensor_data_buffer or return it from create_storage_variables) so it
isn’t unused; update references to sensor_data_buffer_size accordingly and
ensure any callers expect the new attribute/return if you choose to implement
the buffer.
- Around line 85-86: The early return in create_storage_variables when
flags.time_rev is not None returns only flags, breaking the expected 4-tuple
contract; update the branch so that when flags.time_rev is not None you either
return the full tuple (flags, record, sensor_data, num_recorded_time_points)
with appropriately initialized record/sensor_data/num_recorded_time_points or
raise a clear exception, ensuring callers of create_storage_variables receive
the consistent return shape; locate the conditional on flags.time_rev and adjust
the return to use the same variable names (record, sensor_data,
num_recorded_time_points) as in the normal path.
- Around line 540-550: The nested check for kgrid.dim == 1 inside the else
branch is unreachable; remove the inner conditional and its unreachable body
(the sensor_x = np.reshape((mask, (-1, 1))) line) and, if needed, handle sensor
mask reshaping in the appropriate kgrid.dim == 1 branch (where record.grid_x is
assigned) or in the general branch with a correct condition; update references
to sensor_x and mask so reshaping only occurs where kgrid.dim == 1 (or under the
correct dimensional check).
- Around line 29-33: In gridDataFast2D and gridDataFast3D the barycentric coords
are computed relative to the wrong points and the function returns vertex
coordinates instead of vertex indices; change the barycentric calculation to use
interpolation_points (i.e., compute tri.transform[indices,
:2].dot((interpolation_points - tri.transform[indices, 2]).T) or the 3D
equivalent) and return tri.simplices[indices] (vertex index arrays) instead of
tri.points[indices, :], so downstream code like p[record.tri] * record.bc works
correctly.
In @kwave/kWaveSimulation_helper/extract_sensor_data.py:
- Around line 170-171: In record_u_max, the 1D initialization uses
sensor_mask_index directly while the update uses np.unravel_index, causing
inconsistent indexing; change the initialization that sets sensor_data.ux_max
(and analogous assignments) to use np.unravel_index(sensor_mask_index,
ux_sgx.shape)[0] (or the appropriate axis index) when indexing ux_sgx so both
initialization and updates use the same unraveled index logic and shapes.
- Line 429: np.interp calls have their arguments swapped: occurrences like
xp.interp(record.grid_x, p, record.sensor_x) pass p as xp and sensor positions
as fp; change these to xp.interp(record.sensor_x, record.grid_x, p) (and use
xp.squeeze/np.real as appropriate), and apply the same fix for all
interpolations of pressure/velocity fields (e.g. assignments to sensor_data.p,
sensor_data.u, sensor_data.ux, etc.) so the call signature is xp.interp(x_query,
x_coords, y_values) with x_query = record.sensor_x, x_coords = record.grid_x,
y_values = p (or the corresponding field).
- Around line 315-325: The code incorrectly uses cuboid_indices (an (N,3) array)
directly for 1D/2D indexing; unpack the coordinate columns like the 3D branch.
For dim==1, index ux_sgx with cuboid_indices[:,0] (or
tuple([cuboid_indices[:,0]])) and reshape to cuboid_size_x; for dim==2, index
ux_sgx and uy_sgy with cuboid_indices[:,0] and cuboid_indices[:,1] (or use
tuple(cuboid_indices[:,:2].T)) and reshape to cuboid_size_x; keep the dim==3
branch as-is using cuboid_indices[:,0], cuboid_indices[:,1],
cuboid_indices[:,2]. Ensure you use the same variables ux_sgx, uy_sgy, uz_sgz
and cuboid_size_x when assigning into sensor_data[...] at file_index.
- Around line 343-364: The 2D split-field block incorrectly writes to
sensor_data.ux_split_p without indexing by cuboid_index, causing an
AttributeError when sensor_data is a list; update the write to use
sensor_data[cuboid_index].ux_split_p (matching the other writes like
sensor_data[cuboid_index].ux_split_s and
sensor_data[cuboid_index].uy_split_p/uy_split_s) so all four assignments in the
dim == 2 branch consistently index the cuboid entry.
- Around line 99-165: The block guarded by record_u_split_field assumes dim is 2
or 3 but currently runs for dim==1 and later raises RuntimeError("Wrong
dimensions"); update the logic so you either (a) early-guard: check if
record_u_split_field and dim >= 2 before entering the split-field branch and
skip/ignore split processing for 1D, or (b) implement an explicit 1D handler
that sets the split outputs to appropriate zero/placeholder arrays; modify the
condition around record_u_split_field and dim and adjust assignments to
sensor_data.*_split_* so they are only written when dim is 2 or 3 (refer to the
record_u_split_field flag, dim variable, and the existing RuntimeError branch)
to avoid the confusing runtime error for 1D simulations.
- Around line 254-262: The cuboid_size_x computation in extract_sensor_data.py
incorrectly omits the +1 offset for dim==1 and dim==2; update the dim==1 branch
(cuboid_size_x) to add +1 to the x-length calculation and keep the y-size as 1,
and update the dim==2 branch to add +1 to both components so both use
record.cuboid_corners_list[..., cuboid_index] difference plus 1 (consistent with
the dim==3 branch and the np.arange(...+1) usage).
- Around line 63-68: The current running RMS update in the block guarded by
flags.record_p_rms is incorrect; change it to accumulate sum-of-squares instead
of trying to combine RMS values: when file_index == 0 set sensor_data.p_rms =
p[...]**2 (treat it as sum_of_squares), otherwise do sensor_data.p_rms +=
p[...]**2 (add new squared sample using the same
np.unravel_index(np.squeeze(sensor_mask_index), np.shape(p), order='F')
indexing), and compute the final RMS outside this loop as
np.sqrt(sensor_data.p_rms / num_samples) (or use Welford if you prefer online
variance), keeping the symbol names flags.record_p_rms, file_index,
sensor_data.p_rms, p and sensor_mask_index to locate the changes.
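The sum-of-squares accumulation recommended for `p_rms` above can be sketched in isolation (hypothetical helper names, not the repo's code; the update/finalize split mirrors the suggested fix):

```python
import numpy as np


def update_rms_accumulator(acc, sample, file_index):
    """Accumulate the sum of squared samples; acc holds sum-of-squares, not RMS."""
    if file_index == 0:
        return sample**2
    return acc + sample**2


def finalize_rms(acc, num_samples):
    """Convert the accumulated sum-of-squares into a true RMS."""
    return np.sqrt(acc / num_samples)


# Two time steps of a two-point sensor: values [1, 2] then [3, 4]
samples = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
acc = None
for i, s in enumerate(samples):
    acc = update_rms_accumulator(acc, s, i)

rms = finalize_rms(acc, len(samples))
# RMS of [1, 3] is sqrt(5); RMS of [2, 4] is sqrt(10)
```

Averaging squares and taking one square root at the end is correct, whereas combining per-step RMS values directly is not, since the square root is nonlinear.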
🧹 Nitpick comments (7)
kwave/kWaveSimulation_helper/extract_sensor_data.py (2)
3-6: Unused import: `cupy` is imported but never used.

The `cp` variable is imported but not referenced anywhere in this module. If GPU support via CuPy is intended, the FFT operations should use `xp.fft` (passed as parameter) rather than `np.fft`. Otherwise, remove the unused import.

🔧 Remove unused import

```diff
 import numpy as np
-try:
-    import cupy as cp
-except ImportError:
-    cp = None
```
223-238: Redundant `(file_index - 0)` expressions.

The expression `(file_index - 0)` is equivalent to `file_index`. This appears to be debug/placeholder code that should be cleaned up.

kwave/kWaveSimulation_helper/create_storage_variables.py (2)
60-67: `OutputSensor` class is incomplete and unused.

The `OutputSensor` class defines only class-level attributes (`flags`, `x_shift_neg`, `p`) set to `None`, but the actual function creates `sensor_data` as either a `dotdict` or list of `dotdict`. This class appears to be vestigial or incomplete.
406-414: Use `elif` for mutually exclusive dimension checks.

Multiple `if kgrid.dim ==` checks should use `elif` since dimensions are mutually exclusive. This is a minor efficiency issue but also a clarity concern.

♻️ Use elif for dimension checks

```diff
     if record_old.u_max:
         # pre-allocate the velocity fields based on the number of dimensions in the simulation
         if kgrid.dim == 1:
             sensor_data.ux_max = np.zeros([num_sensor_points,])
-        if kgrid.dim == 2:
+        elif kgrid.dim == 2:
             sensor_data.ux_max = np.zeros([num_sensor_points,])
             sensor_data.uy_max = np.zeros([num_sensor_points,])
-        if kgrid.dim == 3:
+        elif kgrid.dim == 3:
             sensor_data.ux_max = np.zeros([num_sensor_points,])
             sensor_data.uy_max = np.zeros([num_sensor_points,])
             sensor_data.uz_max = np.zeros([num_sensor_points,])
```

Apply throughout the function.
kwave/kspaceFirstOrder1D.py (3)
3-3: Duplicate import of `SimulationExecutionOptions`.

`SimulationExecutionOptions` is imported twice: at line 3 and line 25.

🔧 Remove duplicate import

```diff
 from typing import Union

-from kwave.options.simulation_execution_options import SimulationExecutionOptions
-
 try:
     import cupy as cp
```

Also applies to: 25-25
466-469: `source_mat` allocated twice in separate conditionals.

If both `k_sim.source_ux` and `k_sim.source_p` are truthy, `source_mat` is allocated twice. This is harmless but wasteful. More importantly, if neither is true but source terms are used later, `source_mat` will be undefined.

♻️ Consolidate allocation

```diff
-    if k_sim.source_ux is not False:
-        source_mat = xp.zeros((Nx,), dtype=my_type)
-    if k_sim.source_p is not False:
+    if k_sim.source_ux is not False or k_sim.source_p is not False:
         source_mat = xp.zeros((Nx,), dtype=my_type)
```
266-286: Consider using a dataclass or named tuple for `extract_options`.

The `extract_options` dotdict has 16 fields manually copied from `k_sim`. This is error-prone and could benefit from a more structured approach.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- examples/ivp_1D_simulation/ivp_1D_simulation.py
- kwave/kWaveSimulation_helper/create_storage_variables.py
- kwave/kWaveSimulation_helper/extract_sensor_data.py
- kwave/kspaceFirstOrder1D.py
- tests/matlab_test_data_collectors/python_testers/getWin_test.py
🚧 Files skipped from review as they are similar to previous changes (2)
- tests/matlab_test_data_collectors/python_testers/getWin_test.py
- examples/ivp_1D_simulation/ivp_1D_simulation.py
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: waltsims
Repo: waltsims/k-wave-python PR: 625
File: docs/get_started/first_simulation.rst:147-154
Timestamp: 2025-09-11T16:12:04.876Z
Learning: In k-Wave Python, kspaceFirstOrder2D function takes separate simulation_options and execution_options parameters, not a single options dict. Use SimulationOptions and SimulationExecutionOptions classes from kwave.options, with snake_case attributes like pml_inside instead of MATLAB-style PMLInside.
📚 Learning: 2025-09-11T16:12:04.876Z
Learnt from: waltsims
Repo: waltsims/k-wave-python PR: 625
File: docs/get_started/first_simulation.rst:147-154
Timestamp: 2025-09-11T16:12:04.876Z
Learning: In k-Wave Python, kspaceFirstOrder2D function takes separate simulation_options and execution_options parameters, not a single options dict. Use SimulationOptions and SimulationExecutionOptions classes from kwave.options, with snake_case attributes like pml_inside instead of MATLAB-style PMLInside.
Applied to files:
kwave/kspaceFirstOrder1D.py
🧬 Code graph analysis (2)
kwave/kWaveSimulation_helper/extract_sensor_data.py (2)
- kwave/data.py (1): `numpy` (107-108)
- kwave/kWaveSimulation.py (1): `compute_directivity` (226-238)

kwave/kWaveSimulation_helper/create_storage_variables.py (3)
- kwave/kgrid.py (16): `kWaveGrid` (14-701), `x` (227-231), `y` (234-240), `z` (243-249), `size` (297-301), `dx` (178-182), `dy` (185-189), `dz` (192-196), `kx` (311-323), `ky` (326-337), `kz` (340-350), `x_vec` (199-204), `y_vec` (207-214), `Nx` (157-161), `Ny` (164-168), `Nz` (171-175)
- kwave/recorder.py (1): `Recorder` (9-143)
- kwave/utils/dotdictionary.py (1): `dotdict` (4-57)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (14)
- GitHub Check: test (windows-latest, 3.11)
- GitHub Check: test (ubuntu-latest, 3.12)
- GitHub Check: test (windows-latest, 3.12)
- GitHub Check: test (ubuntu-latest, 3.13)
- GitHub Check: test (ubuntu-latest, 3.10)
- GitHub Check: test (macos-latest, 3.11)
- GitHub Check: test (macos-latest, 3.12)
- GitHub Check: test (windows-latest, 3.10)
- GitHub Check: test (windows-latest, 3.13)
- GitHub Check: test (ubuntu-latest, 3.11)
- GitHub Check: test_install (macos-latest, 3.12, test)
- GitHub Check: test_install (macos-latest, 3.10, all)
- GitHub Check: test_install (windows-latest, 3.12, all)
- GitHub Check: test_install (windows-latest, 3.13, test)
🔇 Additional comments (3)
kwave/kspaceFirstOrder1D.py (3)
294-315: Good handling of heterogeneous density interpolation.

The staggered grid interpolation for `rho0` correctly handles the boundary conditions by falling back to original values where interpolation produces NaN.

482-509: Time-stepping loop with k-space and finite-difference options is well-structured.

The main time loop correctly branches between k-space methods, finite-difference schemes (2nd and 4th order), and non-uniform grid handling. The PML application follows the expected pattern.

620-651: Initial condition enforcement at t=0 is correct.

The `source.p0` initial pressure handling correctly sets up the pressure field and computes the initial velocity using the half-step approach required for the staggered grid scheme.
```python
indices = tri.find_simplex(interpolation_points)

bc = tri.transform[indices, :2].dot(np.transpose(tri.points[indices, :] - tri.transform[indices, 2]))

return tri.points[indices, :], bc
```
🧩 Analysis chain
🏁 Script executed:

```shell
cat -n kwave/kWaveSimulation_helper/create_storage_variables.py | head -40
```

Repository: waltsims/k-wave-python
Length of output: 1311

🏁 Script executed:

```shell
# Check the 3D version and find usages of both functions
cat -n kwave/kWaveSimulation_helper/create_storage_variables.py | sed -n '36,60p'
```

Repository: waltsims/k-wave-python
Length of output: 904

🏁 Script executed:

```shell
# Find where gridDataFast2D and gridDataFast3D are called
rg -n "gridDataFast2D|gridDataFast3D" kwave/kWaveSimulation_helper/create_storage_variables.py -A 2
```

Repository: waltsims/k-wave-python
Length of output: 682

🏁 Script executed:

```shell
# Find where record.tri and record.bc are used
rg -n "record\.tri|record\.bc" kwave/ --type py -B 2 -A 5
```

Repository: waltsims/k-wave-python
Length of output: 17241

🏁 Script executed:

```shell
# Check if there's any interpolation logic using these values
rg -n "barycentric|interpolat" kwave/ --type py -i -C 3
```

Repository: waltsims/k-wave-python
Length of output: 45446

🏁 Script executed:

```shell
python3 << 'EOF'
from scipy.spatial import Delaunay
import numpy as np

# Simple example
points = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])
tri = Delaunay(points)
print("tri.simplices shape:", tri.simplices.shape)
print("tri.simplices:\n", tri.simplices)
print("\ntri.points shape:", tri.points.shape)
print("tri.points:\n", tri.points)
print("\ntri.transform shape:", tri.transform.shape)
print("tri.transform (first simplex):\n", tri.transform[0])

# Test query point
query = np.array([[0.25, 0.25]])
simplex_idx = tri.find_simplex(query)
print(f"\nQuery point {query[0]} is in simplex {simplex_idx}")
print(f"Simplex vertices (indices): {tri.simplices[simplex_idx]}")
print(f"Simplex vertices (coords):\n{tri.points[tri.simplices[simplex_idx]]}")

# Correct barycentric calculation
b = query - tri.transform[simplex_idx, 2]
bc = tri.transform[simplex_idx, :2].dot(b.T)
print(f"\nCorrect BC (using interpolation_points): {bc}")

# Incorrect barycentric calculation (what current code does)
b_wrong = tri.points[tri.simplices[simplex_idx]] - tri.transform[simplex_idx, 2]
bc_wrong = tri.transform[simplex_idx, :2].dot(b_wrong.T)
print(f"Wrong BC (using tri.points): {bc_wrong}")
EOF
```

Repository: waltsims/k-wave-python
Length of output: 152

🏁 Script executed:

```shell
# Check how scipy.spatial.Delaunay is documented/used
rg -n "tri\.simplices|tri\.points\[" kwave/ --type py -A 1 -B 1
```

Repository: waltsims/k-wave-python
Length of output: 1023

🏁 Script executed:

```shell
# Verify the array indexing context more carefully
cat -n kwave/kWaveSimulation_helper/extract_sensor_data.py | sed -n '428,433p'
```

Repository: waltsims/k-wave-python
Length of output: 416

🏁 Script executed:

```shell
# Let's verify by checking what dimensions record.tri and record.bc have
# when used in extract_sensor_data
rg -n "record.tri|record.bc" kwave/kWaveSimulation_helper/extract_sensor_data.py | head -20
```

Repository: waltsims/k-wave-python
Length of output: 2153

🏁 Script executed:

```shell
# Check if there are any tests or examples showing expected behavior
find kwave -name "*test*.py" -o -name "*example*.py" | head -10
```

Repository: waltsims/k-wave-python
Length of output: 48
Fix barycentric coordinate calculation and return value in `gridDataFast2D`/`gridDataFast3D` functions.

The barycentric coordinate computation uses the wrong points. Line 31 (and line 55 in 3D) should compute coordinates relative to `interpolation_points`, not `tri.points`. Additionally, the return value should be `tri.simplices[indices]` (vertex indices) instead of `tri.points[indices, :]` (coordinates) to enable proper array indexing in extract_sensor_data.py where the pattern `p[record.tri] * record.bc` is used.

🔧 Suggested fix for gridDataFast2D

```diff
     indices = tri.find_simplex(interpolation_points)

-    bc = tri.transform[indices, :2].dot(np.transpose(tri.points[indices, :] - tri.transform[indices, 2]))
+    # Compute barycentric coordinates for interpolation points
+    b = interpolation_points - tri.transform[indices, 2]
+    bc = np.einsum('ijk,ik->ij', tri.transform[indices, :2], b)

-    return tri.points[indices, :], bc
+    return tri.simplices[indices], bc
```
+ return tri.simplices[indices], bc🤖 Prompt for AI Agents
In @kwave/kWaveSimulation_helper/create_storage_variables.py around lines 29 -
33, In gridDataFast2D and gridDataFast3D the barycentric coords are computed
relative to the wrong points and the function returns vertex coordinates instead
of vertex indices; change the barycentric calculation to use
interpolation_points (i.e., compute tri.transform[indices,
:2].dot((interpolation_points - tri.transform[indices, 2]).T) or the 3D
equivalent) and return tri.simplices[indices] (vertex index arrays) instead of
tri.points[indices, :], so downstream code like p[record.tri] * record.bc works
correctly.
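As a sanity check on the corrected formula, the barycentric weights of a query point must reconstruct that point when combined with the simplex's vertices. A minimal check with SciPy (standalone sketch using the standard `Delaunay.transform` recipe, not the repo's code):

```python
import numpy as np
from scipy.spatial import Delaunay

points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
tri = Delaunay(points)

query = np.array([[0.25, 0.25]])
idx = tri.find_simplex(query)

# Corrected formula: coordinates computed relative to the query point itself
b = query - tri.transform[idx, 2]
bc = np.einsum('ijk,ik->ij', tri.transform[idx, :2], b)

# Full barycentric vector: the last weight is 1 minus the sum of the others
bc_full = np.c_[bc, 1.0 - bc.sum(axis=1)]

# Weights applied to the simplex's vertices (looked up via tri.simplices,
# i.e. vertex indices, as the suggested fix returns) reconstruct the query
vertices = tri.points[tri.simplices[idx]]        # shape (1, 3, 2)
recon = np.einsum('ij,ijk->ik', bc_full, vertices)
print(np.allclose(recon, query))  # True
```

The reconstruction only works because the weights line up with `tri.simplices[idx]`; with the current return value (`tri.points[indices, :]`), downstream code has coordinates where it needs indices.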
```python
simplex_indices = tri.find_simplex(interpolation_points)

# barycentric coordinates
bc = tri.transform[simplex_indices, :2].dot(np.transpose(tri.points[simplex_indices, :] - tri.transform[simplex_indices, 2]))

return tri.points[simplex_indices, :], bc
```
Same barycentric coordinate issue in 3D version.
The 3D triangulation function has the same incorrect barycentric coordinate calculation as the 2D version.
```python
if (dim == 1):
    sensor_data.ux_max = ux_sgx[sensor_mask_index]
```
Inconsistent indexing for 1D case in record_u_max.

At line 171 (initialization), `sensor_mask_index` is used directly without `np.unravel_index`. At line 183 (update), `np.unravel_index` is applied. This inconsistency will cause shape mismatches or incorrect comparisons.

🐛 Fix inconsistent indexing

```diff
         else:
             if (dim == 1):
-                sensor_data.ux_max = np.maximum(sensor_data.ux_max, ux_sgx[np.unravel_index(np.squeeze(sensor_mask_index), ux_sgx.shape, order='F')])
+                sensor_data.ux_max = np.maximum(sensor_data.ux_max, ux_sgx[sensor_mask_index])
```

Also applies to: 182-183
🤖 Prompt for AI Agents
In @kwave/kWaveSimulation_helper/extract_sensor_data.py around lines 170 - 171,
In record_u_max, the 1D initialization uses sensor_mask_index directly while the
update uses np.unravel_index, causing inconsistent indexing; change the
initialization that sets sensor_data.ux_max (and analogous assignments) to use
np.unravel_index(sensor_mask_index, ux_sgx.shape)[0] (or the appropriate axis
index) when indexing ux_sgx so both initialization and updates use the same
unraveled index logic and shapes.
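For context, `np.unravel_index(..., order='F')` converts MATLAB-style (column-major) linear mask indices into coordinate tuples; whichever convention is chosen, the initialization and update paths must agree. A toy illustration (illustrative values, not the repo's data):

```python
import numpy as np

# A small 2D field
field = np.array([[10, 30],
                  [20, 40]])

# Linear indices in column-major (Fortran/MATLAB) order:
# 0 -> (0, 0), 1 -> (1, 0), 2 -> (0, 1), 3 -> (1, 1)
linear = np.array([0, 3])

coords = np.unravel_index(linear, field.shape, order='F')
print(field[coords])  # [10 40]
```

Indexing `field[linear]` directly would instead select whole rows of a 2D array, which is exactly the kind of shape mismatch the review flags.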
```python
if dim == 1:
    cuboid_size_x = [record.cuboid_corners_list[1, cuboid_index] - record.cuboid_corners_list[0, cuboid_index], 1]
elif dim == 2:
    cuboid_size_x = [record.cuboid_corners_list[2, cuboid_index] - record.cuboid_corners_list[0, cuboid_index],
                     record.cuboid_corners_list[3, cuboid_index] - record.cuboid_corners_list[1, cuboid_index]]
elif dim == 3:
    cuboid_size_x = [record.cuboid_corners_list[3, cuboid_index] + 1 - record.cuboid_corners_list[0, cuboid_index],
                     record.cuboid_corners_list[4, cuboid_index] + 1 - record.cuboid_corners_list[1, cuboid_index],
                     record.cuboid_corners_list[5, cuboid_index] + 1 - record.cuboid_corners_list[2, cuboid_index]]
```
🧩 Analysis chain
🏁 Script executed:

```shell
rg -n "cuboid_corners_list\[" --type py -C 2
```

Repository: waltsims/k-wave-python
Length of output: 9703

🏁 Script executed:

```shell
rg -n "cuboid_corners_list" --type py | head -30
```

Repository: waltsims/k-wave-python
Length of output: 4744

🏁 Script executed:

```shell
grep -n "cuboid_corners_list" kwave/kWaveSimulation_helper/extract_sensor_data.py | head -20
```

Repository: waltsims/k-wave-python
Length of output: 1349
Add +1 to the cuboid size calculation for 1D and 2D to match 3D and maintain consistency.

The calculation in extract_sensor_data.py is missing the +1 offset for 1D and 2D dimensions. This is a bug: the identical calculation in create_storage_variables.py (lines 287-294) correctly applies +1 to all dimensions:

- 1D should be: `cuboid_size_x = [record.cuboid_corners_list[1, cuboid_index] - record.cuboid_corners_list[0, cuboid_index] + 1, 1]`
- 2D should add `+1` to both components

The 3D case already has +1, and lines 272-274 in the same file use `np.arange(...+1)` for inclusive ranges, confirming that +1 is required for all dimensions to correctly compute the size of the cuboid range.
🤖 Prompt for AI Agents
In @kwave/kWaveSimulation_helper/extract_sensor_data.py around lines 254 - 262,
The cuboid_size_x computation in extract_sensor_data.py incorrectly omits the +1
offset for dim==1 and dim==2; update the dim==1 branch (cuboid_size_x) to add +1
to the x-length calculation and keep the y-size as 1, and update the dim==2
branch to add +1 to both components so both use record.cuboid_corners_list[...,
cuboid_index] difference plus 1 (consistent with the dim==3 branch and the
np.arange(...+1) usage).
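The off-by-one is easy to confirm in isolation: a cuboid defined by inclusive corner indices spans `stop - start + 1` grid points, which must match the length of the `np.arange(start, stop + 1)` range used elsewhere in the file (corner values below are illustrative):

```python
import numpy as np

# Inclusive corner indices, e.g. a cuboid spanning grid points 2..5 in x
x_start, x_stop = 2, 5

# np.arange with +1 yields the inclusive index range [2, 3, 4, 5]
indices = np.arange(x_start, x_stop + 1)

# The matching size must therefore also carry the +1
size = x_stop - x_start + 1

print(len(indices) == size)  # True
```

Without the +1, the pre-allocated arrays would be one element short per axis and the reshapes of the extracted data would fail.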
```python
if (dim == 1):
    sensor_data[cuboid_index].ux[..., file_index] = ux_sgx[cuboid_indices]
elif (dim == 2):
    sensor_data[cuboid_index].ux[..., file_index] = ux_sgx[cuboid_indices]
    sensor_data[cuboid_index].uy[..., file_index] = uy_sgy[cuboid_indices]
elif (dim == 3):
    sensor_data[cuboid_index].ux[..., file_index] = ux_sgx[cuboid_indices[:, 0], cuboid_indices[:, 1], cuboid_indices[:, 2]].reshape(cuboid_size_x)
    sensor_data[cuboid_index].uy[..., file_index] = uy_sgy[cuboid_indices[:, 0], cuboid_indices[:, 1], cuboid_indices[:, 2]].reshape(cuboid_size_x)
    sensor_data[cuboid_index].uz[..., file_index] = uz_sgz[cuboid_indices[:, 0], cuboid_indices[:, 1], cuboid_indices[:, 2]].reshape(cuboid_size_x)
else:
    raise RuntimeError("Wrong dimensions")
```
Incorrect indexing for 1D/2D cuboid velocity extraction.

For 1D/2D cases (lines 316, 318-319), `cuboid_indices` is used directly as an index, but `cuboid_indices` is a 2D array of shape (N, 3) from the meshgrid. This will cause an IndexError or incorrect results. The 3D case correctly unpacks the columns.

🐛 Fix 1D/2D cuboid indexing

For 1D:

```diff
 if (dim == 1):
-    sensor_data[cuboid_index].ux[..., file_index] = ux_sgx[cuboid_indices]
+    sensor_data[cuboid_index].ux[..., file_index] = ux_sgx[cuboid_indices[:, 0]].reshape(cuboid_size_x)
```

For 2D:

```diff
 elif (dim == 2):
-    sensor_data[cuboid_index].ux[..., file_index] = ux_sgx[cuboid_indices]
-    sensor_data[cuboid_index].uy[..., file_index] = uy_sgy[cuboid_indices]
+    sensor_data[cuboid_index].ux[..., file_index] = ux_sgx[cuboid_indices[:, 0], cuboid_indices[:, 1]].reshape(cuboid_size_x)
+    sensor_data[cuboid_index].uy[..., file_index] = uy_sgy[cuboid_indices[:, 0], cuboid_indices[:, 1]].reshape(cuboid_size_x)
```
+ sensor_data[cuboid_index].uy[..., file_index] = uy_sgy[cuboid_indices[:, 0], cuboid_indices[:, 1]].reshape(cuboid_size_x)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| if (dim == 1): | |
| sensor_data[cuboid_index].ux[..., file_index] = ux_sgx[cuboid_indices] | |
| elif (dim == 2): | |
| sensor_data[cuboid_index].ux[..., file_index] = ux_sgx[cuboid_indices] | |
| sensor_data[cuboid_index].uy[..., file_index] = uy_sgy[cuboid_indices] | |
| elif (dim == 3): | |
| sensor_data[cuboid_index].ux[..., file_index] = ux_sgx[cuboid_indices[:, 0], cuboid_indices[:, 1], cuboid_indices[:, 2]].reshape(cuboid_size_x) | |
| sensor_data[cuboid_index].uy[..., file_index] = uy_sgy[cuboid_indices[:, 0], cuboid_indices[:, 1], cuboid_indices[:, 2]].reshape(cuboid_size_x) | |
| sensor_data[cuboid_index].uz[..., file_index] = uz_sgz[cuboid_indices[:, 0], cuboid_indices[:, 1], cuboid_indices[:, 2]].reshape(cuboid_size_x) | |
| else: | |
| raise RuntimeError("Wrong dimensions") | |
| if (dim == 1): | |
| sensor_data[cuboid_index].ux[..., file_index] = ux_sgx[cuboid_indices[:, 0]].reshape(cuboid_size_x) | |
| elif (dim == 2): | |
| sensor_data[cuboid_index].ux[..., file_index] = ux_sgx[cuboid_indices[:, 0], cuboid_indices[:, 1]].reshape(cuboid_size_x) | |
| sensor_data[cuboid_index].uy[..., file_index] = uy_sgy[cuboid_indices[:, 0], cuboid_indices[:, 1]].reshape(cuboid_size_x) | |
| elif (dim == 3): | |
| sensor_data[cuboid_index].ux[..., file_index] = ux_sgx[cuboid_indices[:, 0], cuboid_indices[:, 1], cuboid_indices[:, 2]].reshape(cuboid_size_x) | |
| sensor_data[cuboid_index].uy[..., file_index] = uy_sgy[cuboid_indices[:, 0], cuboid_indices[:, 1], cuboid_indices[:, 2]].reshape(cuboid_size_x) | |
| sensor_data[cuboid_index].uz[..., file_index] = uz_sgz[cuboid_indices[:, 0], cuboid_indices[:, 1], cuboid_indices[:, 2]].reshape(cuboid_size_x) | |
| else: | |
| raise RuntimeError("Wrong dimensions") |
🤖 Prompt for AI Agents
In @kwave/kWaveSimulation_helper/extract_sensor_data.py around lines 315 - 325,
The code incorrectly uses cuboid_indices (an (N,3) array) directly for 1D/2D
indexing; unpack the coordinate columns like the 3D branch. For dim==1, index
ux_sgx with cuboid_indices[:,0] (or tuple([cuboid_indices[:,0]])) and reshape to
cuboid_size_x; for dim==2, index ux_sgx and uy_sgy with cuboid_indices[:,0] and
cuboid_indices[:,1] (or use tuple(cuboid_indices[:,:2].T)) and reshape to
cuboid_size_x; keep the dim==3 branch as-is using cuboid_indices[:,0],
cuboid_indices[:,1], cuboid_indices[:,2]. Ensure you use the same variables
ux_sgx, uy_sgy, uz_sgz and cuboid_size_x when assigning into sensor_data[...] at
file_index.
```python
if (dim == 2):

    # compute forward FFTs
    ux_k = record.x_shift_neg * np.fft.fftn(ux_sgx)
    uy_k = record.y_shift_neg * np.fft.fftn(uy_sgy)

    # ux compressional
    split_field = np.real(np.fft.ifftn(record.kx_norm**2 * ux_k + record.kx_norm * record.ky_norm * uy_k))
    sensor_data.ux_split_p[..., file_index] = split_field[np.unravel_index(np.squeeze(sensor_mask_index), split_field.shape, order='F')]

    # ux shear
    split_field = np.real(np.fft.ifftn((1.0 - record.kx_norm**2) * ux_k - record.kx_norm * record.ky_norm * uy_k))
    sensor_data[cuboid_index].ux_split_s[..., file_index] = split_field[np.unravel_index(np.squeeze(sensor_mask_index), split_field.shape, order='F')]

    # uy compressional
    split_field = np.real(np.fft.ifftn(record.ky_norm * record.kx_norm * ux_k + record.ky_norm**2 * uy_k))
    sensor_data[cuboid_index].uy_split_p[..., file_index] = split_field[np.unravel_index(np.squeeze(sensor_mask_index), split_field.shape, order='F')]

    # uy shear
    split_field = np.real(np.fft.ifftn(-record.ky_norm * record.kx_norm * ux_k + (1.0 - record.ky_norm**2) * uy_k))
    sensor_data[cuboid_index].uy_split_s[..., file_index] = split_field[np.unravel_index(np.squeeze(sensor_mask_index), split_field.shape, order='F')]
```
2D split field in cuboid section uses wrong sensor_data reference.
At line 351, sensor_data.ux_split_p is used (without [cuboid_index]), while line 355 correctly uses sensor_data[cuboid_index].ux_split_s. This inconsistency will cause an AttributeError since sensor_data is a list when use_cuboid_corners is true.
🐛 Fix missing cuboid_index

```diff
  # ux compressional
  split_field = np.real(np.fft.ifftn(record.kx_norm**2 * ux_k + record.kx_norm * record.ky_norm * uy_k))
- sensor_data.ux_split_p[..., file_index] = split_field[np.unravel_index(np.squeeze(sensor_mask_index), split_field.shape, order='F')]
+ sensor_data[cuboid_index].ux_split_p[..., file_index] = split_field[np.unravel_index(np.squeeze(sensor_mask_index), split_field.shape, order='F')]
```

🤖 Prompt for AI Agents
In @kwave/kWaveSimulation_helper/extract_sensor_data.py, around lines 343-364,
the 2D split-field block incorrectly writes to sensor_data.ux_split_p without
indexing by cuboid_index, causing an AttributeError when sensor_data is a list;
update the write to use sensor_data[cuboid_index].ux_split_p (matching the other
writes like sensor_data[cuboid_index].ux_split_s and
sensor_data[cuboid_index].uy_split_p/uy_split_s) so all four assignments in the
dim == 2 branch consistently index the cuboid entry.
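The `order='F'` unraveling these assignments rely on maps MATLAB-style column-major linear indices onto NumPy row/column pairs. A minimal sketch of the pattern (array values illustrative, not from the codebase):

```python
import numpy as np

# A 3x4 field; MATLAB-style linear indices count down columns first,
# which corresponds to NumPy's order='F' unraveling.
field = np.arange(12).reshape(3, 4)

# Linear index 4 in column-major order means row 1, column 1.
rows, cols = np.unravel_index(np.array([4]), field.shape, order="F")
print(rows, cols)          # [1] [1]
print(field[rows, cols])   # [5]
```

The same pattern lets a flat `sensor_mask_index` stored in MATLAB convention address a NumPy array directly.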
```python
# store the time history of the acoustic pressure
if flags.record_p or flags.record_I or flags.record_I_avg:
    if dim == 1:
        sensor_data.p[:, file_index] = xp.interp(xp.squeeze(record.sensor_x), xp.squeeze(record.grid_x), xp.real(p))
```
Incorrect argument order for np.interp.
The signature of np.interp is np.interp(x, xp, fp) where x is the query points, xp is the x-coordinates of data points, and fp is the y-coordinates (values).
At line 429, the order looks correct. But at lines 437, 450, 462, etc., the arguments appear swapped: np.interp(record.grid_x, p, record.sensor_x) passes p (pressure values) as x-coordinates and record.sensor_x (sensor positions) as values, which is backwards.
🐛 Fix np.interp argument order

```diff
 if flags.record_p_max:
     if dim == 1:
         if file_index == 0:
-            sensor_data.p_max = np.interp(record.grid_x, p, record.sensor_x)
+            sensor_data.p_max = np.interp(record.sensor_x, record.grid_x, p)
         else:
-            sensor_data.p_max = np.maximum(sensor_data.p_max, np.interp(record.grid_x, p, record.sensor_x))
+            sensor_data.p_max = np.maximum(sensor_data.p_max, np.interp(record.sensor_x, record.grid_x, p))
```

Apply similar fixes to lines 450-452, 462, 470, 484, 499, 511, 526, 539, 553.
Also applies to: 437-439, 450-452
🤖 Prompt for AI Agents
In @kwave/kWaveSimulation_helper/extract_sensor_data.py at line 429, np.interp
calls have their arguments swapped: occurrences like xp.interp(record.grid_x, p,
record.sensor_x) pass p as xp and sensor positions as fp; change these to
xp.interp(record.sensor_x, record.grid_x, p) (and use xp.squeeze/np.real as
appropriate), and apply the same fix for all interpolations of pressure/velocity
fields (e.g. assignments to sensor_data.p, sensor_data.u, sensor_data.ux, etc.)
so the call signature is xp.interp(x_query, x_coords, y_values) with x_query =
record.sensor_x, x_coords = record.grid_x, y_values = p (or the corresponding
field).
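A quick sanity check of the `np.interp(x, xp, fp)` signature referenced above (the grid, field, and sensor values here are illustrative):

```python
import numpy as np

grid_x = np.linspace(0.0, 1.0, 5)          # x-coordinates of the data points
p = np.array([0.0, 1.0, 4.0, 9.0, 16.0])   # field values sampled at grid_x
sensor_x = np.array([0.125, 0.625])        # query positions

# Correct order: query points first, then coordinates, then values.
p_at_sensors = np.interp(sensor_x, grid_x, p)
print(p_at_sensors)  # [0.5 6.5]
```

Swapping the first two arguments, as in the flagged lines, treats the field values as coordinates and silently produces meaningless output rather than raising an error.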
Can you please address the comments? It would be much simpler to accept this PR if it was restricted to a single example before affecting so many files.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
kwave/kWaveSimulation.py (1)
209-214: Critical: Infinite recursion in `time_rev` property.

The property returns `self.time_rev` in its else branch, which re-enters the getter recursively until a `RecursionError` is raised.

🐛 Proposed fix

```diff
 @property
 @deprecated(version="0.4.1", reason="Use TimeReversal class instead")
 def time_rev(self) -> bool:
     if self.sensor and not isinstance(self.sensor, NotATransducer):
         return not self.options.simulation_type.is_elastic_simulation() and self.sensor.time_reversal_boundary_data is not None
-    return self.time_rev
+    return False
```
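The failure mode is easy to reproduce with a toy class (hypothetical, not the real `kWaveSimulation`): a property whose fallback branch reads itself re-enters its own getter until Python raises `RecursionError`.

```python
class Sim:
    def __init__(self, sensor=None):
        self.sensor = sensor

    @property
    def time_rev(self) -> bool:
        if self.sensor is not None:
            return True
        return self.time_rev  # re-enters this property -> RecursionError


caught = False
try:
    Sim().time_rev
except RecursionError:
    caught = True
print(caught)  # True
```

Returning a plain value (for example `False`) in the fallback branch breaks the cycle.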
🤖 Fix all issues with AI agents
In @kwave/kspaceFirstOrder1D.py:
- Line 517: The code uses the nonstandard call xp.shape(source.ux) — replace
calls like xp.shape(source.ux)[1] with the array attribute form
source.ux.shape[1] (and similarly for any other xp.shape(array) occurrences,
e.g., the one around line 567) so checks use source.ux.shape instead of
xp.shape(...), keeping the existing bounds logic with k_sim.source_ux and
t_index unchanged.
- Line 3: Remove the duplicate import of SimulationExecutionOptions from
kwave.options.simulation_execution_options by deleting the redundant import
statement (keep a single import of SimulationExecutionOptions in the file),
ensuring no other imports or references are affected and imports remain
PEP8-compliant.
- Around line 70-77: dotdict_to_gpu is using setattr(obj, name, ...) which sets
attributes rather than dictionary keys on the dotdict (a dict subclass); change
the mutation to item assignment obj[name] = cp.asarray(val) when val is a numpy
array, and make the identical change in dotdict_to_cpu (replace setattr(obj,
name, ...) with obj[name] = ...) so keys are updated correctly.
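The setattr-vs-item-assignment distinction matters because a plain `dict` subclass does not route attribute writes into its keys. A simplified stand-in for the project's `dotdict` (hypothetical, read-only attribute access) demonstrates the failure mode:

```python
class dotdict(dict):
    """Minimal dict subclass with attribute *read* access only (a sketch)."""
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)


d = dotdict(a=1)
setattr(d, "a", 99)   # stores 99 in d.__dict__, not in the mapping
print(d["a"])         # 1 -- the dict key is unchanged
d["a"] = 99           # item assignment is what updates the key
print(d["a"])         # 99
```

This is why the review asks for `obj[name] = cp.asarray(val)` rather than `setattr(obj, name, ...)` in the conversion helpers.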
🧹 Nitpick comments (4)
examples/ivp_1D_simulation/ivp_1D_simulation.ipynb (2)
153-159: Hardcoded GPU simulation may cause failures on systems without a GPU.

Setting `is_gpu_simulation=True` will cause the example to fail on systems without CUDA/CuPy. Consider making this configurable or adding a fallback.

💡 Suggested improvement

```diff
-"# define the simulation options\n",
-"simulation_options = SimulationOptions(data_cast=\"off\", save_to_disk=False)\n",
-"execution_options = SimulationExecutionOptions(is_gpu_simulation=True)\n",
+"# define the simulation options\n",
+"simulation_options = SimulationOptions(data_cast=\"off\", save_to_disk=False)\n",
+"\n",
+"# set to True to use GPU (requires CuPy and CUDA), False for CPU-only\n",
+"use_gpu = False\n",
+"execution_options = SimulationExecutionOptions(is_gpu_simulation=use_gpu)\n",
```
131-134: Remove or clarify the "hack" comment.

The comment "this hack is needed to ensure that the sensor is in [1,2] dimensions" suggests a workaround. Consider documenting why this reshaping is necessary, or addressing it in the API if it is a common pattern.
kwave/kWaveSimulation.py (1)
531-603: Duplicated code blocks for elastic and non-elastic paths.

The elastic (lines 534-566) and non-elastic (lines 567-603) branches are nearly identical, differing only in the `sensor_x` extraction for 1D. Consider extracting shared logic into a helper method.

♻️ Suggested refactor

```diff
-        # move all this inside create_storage_variables?
-        # a copy of record is passed through, and use to update the
-        if is_elastic_code:
-            record_old = copy.deepcopy(self.record)
-            if not self.blank_sensor:
-                sensor_x = self.sensor.mask[0, :]
-            else:
-                sensor_x = None
-
-            values = dotdict({"sensor_x": sensor_x,
-                              "sensor_mask_index": self.sensor_mask_index,
-                              "record": record_old,
-                              "sensor_data_buffer_size": self.s_source_pos_index})
-
-            if self.record.u_split_field:
-                self.record_u_split_field = self.record.u_split_field
-
-            flags = dotdict({"use_sensor": self.use_sensor,
-                             "blank_sensor": self.blank_sensor,
-                             "binary_sensor_mask": self.binary_sensor_mask,
-                             "record_u_split_field": self.record.u_split_field,
-                             "time_rev": None,
-                             "reorder_data": self.reorder_data,
-                             "transducer_receive_elevation_focus": self.transducer_receive_elevation_focus,
-                             "axisymmetric": opt.simulation_type.is_axisymmetric(),
-                             "transducer_sensor": self.transducer_sensor,
-                             "use_cuboid_corners": self.cuboid_corners})
-
-            # this creates the storage variables by determining the spatial locations of the data which is in self.record.
-            flags, self.record, self.sensor_data, self.num_recorded_time_points = create_storage_variables(self.kgrid,
-                                                                                                           self.sensor,
-                                                                                                           opt,
-                                                                                                           values,
-                                                                                                           flags,
-                                                                                                           self.record)
-        else:
-            record_old = copy.deepcopy(self.record)
-            if not self.blank_sensor:
-                if k_dim == 1:
-                    # this has been declared in check_sensor
-                    sensor_x = self.sensor_x
-                else:
-                    sensor_x = self.sensor.mask[0, :]
-            else:
-                sensor_x = None
-
-            values = dotdict({"sensor_x": sensor_x,
-                              "sensor_mask_index": self.sensor_mask_index,
-                              "record": record_old,
-                              "sensor_data_buffer_size": self.s_source_pos_index})
-
-            if self.record.u_split_field:
-                self.record_u_split_field = self.record.u_split_field
-
-            flags = dotdict({"use_sensor": self.use_sensor,
-                             "blank_sensor": self.blank_sensor,
-                             "binary_sensor_mask": self.binary_sensor_mask,
-                             "record_u_split_field": self.record.u_split_field,
-                             "time_rev": None,
-                             "reorder_data": self.reorder_data,
-                             "transducer_receive_elevation_focus": self.transducer_receive_elevation_focus,
-                             "axisymmetric": opt.simulation_type.is_axisymmetric(),
-                             "transducer_sensor": self.transducer_sensor,
-                             "use_cuboid_corners": self.cuboid_corners})
-
-            # this creates the storage variables by determining the spatial locations of the data which is in self.record.
-            flags, self.record, self.sensor_data, self.num_recorded_time_points = create_storage_variables(self.kgrid,
-                                                                                                           self.sensor,
-                                                                                                           opt,
-                                                                                                           values,
-                                                                                                           flags,
-                                                                                                           self.record)
+        # Determine sensor_x based on simulation type and dimensionality
+        record_old = copy.deepcopy(self.record)
+        if not self.blank_sensor:
+            if not is_elastic_code and k_dim == 1:
+                sensor_x = self.sensor_x  # declared in check_sensor for 1D
+            else:
+                sensor_x = self.sensor.mask[0, :]
+        else:
+            sensor_x = None
+
+        values = dotdict({"sensor_x": sensor_x,
+                          "sensor_mask_index": self.sensor_mask_index,
+                          "record": record_old,
+                          "sensor_data_buffer_size": self.s_source_pos_index})
+
+        if self.record.u_split_field:
+            self.record_u_split_field = self.record.u_split_field
+
+        flags = dotdict({"use_sensor": self.use_sensor,
+                         "blank_sensor": self.blank_sensor,
+                         "binary_sensor_mask": self.binary_sensor_mask,
+                         "record_u_split_field": self.record.u_split_field,
+                         "time_rev": None,
+                         "reorder_data": self.reorder_data,
+                         "transducer_receive_elevation_focus": self.transducer_receive_elevation_focus,
+                         "axisymmetric": opt.simulation_type.is_axisymmetric(),
+                         "transducer_sensor": self.transducer_sensor,
+                         "use_cuboid_corners": self.cuboid_corners})
+
+        flags, self.record, self.sensor_data, self.num_recorded_time_points = create_storage_variables(
+            self.kgrid, self.sensor, opt, values, flags, self.record)
```

kwave/kspaceFirstOrder1D.py (1)
51-54: Broad `except` clause swallows all exceptions.

Catching everything hides potential errors like configuration issues. Consider catching specific exceptions.

💡 Suggested improvement

```diff
 try:
     import cupy as cp
     # this returns the number of visible GPUs
     ngpus = cp.cuda.runtime.getDeviceCount()
     if ngpus > 0:
         return cp, True
-except Exception:
+except (ImportError, RuntimeError, cp.cuda.runtime.CUDARuntimeError):
     pass
```
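A sketch of the narrower handling, under the assumption that the helper should fall back to NumPy whenever CuPy is missing or the CUDA runtime reports an error (the function name mirrors the `get_array_module` pattern discussed in this review; details are illustrative):

```python
import numpy as np


def get_array_module():
    """Return (module, using_gpu), preferring CuPy when a GPU is visible."""
    try:
        import cupy as cp  # ImportError if CuPy is not installed
    except ImportError:
        return np, False
    try:
        if cp.cuda.runtime.getDeviceCount() > 0:
            return cp, True
    except cp.cuda.runtime.CUDARuntimeError:
        pass  # CUDA runtime present but no usable device
    return np, False


xp, using_gpu = get_array_module()
print(xp.__name__, using_gpu)
```

Splitting the import from the device probe avoids referencing `cp` attributes in an `except` tuple before the import has succeeded.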
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
examples/ivp_1D_simulation/ivp_1D_simulation.ipynb
kwave/kWaveSimulation.py
kwave/kspaceFirstOrder1D.py
tests/matlab_test_data_collectors/python_testers/getWin_test.py
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/matlab_test_data_collectors/python_testers/getWin_test.py
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-11T16:12:04.876Z
Learnt from: waltsims
Repo: waltsims/k-wave-python PR: 625
File: docs/get_started/first_simulation.rst:147-154
Timestamp: 2025-09-11T16:12:04.876Z
Learning: In k-Wave Python, kspaceFirstOrder2D function takes separate simulation_options and execution_options parameters, not a single options dict. Use SimulationOptions and SimulationExecutionOptions classes from kwave.options, with snake_case attributes like pml_inside instead of MATLAB-style PMLInside.
Applied to files:
examples/ivp_1D_simulation/ivp_1D_simulation.ipynbkwave/kspaceFirstOrder1D.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (13)
- GitHub Check: test (macos-latest, 3.12)
- GitHub Check: test (windows-latest, 3.11)
- GitHub Check: test (ubuntu-latest, 3.11)
- GitHub Check: test (windows-latest, 3.12)
- GitHub Check: test (ubuntu-latest, 3.13)
- GitHub Check: test (ubuntu-latest, 3.10)
- GitHub Check: test (macos-latest, 3.10)
- GitHub Check: test (macos-latest, 3.11)
- GitHub Check: test (ubuntu-latest, 3.12)
- GitHub Check: test (windows-latest, 3.10)
- GitHub Check: test (windows-latest, 3.13)
- GitHub Check: test_install (macos-latest, 3.11, test)
- GitHub Check: test_install (macos-latest, 3.10, dev)
🔇 Additional comments (8)
examples/ivp_1D_simulation/ivp_1D_simulation.ipynb (1)
70-94: LGTM on grid and medium setup. The grid creation, heterogeneous medium definition with varying sound speed and density regions, and time array calculation follow expected k-Wave patterns for 1D simulations.
kwave/kWaveSimulation.py (3)
147-154: LGTM on `is_nonlinear` property. Clean delegation to `medium.is_nonlinear()` with an appropriate docstring.

166-167: Good catch on spelling fix. Corrected "loseless" to "lossless" in the equation of state return value.

1-1: LGTM on `import copy` addition. Required for the `copy.deepcopy(self.record)` calls in the storage variable creation logic.

kwave/kspaceFirstOrder1D.py (4)
93-236: Comprehensive docstring for the 1D solver. The detailed MATLAB-style documentation thoroughly explains the function's purpose, input structures, and output formats. This aids users familiar with the original k-Wave MATLAB implementation.

486-623: Main time-stepping loop structure looks correct. The loop correctly implements:
- k-space velocity updates with PML
- Finite difference fallbacks (2nd and 4th order)
- Source term injection (dirichlet, additive, additive-no-correction modes)
- Linear and nonlinear mass conservation
- Multiple equations of state (lossless, absorbing, stokes)
624-656: Initial condition handling at `t_index == 0`. The initialization of `p`, `rhox`, and `ux_sgx` from `source.p0` follows the correct k-Wave approach for enforcing initial pressure distributions.

446-448: No issue found. The code is safe because `is_nonlinear()` returns `self.BonA is not None` (kmedium.py:114). This creates a logical guarantee: when the nonlinear code executes (lines 608, 622), `is_nonlinear` is `True`, which means `medium.BonA is not None`, ensuring the local `BonA` variable assignment at line 447 always occurs before access.
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In @kwave/kspaceFirstOrder1D.py:
- Around line 378-379: The attribute check is mismatched: the code tests
hasattr(k_sim, 's_source_sig_index') but then accesses s_source_pos_index;
change the condition to check for the actual attribute used (hasattr(k_sim,
's_source_pos_index')) and keep the existing None check and assignment
(squeezing and subtracting 1) so the block safely runs only when
s_source_pos_index exists on k_sim.
- Around line 51-52: The except line uses function-call syntax and references
cp.cuda.runtime.CUDARuntimeError even when cp may be None; replace the incorrect
syntax with a tuple (e.g. except (ImportError, RuntimeError, ...):) and avoid
directly referencing cp attributes when cp can be None — either include
AttributeError in the tuple or conditionally include
cp.cuda.runtime.CUDARuntimeError only if cp is defined; update the except that
currently mentions cp to use a proper tuple and a safe check for cp before
adding cupy-specific exceptions.
- Around line 515-516: The inner numeric comparison is wrong: after checking
"k_sim.source_ux is not False and t_index < source.ux.shape[1]" you should not
compare the boolean k_sim.source_ux to t_index; remove the inner "if
k_sim.source_ux >= t_index:" check and execute the source-ux handling directly
(or replace it with a boolean check like "if k_sim.source_ux is True:" if you
need to distinguish True vs other truthy values), keeping the outer t_index <
source.ux.shape[1] guard to ensure the source signal length.
- Around line 338-339: The pml arrays are created with get_pml(..., dimension=0)
which returns row vectors; change the calls that assign pml_x and pml_x_sgx (the
get_pml invocations) to use dimension=1 instead of 0 so they match
kspaceFirstOrder2D/3D conventions and align the first spatial dimension
handling; update the two calls that set pml_x and pml_x_sgx to pass 1 as the
dimension argument (no other logic changes needed).
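Several of the prompts above concern attribute checks on partially populated objects. The `getattr`-with-default pattern avoids `AttributeError` without chained `hasattr` calls; a minimal sketch (object and attribute names hypothetical):

```python
class Source:
    pass  # p_mode / u_mode may or may not be set


src = Source()
src.p_mode = "additive"

# Safe reads: missing attributes fall back to None instead of raising.
p_mode = getattr(src, "p_mode", None)
u_mode = getattr(src, "u_mode", None)

# Group the OR explicitly so the additive check reads unambiguously.
needs_source_kappa = (p_mode == "additive") or (u_mode == "additive")
print(p_mode, u_mode, needs_source_kappa)  # additive None True
```

The same pattern also guards against checking one attribute name (`s_source_sig_index`) while accessing another (`s_source_pos_index`), since each access names its own default.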
🧹 Nitpick comments (6)
kwave/kspaceFirstOrder1D.py (6)
68-75: Add guard for CuPy availability.

Unlike `dotdict_to_cpu` (which checks `if cp is None`), this function lacks a guard. If called when CuPy isn't available, `cp.asarray` will fail.

♻️ Suggested improvement

```diff
 def dotdict_to_gpu(obj, verbose: bool = False):
     """Convert all numpy arrays in the given dotdict to cupy arrays."""
+    if cp is None:
+        raise RuntimeError("CuPy is not available. Cannot convert to GPU arrays.")
     for name, val in obj.items():
         if isinstance(val, np.ndarray):
             if verbose:
                 print(f"Converting {name} to cupy array")
             obj[name] = cp.asarray(val)
     return obj
```
581-581: Inconsistent source reference.

This line uses `source.p` (function parameter) while line 573 uses `k_sim.source.p`. For consistency, and to avoid potential issues if the source is modified during simulation setup, prefer `k_sim.source.p` throughout.

```diff
-    rhox[k_sim.p_source_pos_index] = rhox[k_sim.p_source_pos_index] + source.p[k_sim.p_source_sig_index, t_index]
+    rhox[k_sim.p_source_pos_index] = rhox[k_sim.p_source_pos_index] + k_sim.source.p[k_sim.p_source_sig_index, t_index]
```
593-595: Consider using a consistent medium reference.

The code mixes `medium.absorb_*` (lines 593-595, 600, 610-612, 619) with `k_sim.medium.BonA` (line 613). While they likely reference the same object, using `k_sim.medium` consistently would improve clarity.
138-138: Consider updating MATLAB references in the docstring.

The docstring references "MATLAB's standard column-wise linear matrix index ordering" in multiple places. For a Python library, consider referencing NumPy's C-order (row-major) or Fortran-order conventions instead to avoid confusion.
543-548: Consider using array-native operations for boundary padding.

Using `xp.append` with Python lists (e.g., `[0, 0]`) works but forces type conversions. For GPU arrays, this is suboptimal. Consider using `xp.pad` or `xp.concatenate` with pre-allocated arrays.

♻️ Example improvement for line 547

```diff
-    duxdx = (xp.append([0, 0], ux_sgx[:-2]) - 27.0 * xp.append(0, ux_sgx[:-1]) + 27.0 * ux_sgx - xp.append(ux_sgx[1:], 0)) / (24.0 * dx)
+    # Using pad for cleaner boundary handling
+    duxdx = (xp.pad(ux_sgx[:-2], (2, 0)) - 27.0 * xp.pad(ux_sgx[:-1], (1, 0)) + 27.0 * ux_sgx - xp.pad(ux_sgx[1:], (0, 1))) / (24.0 * dx)
```
246-246: Consider respecting verbose settings for print statements.

The GPU status message is printed unconditionally. Consider gating this and similar status messages (lines 480-481) behind `execution_options.verbose_level` for cleaner output in production use.

```diff
 if execution_options.is_gpu_simulation:
     xp, using_gpu = get_array_module()
-    print(f"Using {'cupy (GPU)' if using_gpu else 'numpy (CPU)'}")
+    if execution_options.verbose_level > 0:
+        print(f"Using {'cupy (GPU)' if using_gpu else 'numpy (CPU)'}")
```
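The `xp.pad` substitution suggested in the padding nitpick can be verified against the original `xp.append` form on the CPU (array values arbitrary):

```python
import numpy as np

ux_sgx = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

# List-based append and pad-based form produce identical arrays.
a = np.append([0, 0], ux_sgx[:-2])
b = np.pad(ux_sgx[:-2], (2, 0))
print(np.array_equal(a, b))  # True

c = np.append(ux_sgx[1:], 0)
d = np.pad(ux_sgx[1:], (0, 1))
print(np.array_equal(c, d))  # True
```

Because `np.pad` never materializes a Python list, the equivalent CuPy call stays on the device, which is the point of the suggestion.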
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
kwave/kspaceFirstOrder1D.py
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: waltsims
Repo: waltsims/k-wave-python PR: 625
File: docs/get_started/first_simulation.rst:147-154
Timestamp: 2025-09-11T16:12:04.876Z
Learning: In k-Wave Python, kspaceFirstOrder2D function takes separate simulation_options and execution_options parameters, not a single options dict. Use SimulationOptions and SimulationExecutionOptions classes from kwave.options, with snake_case attributes like pml_inside instead of MATLAB-style PMLInside.
📚 Learning: 2025-09-11T16:12:04.876Z
Learnt from: waltsims
Repo: waltsims/k-wave-python PR: 625
File: docs/get_started/first_simulation.rst:147-154
Timestamp: 2025-09-11T16:12:04.876Z
Learning: In k-Wave Python, kspaceFirstOrder2D function takes separate simulation_options and execution_options parameters, not a single options dict. Use SimulationOptions and SimulationExecutionOptions classes from kwave.options, with snake_case attributes like pml_inside instead of MATLAB-style PMLInside.
Applied to files:
kwave/kspaceFirstOrder1D.py
🧬 Code graph analysis (1)
kwave/kspaceFirstOrder1D.py (11)
kwave/data.py (1)
numpy (107-108)
kwave/kgrid.py (3)
kWaveGrid (14-701)
dx (178-182)
Nx (157-161)
kwave/kmedium.py (1)
kWaveMedium (11-232)
kwave/ksensor.py (1)
kSensor (9-81)
kwave/kWaveSimulation.py (1)
kWaveSimulation (36-1657)
kwave/kWaveSimulation_helper/extract_sensor_data.py (1)
extract_sensor_data (8-715)
kwave/options/simulation_execution_options.py (1)
SimulationExecutionOptions (13-308)
kwave/utils/data.py (1)
scale_time (57-111)
kwave/utils/dotdictionary.py (1)
dotdict (4-57)
kwave/utils/math.py (1)
sinc (334-345)
kwave/utils/pml.py (1)
get_pml (6-65)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (17)
- GitHub Check: test (ubuntu-latest, 3.10)
- GitHub Check: test (ubuntu-latest, 3.13)
- GitHub Check: test (macos-latest, 3.11)
- GitHub Check: test (windows-latest, 3.12)
- GitHub Check: test (ubuntu-latest, 3.11)
- GitHub Check: test (ubuntu-latest, 3.12)
- GitHub Check: test (windows-latest, 3.10)
- GitHub Check: test (windows-latest, 3.11)
- GitHub Check: test (windows-latest, 3.13)
- GitHub Check: test_install (macos-latest, 3.11, test)
- GitHub Check: test_install (macos-latest, 3.13, examples)
- GitHub Check: test_install (windows-latest, 3.13, test)
- GitHub Check: test_install (macos-latest, 3.12, examples)
- GitHub Check: test_install (macos-latest, 3.11, docs)
- GitHub Check: test_install (windows-latest, 3.13, docs)
- GitHub Check: test_install (windows-latest, 3.10, docs)
- GitHub Check: test_install (windows-latest, 3.10, examples)
🔇 Additional comments (2)
kwave/kspaceFirstOrder1D.py (2)
444-445: This concern is unfounded; the code is safe from `NameError`. The `is_nonlinear` property in `kWaveSimulation` returns `self.medium.is_nonlinear()`, which is defined in `kWaveMedium` as `return self.BonA is not None`. Therefore, `k_sim.is_nonlinear` can only be `True` when `medium.BonA` is not `None`, guaranteeing that `BonA` is always defined at line 445 before it's used in the nonlinear branches at lines 606 and 620. Minor note: line 613 uses `medium.BonA` directly while lines 606 and 620 use the local `BonA` variable; this inconsistency could be normalized for clarity, but it is not a bug.

494-506: The Python 3.10+ requirement for match/case statements is already documented in `pyproject.toml` as `requires-python = ">=3.10"`. No action needed.
Actionable comments posted: 6
🤖 Fix all issues with AI agents
In @kwave/kspaceFirstOrder1D.py:
- Around line 590-594: The absorbing branch uses medium.absorb_tau,
medium.absorb_eta, medium.absorb_nabla1, and medium.absorb_nabla2 directly (in
the expression computing p with rhox, rho0, duxdx and FFTs), which breaks GPU
execution; fix by casting these medium properties to the active array module xp
in the medium-data casting section (e.g., create absorb_tau =
xp.asarray(medium.absorb_tau, dtype=my_type), absorb_eta =
xp.asarray(medium.absorb_eta, dtype=my_type), and absorb_nabla1/absorb_nabla2 =
xp.asarray(..., dtype=my_complex_type)), then replace direct uses of
medium.absorb_* with the new absorb_* variables in the equation-of-state
calculation that defines p.
- Line 580: The code uses source.p at rhox[k_sim.p_source_pos_index] =
rhox[k_sim.p_source_pos_index] + source.p[k_sim.p_source_sig_index, t_index]
while elsewhere (e.g. earlier at line 564) it accesses k_sim.source.p; make the
reference consistent by replacing source.p with k_sim.source.p so the update
uses k_sim.source.p[k_sim.p_source_sig_index, t_index], ensuring rhox,
k_sim.p_source_pos_index and k_sim.p_source_sig_index all refer to the same
source object.
- Around line 244-249: The code silently falls back to CPU when
execution_options.is_gpu_simulation is True but get_array_module() returns
numpy; update the block that calls get_array_module() (the call and variables xp
and using_gpu) to detect this case and emit a clear warning or logger message
indicating GPU was requested but not available (include which backend was
selected), or optionally raise an error if GPU is required; ensure the message
references execution_options.is_gpu_simulation, get_array_module(), xp and
using_gpu so reviewers can find and change the behavior easily.
- Around line 304-307: np.interp called in rho0_sgx = np.interp(points +
k_sim.kgrid.dx / 2.0, points, np.squeeze(k_sim.rho0)) will extrapolate boundary
values rather than produce NaN, so the subsequent check
rho0_sgx[np.isnan(rho0_sgx)] = np.squeeze(k_sim.rho0)[np.isnan(rho0_sgx)] is
dead; either remove that NaN-check fallback, or replace np.interp with
scipy.interpolate.interp1d (bounds_error=False, fill_value=np.nan) to get NaNs
for out-of-range points and then keep the existing fallback; update usages of
rho0_sgx, points, k_sim.rho0 and k_sim.kgrid.dx accordingly.
- Around line 642-648: The dpdx computations in the dpdx assignments for the
second-order and fourth-order branches use kgrid.dx directly instead of the
pre-cast dx variable; update both dpdx expressions to divide by dx (the pre-cast
array) rather than kgrid.dx to avoid type mismatches on GPU and unnecessary
conversions—specifically change the dpdx lines that set dpdx = (...) / (24.0 *
kgrid.dx) and dpdx = (xp.append(p[1:], 0.0) - p) / kgrid.dx to use dx (and keep
rho0_sgx_inv, dt, p, xp, and ux_sgx unchanged) ensuring any broadcasting remains
correct.
- Around line 360-362: The conditional can raise AttributeError when checking
k_sim.source.p_mode or .u_mode; replace the chained hasattr/access with safe
attribute access using the source object (e.g., source = getattr(k_sim,
'source', None)) and then test getattr(source, 'p_mode', None) and
getattr(source, 'u_mode', None); also ensure the overall boolean logic groups
the additive checks correctly (parenthesize the p_mode vs u_mode OR) before
computing source_kappa so you only access kgrid, c_ref, and k_sim.source
attributes after confirming they exist.
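On the dead NaN check flagged above: `np.interp` clamps out-of-range queries to the boundary values instead of returning NaN, so a NaN-based fallback never triggers. A short demonstration (values illustrative):

```python
import numpy as np

grid_x = np.array([0.0, 1.0, 2.0])
rho0 = np.array([10.0, 20.0, 30.0])

# Query half a cell beyond each end of the grid.
vals = np.interp(np.array([-0.5, 2.5]), grid_x, rho0)
print(vals)                  # [10. 30.] -- clamped to the end values, not NaN
print(np.isnan(vals).any())  # False
```

To actually obtain NaN outside the grid, an interpolator with `bounds_error=False, fill_value=np.nan` (as the prompt suggests via `scipy.interpolate.interp1d`) is required.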
🧹 Nitpick comments (2)
kwave/kspaceFirstOrder1D.py (2)
58-75: Missing `cp is None` guard in `to_gpu` and `dotdict_to_gpu`.

Both `to_gpu` and `dotdict_to_gpu` use `cp.asarray()` without checking whether `cp` is `None`, unlike `dotdict_to_cpu`, which does include this check. While these functions are only called when `using_gpu=True`, adding defensive guards would improve consistency and prevent a potential `AttributeError` if called incorrectly.

Suggested defensive guards

```diff
 def to_gpu(obj, verbose: bool = False):
     """Convert all numpy arrays in the given object to cupy arrays."""
+    if cp is None:
+        return obj
     for name, val in vars(obj).items():
         if isinstance(val, np.ndarray):
             if verbose:
                 print(f"Converting {name} to cupy array")
             setattr(obj, name, cp.asarray(val))
     return obj

 def dotdict_to_gpu(obj, verbose: bool = False):
     """Convert all numpy arrays in the given dotdict to cupy arrays."""
+    if cp is None:
+        return obj
     for name, val in obj.items():
         if isinstance(val, np.ndarray):
             if verbose:
                 print(f"Converting {name} to cupy array")
             obj[name] = cp.asarray(val)
     return obj
```
396-418: Consider removing or converting commented placeholder code to TODOs.

The commented-out visualization code (lines 400-418) could be converted to explicit `# TODO:` comments, or removed if not planned for near-term implementation. This would improve readability.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
kwave/kspaceFirstOrder1D.py
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-11T16:12:04.876Z
Learnt from: waltsims
Repo: waltsims/k-wave-python PR: 625
File: docs/get_started/first_simulation.rst:147-154
Timestamp: 2025-09-11T16:12:04.876Z
Learning: In k-Wave Python, kspaceFirstOrder2D function takes separate simulation_options and execution_options parameters, not a single options dict. Use SimulationOptions and SimulationExecutionOptions classes from kwave.options, with snake_case attributes like pml_inside instead of MATLAB-style PMLInside.
Applied to files:
kwave/kspaceFirstOrder1D.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (18)
- GitHub Check: test (windows-latest, 3.11)
- GitHub Check: test (ubuntu-latest, 3.12)
- GitHub Check: test (windows-latest, 3.12)
- GitHub Check: test (windows-latest, 3.13)
- GitHub Check: test (ubuntu-latest, 3.13)
- GitHub Check: test (windows-latest, 3.10)
- GitHub Check: test (macos-latest, 3.11)
- GitHub Check: test (macos-latest, 3.13)
- GitHub Check: test (macos-latest, 3.10)
- GitHub Check: test (ubuntu-latest, 3.11)
- GitHub Check: test (macos-latest, 3.12)
- GitHub Check: test (ubuntu-latest, 3.10)
- GitHub Check: test_install (windows-latest, 3.10, dev)
- GitHub Check: test_install (macos-latest, 3.12, docs)
- GitHub Check: test_install (windows-latest, 3.11, dev)
- GitHub Check: test_install (windows-latest, 3.10, docs)
- GitHub Check: test_install (windows-latest, 3.12, docs)
- GitHub Check: test_install (windows-latest, 3.10, examples)
🔇 Additional comments (5)
kwave/kspaceFirstOrder1D.py (5)
91-96: LGTM! The function signature is well-typed and follows the established pattern used in `kspaceFirstOrder2D`, with separate `simulation_options` and `execution_options` parameters. Based on learnings, this is the correct API pattern for k-Wave Python.

268-288: LGTM! The `extract_options` dictionary setup properly mirrors the record flags from `kWaveSimulation` and correctly sets `compute_directivity` to `False` for the 1D case.

378-394: LGTM! The index conversion from MATLAB's 1-based indexing to Python's 0-based indexing is correctly applied to all source position and signal indices.

668-671: LGTM! The GPU-to-CPU conversion is properly guarded and returns the sensor data correctly.

444-445: No action needed. `BonA` will always be defined when accessed in nonlinear branches because `is_nonlinear` (which guards those branches) returns `True` if and only if `medium.BonA is not None` (see kmedium.py:114). The condition at line 444 is guaranteed to be `True` whenever the nonlinear code paths are executed.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In @kwave/kspaceFirstOrder1D.py:
- Around line 1-16: Imports are unsorted; reorder them to follow isort/PEP8
grouping: put standard library imports first (e.g., import importlib, from
typing import Union, import warnings), then third-party imports (the conditional
cupy block, import numpy as np, import scipy.fft, from scipy.interpolate import
interp1d, from tqdm import tqdm), with a single blank line between stdlib and
third-party groups; preserve the try/except for cupy but move it into the
third-party section, and then run ruff check --fix or isort to apply and verify
the changes.
- Around line 456-462: The code incorrectly accesses absorption attributes via
medium.medium and k_sim.medium.medium; update the branches in
k_sim.equation_of_state handling to use the direct attributes instead (e.g.,
replace medium.medium.absorb_tau -> medium.absorb_tau and
k_sim.medium.medium.absorb_tau -> k_sim.absorb_tau), preserving the
xp.asarray(xp.squeeze(...), dtype=my_type) pattern and identical variable names
(absorb_tau, absorb_eta, absorb_nabla1, absorb_nabla2) so the dtype/squeeze
behavior remains the same.
🧹 Nitpick comments (4)
kwave/kspaceFirstOrder1D.py (4)
60-67: Add guard against `cp` being None. If this helper is called when CuPy is unavailable, `cp.asarray` will raise `AttributeError: 'NoneType' object has no attribute 'asarray'`. Consider adding a defensive check.

Suggested fix

 def to_gpu(obj, verbose: bool = False):
     """Convert all numpy arrays in the given object to cupy arrays."""
+    if cp is None:
+        raise RuntimeError("CuPy is not available; cannot convert to GPU arrays.")
     for name, val in vars(obj).items():
         if isinstance(val, np.ndarray):
             if verbose:
                 print(f"Converting {name} to cupy array")
             setattr(obj, name, cp.asarray(val))
     return obj
70-77: Same defensive guard needed here. For consistency with `dotdict_to_cpu`, which checks `if cp is None`, this function should also guard against CuPy unavailability.

Suggested fix

 def dotdict_to_gpu(obj, verbose: bool = False):
     """Convert all numpy arrays in the given dotdict to cupy arrays."""
+    if cp is None:
+        raise RuntimeError("CuPy is not available; cannot convert to GPU arrays.")
     for name, val in obj.items():
6-6: Unused import: `cp_fft` is never used. The code correctly uses `xp.fft.*` methods, making the explicit `cp_fft` import unnecessary.

Remove unused import

 try:
     import cupy as cp
-    from cupy import fft as cp_fft
 except ImportError:
     cp = None
-    cp_fft = None
80-81: Docstring is misleading. The docstring says "Convert all numpy arrays... to numpy arrays" but the function converts CuPy arrays to NumPy arrays.

Fix docstring

 def dotdict_to_cpu(obj, verbose: bool = False):
-    """Convert all numpy arrays in the given dotdict to numpy arrays."""
+    """Convert all cupy arrays in the given dotdict to numpy arrays."""
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
kwave/kspaceFirstOrder1D.py
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: waltsims
Repo: waltsims/k-wave-python PR: 625
File: docs/get_started/first_simulation.rst:147-154
Timestamp: 2025-09-11T16:12:04.876Z
Learning: In k-Wave Python, kspaceFirstOrder2D function takes separate simulation_options and execution_options parameters, not a single options dict. Use SimulationOptions and SimulationExecutionOptions classes from kwave.options, with snake_case attributes like pml_inside instead of MATLAB-style PMLInside.
📚 Learning: 2025-09-11T16:12:04.876Z
Learnt from: waltsims
Repo: waltsims/k-wave-python PR: 625
File: docs/get_started/first_simulation.rst:147-154
Timestamp: 2025-09-11T16:12:04.876Z
Learning: In k-Wave Python, kspaceFirstOrder2D function takes separate simulation_options and execution_options parameters, not a single options dict. Use SimulationOptions and SimulationExecutionOptions classes from kwave.options, with snake_case attributes like pml_inside instead of MATLAB-style PMLInside.
Applied to files:
kwave/kspaceFirstOrder1D.py
🧬 Code graph analysis (1)
kwave/kspaceFirstOrder1D.py (8)
kwave/kgrid.py (3)
  kWaveGrid (14-701), dx (178-182), Nx (157-161)
kwave/ksource.py (3)
  kSource (11-383), p0 (53-57), p0 (60-66)
kwave/kWaveSimulation.py (4)
  kWaveSimulation (36-1657), use_sensor (170-176), equation_of_state (156-167), is_nonlinear (148-153)
kwave/kWaveSimulation_helper/extract_sensor_data.py (1)
  extract_sensor_data (8-715)
kwave/utils/data.py (1)
  scale_time (57-111)
kwave/utils/dotdictionary.py (1)
  dotdict (4-57)
kwave/utils/pml.py (1)
  get_pml (6-65)
kwave/utils/tictoc.py (3)
  TicToc (4-44), tic (17-24), toc (27-44)
🪛 GitHub Actions: Ruff
kwave/kspaceFirstOrder1D.py
[error] 1-1: Ruff I001 Import block is un-sorted or un-formatted.
🪛 GitHub Check: ruff
kwave/kspaceFirstOrder1D.py
[failure] 1-2: Ruff (I001)
kwave/kspaceFirstOrder1D.py:1:1: I001 Import block is un-sorted or un-formatted
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (18)
- GitHub Check: test (ubuntu-latest, 3.13)
- GitHub Check: test (macos-latest, 3.11)
- GitHub Check: test (ubuntu-latest, 3.11)
- GitHub Check: test (windows-latest, 3.10)
- GitHub Check: test (windows-latest, 3.11)
- GitHub Check: test (ubuntu-latest, 3.12)
- GitHub Check: test (ubuntu-latest, 3.10)
- GitHub Check: test (windows-latest, 3.12)
- GitHub Check: test (windows-latest, 3.13)
- GitHub Check: test_install (windows-latest, 3.10, dev)
- GitHub Check: test_install (macos-latest, 3.13, dev)
- GitHub Check: test_install (macos-latest, 3.10, dev)
- GitHub Check: test_install (macos-latest, 3.10, docs)
- GitHub Check: test_install (windows-latest, 3.13, test)
- GitHub Check: test_install (windows-latest, 3.12, all)
- GitHub Check: test_install (windows-latest, 3.12, dev)
- GitHub Check: test_install (windows-latest, 3.11, dev)
- GitHub Check: test_install (windows-latest, 3.11, test)
🔇 Additional comments (2)
kwave/kspaceFirstOrder1D.py (2)
246-254: Clean GPU fallback handling. The pattern of checking `execution_options.is_gpu_simulation`, attempting GPU initialization, and falling back gracefully with a warning is well implemented.

502-502: Good use of `tqdm` for progress tracking. The progress bar will improve UX for long-running simulations.
kwave/kspaceFirstOrder1D.py
Outdated
from typing import Union
import warnings

try:
    import cupy as cp
    from cupy import fft as cp_fft
except ImportError:
    cp = None
    cp_fft = None

import importlib

import numpy as np
import scipy.fft
from scipy.interpolate import interp1d
from tqdm import tqdm
Fix import ordering to pass CI.
The pipeline is failing due to unsorted imports. Run ruff check --fix or manually reorder imports following isort conventions (stdlib → third-party → local).
Suggested import order
-from typing import Union
-import warnings
-
-try:
- import cupy as cp
- from cupy import fft as cp_fft
-except ImportError:
- cp = None
- cp_fft = None
-
-import importlib
-
-import numpy as np
-import scipy.fft
-from scipy.interpolate import interp1d
-from tqdm import tqdm
+import importlib
+import warnings
+from typing import Union
+
+import numpy as np
+import scipy.fft
+from scipy.interpolate import interp1d
+from tqdm import tqdm
+
+try:
+ import cupy as cp
+ from cupy import fft as cp_fft
+except ImportError:
+ cp = None
+ cp_fft = None

🧰 Tools
🪛 GitHub Actions: Ruff
[error] 1-1: Ruff I001 Import block is un-sorted or un-formatted.
🪛 GitHub Check: ruff
[failure] 1-2: Ruff (I001)
kwave/kspaceFirstOrder1D.py:1:1: I001 Import block is un-sorted or un-formatted
🤖 Prompt for AI Agents
In @kwave/kspaceFirstOrder1D.py around lines 1 - 16, Imports are unsorted;
reorder them to follow isort/PEP8 grouping: put standard library imports first
(e.g., import importlib, from typing import Union, import warnings), then
third-party imports (the conditional cupy block, import numpy as np, import
scipy.fft, from scipy.interpolate import interp1d, from tqdm import tqdm), with
a single blank line between stdlib and third-party groups; preserve the
try/except for cupy but move it into the third-party section, and then run ruff
check --fix or isort to apply and verify the changes.
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In @kwave/kspaceFirstOrder1D.py:
- Around line 662-693: The initial-condition branch uses xp.append (which breaks
on GPU) and redundantly squeezes p0 here; replace xp.append calls in the
finite-difference cases with xp.concatenate or explicit padding using xp.zeros
of the correct dtype/shape to build the shifted arrays (e.g., for the 2nd-order
dpdx build shifted = xp.concatenate((p[1:], xp.zeros((1,), dtype=p.dtype))) and
for the 4th-order create two-element zero padding similarly) so shapes match and
GPU backends work, and remove the local p0 = xp.squeeze(p0) here (squeeze p0
once at initialization instead) while keeping p_k computation and ux_sgx logic
unchanged (refer to symbols p0, ux_sgx, dpdx, options.use_finite_difference,
p_k, xp.append).
- Around line 539-546: The code uses xp.append and xp.insert (e.g., in the dpdx
calculation inside the finite-difference branches for dpdx and update to ux_sgx)
which fails under CuPy; replace xp.append/p.insert patterns with
xp.concatenate-based equivalents (for example, xp.concatenate([p[1:],
xp.array([0.0], dtype=p.dtype)]) instead of xp.append(p[1:], 0.0), and
xp.concatenate([xp.array([0.0], dtype=p.dtype), p[:-1]]) instead of
xp.insert(p[:-1], 0, 0) ) so dpdx retains the same shape and dtype; update all
occurrences referenced (the dpdx expressions and any other uses of
xp.append/xp.insert such as the other finite-difference branches around ux_sgx)
to use xp.concatenate and explicit small constant arrays to ensure compatibility
with both NumPy and CuPy.
- Around line 704-705: The extract_sensor_data function is not GPU-safe because
it uses hardcoded NumPy calls; change all np.* usages inside extract_sensor_data
to use the passed-in xp namespace (e.g., xp.fft, xp.unravel_index, xp.squeeze,
xp.real, xp.maximum, xp.minimum, xp.sqrt, xp.meshgrid, xp.array, xp.arange,
xp.sum) so it works with CuPy arrays, and for operations missing in CuPy
(notably np.interp) provide a fallback: implement a small xp-conditional wrapper
that uses numpy.interp when xp is numpy and a compatible CuPy interpolation
alternative or a CPU-based numpy fallback (copy to CPU, call numpy.interp, copy
back) when xp is cupy; also ensure FFT calls use xp.fft where available and
handle differences in signatures between numpy.fft and cupy.fft.
- Around line 267-278: Remove the unused local variable src and make the
attribute checks robust by replacing the fragile `getattr(..., False) is not
False` pattern with explicit None checks (e.g., `getattr(k_sim.source, "ux",
None) is not None` and similarly for "p") or use `hasattr(k_sim.source, "ux")`
then fetch the value; then only promote to 2D when the attribute exists and has
ndim < 2 (e.g., check `k_sim.source.ux.ndim < 2` after ensuring the attribute is
not None), keeping the existing warnings and assignments to `k_sim.source.ux`
and `k_sim.source.p`.
🧹 Nitpick comments (6)
kwave/kspaceFirstOrder1D.py (6)
58-67: `to_gpu` may fail silently for objects without `__dict__` or with slots. The function uses `vars(obj)`, which only works for objects with `__dict__`. Objects using `__slots__` or built-in types will raise `TypeError`. Consider adding a type check or documenting the limitation. Additionally, the function doesn't handle nested objects or arrays within containers (lists, dicts as attributes).

Suggested improvement

 def to_gpu(obj, verbose: bool = False):
     """Convert all numpy arrays in the given object to cupy arrays."""
     if cp is None:
         raise RuntimeError("CuPy is not available; cannot convert to GPU arrays.")
+    if not hasattr(obj, '__dict__'):
+        raise TypeError(f"Object of type {type(obj).__name__} does not support attribute iteration.")
     for name, val in vars(obj).items():
         if isinstance(val, np.ndarray):
             if verbose:
                 print(f"Converting {name} to cupy array")
             setattr(obj, name, cp.asarray(val))
     return obj
235-238: USAGE section shows outdated call signature. The documented usage doesn't match the Python function signature. Based on learnings, k-Wave Python uses separate `simulation_options` and `execution_options` parameters.

Suggested documentation update

 USAGE:
-    sensor_data = kspaceFirstOrder1D(kgrid, medium, source, sensor)
-    sensor_data = kspaceFirstOrder1D(kgrid, medium, source, sensor, ...)
+    sensor_data = kspace_first_order_1D(kgrid, source, sensor, medium,
+                                        simulation_options, execution_options)
 """
315-339: Potential division by zero in density inversion. Line 336 computes `1.0 / rho0_sgx` without checking for zero values. While physically density should never be zero, defensive programming would add a check or use a small epsilon.

Defensive check

 # invert rho0 so it doesn't have to be done each time step
+if np.any(rho0_sgx == 0):
+    raise ValueError("Density (rho0) cannot be zero.")
 rho0_sgx_inv = 1.0 / rho0_sgx
505-508: `source_mat` is potentially allocated twice with identical shape. If both `k_sim.source_ux` and `k_sim.source_p` are not False, `source_mat` is allocated twice on lines 506 and 508. While harmless (same shape), it's redundant.

Consolidate allocation

-if k_sim.source_ux is not False:
-    source_mat = xp.zeros((Nx,), dtype=my_type)
-if k_sim.source_p is not False:
-    source_mat = xp.zeros((Nx,), dtype=my_type)
+if k_sim.source_ux is not False or k_sim.source_p is not False:
+    source_mat = xp.zeros((Nx,), dtype=my_type)
556-571: Source data may remain on CPU while field arrays are on GPU. When `using_gpu=True`, `k_sim.source.ux` is not converted to GPU. Line 560 assigns `k_sim.source.ux[..., t_index]` (CPU numpy array) to `ux_sgx[...]` (GPU cupy array). While CuPy handles this via implicit transfer, it incurs per-timestep host-to-device copies, degrading performance. Consider converting source arrays to GPU before the time loop:

 if using_gpu and k_sim.source_ux is not False:
     k_sim.source.ux = xp.asarray(k_sim.source.ux)
95-100: Consider adding a return type annotation. The function signature lacks a return type annotation. Based on the code, it returns a `dotdict` containing sensor data.

 def kspace_first_order_1D(kgrid: kWaveGrid, source: kSource, sensor: Union[NotATransducer, kSensor],
                           medium: kWaveMedium, simulation_options: SimulationOptions,
-                          execution_options: SimulationExecutionOptions):
+                          execution_options: SimulationExecutionOptions) -> dotdict:
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
kwave/kspaceFirstOrder1D.py
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: waltsims
Repo: waltsims/k-wave-python PR: 625
File: docs/get_started/first_simulation.rst:147-154
Timestamp: 2025-09-11T16:12:04.876Z
Learning: In k-Wave Python, kspaceFirstOrder2D function takes separate simulation_options and execution_options parameters, not a single options dict. Use SimulationOptions and SimulationExecutionOptions classes from kwave.options, with snake_case attributes like pml_inside instead of MATLAB-style PMLInside.
📚 Learning: 2025-09-11T16:12:04.876Z
Learnt from: waltsims
Repo: waltsims/k-wave-python PR: 625
File: docs/get_started/first_simulation.rst:147-154
Timestamp: 2025-09-11T16:12:04.876Z
Learning: In k-Wave Python, kspaceFirstOrder2D function takes separate simulation_options and execution_options parameters, not a single options dict. Use SimulationOptions and SimulationExecutionOptions classes from kwave.options, with snake_case attributes like pml_inside instead of MATLAB-style PMLInside.
Applied to files:
kwave/kspaceFirstOrder1D.py
🧬 Code graph analysis (1)
kwave/kspaceFirstOrder1D.py (8)
kwave/kWaveSimulation.py (8)
  kWaveSimulation (36-1657), input_checking (484-611), use_sensor (170-176), equation_of_state (156-167), source_ux (306-316), source_p (278-288), nonuniform_grid (200-206), is_nonlinear (148-153)
kwave/kWaveSimulation_helper/extract_sensor_data.py (1)
  extract_sensor_data (8-715)
kwave/options/simulation_execution_options.py (1)
  SimulationExecutionOptions (13-308)
kwave/options/simulation_options.py (1)
  SimulationOptions (46-348)
kwave/utils/data.py (1)
  scale_time (57-111)
kwave/utils/dotdictionary.py (1)
  dotdict (4-57)
kwave/utils/pml.py (1)
  get_pml (6-65)
kwave/utils/tictoc.py (3)
  TicToc (4-44), tic (17-24), toc (27-44)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (15)
- GitHub Check: test (macos-latest, 3.11)
- GitHub Check: test (macos-latest, 3.10)
- GitHub Check: test (ubuntu-latest, 3.10)
- GitHub Check: test (windows-latest, 3.11)
- GitHub Check: test (windows-latest, 3.13)
- GitHub Check: test (ubuntu-latest, 3.13)
- GitHub Check: test (macos-latest, 3.12)
- GitHub Check: test (windows-latest, 3.10)
- GitHub Check: test (macos-latest, 3.13)
- GitHub Check: test (ubuntu-latest, 3.12)
- GitHub Check: test (windows-latest, 3.12)
- GitHub Check: test (ubuntu-latest, 3.11)
- GitHub Check: test_install (macos-latest, 3.10, examples)
- GitHub Check: test_install (windows-latest, 3.12, test)
- GitHub Check: test_install (windows-latest, 3.11, dev)
🔇 Additional comments (4)
kwave/kspaceFirstOrder1D.py (4)
1-30: LGTM! The imports are well-organized with proper CuPy availability guarding. The lazy import pattern with the `cp = None` fallback is appropriate for optional GPU support.

289-309: LGTM! The `extract_options` dictionary is properly guarded by `k_sim.use_sensor` and is only used within the same conditional block later.

421-443: LGTM - Visualization placeholders. Commented sections indicate planned visualization features. No functional code to review.

418-419: Line 419 does not need a null check for `record_start_index`. The `record_start_index` attribute is a managed property initialized to `1` (line 25 of `ksensor.py`) and enforced as an integer through its setter (`int(round(val))` at line 71). Unlike the source indices (lines 400-416), which are direct attributes that may not exist or be `None`, `record_start_index` is guaranteed to always be an integer and cannot be `None` by design. No additional null check is needed.

Likely an incorrect or invalid review comment.
src = k_sim.source
if getattr(k_sim.source, "ux", False) is not False:
    # only promote if it has fewer than 2 dims
    if k_sim.source.ux.ndim < 2:
        warnings.warn("A spatially and temporally varying velocity source was passed, but is the wrong shape.")
        k_sim.source.ux = np.atleast_2d(k_sim.source.ux)

if getattr(k_sim.source, "p", False) is not False:
    if k_sim.source.p.ndim < 2:
        warnings.warn("A spatially and temporally varying pressure source was passed, but is the wrong shape.")
        k_sim.source.p = np.atleast_2d(k_sim.source.p)
Unused variable and fragile attribute checks.

- Line 267: `src` is assigned but never used.
- Lines 268, 274: The pattern `getattr(..., False) is not False` is fragile. If the attribute is `None` or `0`, this will incorrectly proceed. Use explicit `None` checks.
Suggested fix
- # TODO: Move to input checking
- # If the user passed a 1D time‐series, promote it to shape (1, Nt)
- src = k_sim.source
- if getattr(k_sim.source, "ux", False) is not False:
+ # TODO: Move to input checking
+ # If the user passed a 1D time-series, promote it to shape (1, Nt)
+ if getattr(k_sim.source, "ux", None) is not None:
# only promote if it has fewer than 2 dims
if k_sim.source.ux.ndim < 2:
warnings.warn("A spatially and temporally varying velocity source was passed, but is the wrong shape.")
k_sim.source.ux = np.atleast_2d(k_sim.source.ux)
- if getattr(k_sim.source, "p", False) is not False:
+ if getattr(k_sim.source, "p", None) is not None:
if k_sim.source.p.ndim < 2:
warnings.warn("A spatially and temporally varying pressure source was passed, but is the wrong shape.")
k_sim.source.p = np.atleast_2d(k_sim.source.p)🤖 Prompt for AI Agents
In @kwave/kspaceFirstOrder1D.py around lines 267 - 278, Remove the unused local
variable src and make the attribute checks robust by replacing the fragile
`getattr(..., False) is not False` pattern with explicit None checks (e.g.,
`getattr(k_sim.source, "ux", None) is not None` and similarly for "p") or use
`hasattr(k_sim.source, "ux")` then fetch the value; then only promote to 2D when
the attribute exists and has ndim < 2 (e.g., check `k_sim.source.ux.ndim < 2`
after ensuring the attribute is not None), keeping the existing warnings and
assignments to `k_sim.source.ux` and `k_sim.source.p`.
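To make the fragility concrete, here is a minimal, self-contained sketch; `DummySource` is a hypothetical stand-in for `k_sim.source`, not the real `kSource` class:

```python
class DummySource:
    """Hypothetical stand-in for k_sim.source with a pressure attribute."""
    def __init__(self, p=None):
        self.p = p

src = DummySource(p=None)

# Fragile pattern: None is not False, so the check wrongly proceeds
# even though no pressure source was set.
fragile = getattr(src, "p", False) is not False

# Explicit pattern: a None attribute is correctly treated as "no source".
explicit = getattr(src, "p", None) is not None

print(fragile, explicit)  # True False
```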
if (hasattr(k_sim, 'source_p') and getattr(k_sim.source, 'p_mode', None) == 'additive') or \
   (hasattr(k_sim, 'source_ux') and getattr(k_sim.source, 'u_mode', None) == 'additive'):
    source_kappa = scipy.fft.ifftshift(np.cos(c_ref * kgrid.k * dt / 2.0))
Condition always evaluates hasattr as True for properties.
source_p and source_ux are properties on kWaveSimulation, so hasattr(k_sim, 'source_p') will always return True. The intended check should verify if the property value is not False.
Suggested fix
- if (hasattr(k_sim, 'source_p') and getattr(k_sim.source, 'p_mode', None) == 'additive') or \
- (hasattr(k_sim, 'source_ux') and getattr(k_sim.source, 'u_mode', None) == 'additive'):
+ if (k_sim.source_p is not False and getattr(k_sim.source, 'p_mode', None) == 'additive') or \
+ (k_sim.source_ux is not False and getattr(k_sim.source, 'u_mode', None) == 'additive'):
     source_kappa = scipy.fft.ifftshift(np.cos(c_ref * kgrid.k * dt / 2.0))

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-if (hasattr(k_sim, 'source_p') and getattr(k_sim.source, 'p_mode', None) == 'additive') or \
-   (hasattr(k_sim, 'source_ux') and getattr(k_sim.source, 'u_mode', None) == 'additive'):
+if (k_sim.source_p is not False and getattr(k_sim.source, 'p_mode', None) == 'additive') or \
+   (k_sim.source_ux is not False and getattr(k_sim.source, 'u_mode', None) == 'additive'):
     source_kappa = scipy.fft.ifftshift(np.cos(c_ref * kgrid.k * dt / 2.0))
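A short sketch of why the original check is vacuous; `Sim` below is a hypothetical stand-in for `kWaveSimulation`, whose `source_p` property returns `False` when no pressure source is set:

```python
class Sim:
    """Hypothetical stand-in for kWaveSimulation."""
    @property
    def source_p(self):
        # mimics "no pressure source defined"
        return False

sim = Sim()

# hasattr is True for any class-level property, regardless of its value,
# so it cannot distinguish "no source" from "source present".
print(hasattr(sim, "source_p"))   # True
# The value comparison is what actually carries information.
print(sim.source_p is not False)  # False
```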
        dpdx = (xp.append(p[1:], 0.0) - p) / dx
        ux_sgx = pml_x_sgx * (pml_x_sgx * ux_sgx - dt * rho0_sgx_inv * dpdx)

    case 4:
        # calculate gradient using fourth-order accurate finite
        # difference scheme (including half step forward)
        dpdx = (xp.insert(p[:-1], 0, 0) - 27.0 * p + 27 * xp.append(p[1:], 0.0) - xp.append(p[2:], [0, 0])) / (24.0 * dx)
        ux_sgx = pml_x_sgx * (pml_x_sgx * ux_sgx - dt * rho0_sgx_inv * dpdx)
CuPy does not have append function - GPU execution will fail.
xp.append() is used throughout the finite-difference branches (lines 539, 545, 583, 587, 688), but CuPy does not implement numpy.append. This will raise AttributeError when running on GPU with finite-difference enabled.
Use `concatenate` instead
case 2:
# calculate gradient using second-order accurate finite
# difference scheme (including half step forward)
- dpdx = (xp.append(p[1:], 0.0) - p) / dx
+ dpdx = (xp.concatenate([p[1:], xp.zeros(1, dtype=p.dtype)]) - p) / dx
ux_sgx = pml_x_sgx * (pml_x_sgx * ux_sgx - dt * rho0_sgx_inv * dpdx )
case 4:
# calculate gradient using fourth-order accurate finite
# difference scheme (including half step forward)
- dpdx = (xp.insert(p[:-1], 0, 0) - 27.0 * p + 27 * xp.append(p[1:], 0.0) - xp.append(p[2:], [0, 0])) / (24.0 * dx)
+ dpdx = (xp.concatenate([xp.zeros(1, dtype=p.dtype), p[:-1]]) - 27.0 * p + 27 * xp.concatenate([p[1:], xp.zeros(1, dtype=p.dtype)]) - xp.concatenate([p[2:], xp.zeros(2, dtype=p.dtype)])) / (24.0 * dx)
ux_sgx = pml_x_sgx * (pml_x_sgx * ux_sgx - dt * rho0_sgx_inv * dpdx )Apply similar changes to lines 583, 587, and 688.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-        dpdx = (xp.append(p[1:], 0.0) - p) / dx
+        dpdx = (xp.concatenate([p[1:], xp.zeros(1, dtype=p.dtype)]) - p) / dx
         ux_sgx = pml_x_sgx * (pml_x_sgx * ux_sgx - dt * rho0_sgx_inv * dpdx)
     case 4:
         # calculate gradient using fourth-order accurate finite
         # difference scheme (including half step forward)
-        dpdx = (xp.insert(p[:-1], 0, 0) - 27.0 * p + 27 * xp.append(p[1:], 0.0) - xp.append(p[2:], [0, 0])) / (24.0 * dx)
+        dpdx = (xp.concatenate([xp.zeros(1, dtype=p.dtype), p[:-1]]) - 27.0 * p + 27 * xp.concatenate([p[1:], xp.zeros(1, dtype=p.dtype)]) - xp.concatenate([p[2:], xp.zeros(2, dtype=p.dtype)])) / (24.0 * dx)
         ux_sgx = pml_x_sgx * (pml_x_sgx * ux_sgx - dt * rho0_sgx_inv * dpdx)
| ux_sgx = pml_x_sgx * (pml_x_sgx * ux_sgx - dt * rho0_sgx_inv * dpdx ) |
🤖 Prompt for AI Agents
In @kwave/kspaceFirstOrder1D.py around lines 539 - 546, The code uses xp.append
and xp.insert (e.g., in the dpdx calculation inside the finite-difference
branches for dpdx and update to ux_sgx) which fails under CuPy; replace
xp.append/p.insert patterns with xp.concatenate-based equivalents (for example,
xp.concatenate([p[1:], xp.array([0.0], dtype=p.dtype)]) instead of
xp.append(p[1:], 0.0), and xp.concatenate([xp.array([0.0], dtype=p.dtype),
p[:-1]]) instead of xp.insert(p[:-1], 0, 0) ) so dpdx retains the same shape and
dtype; update all occurrences referenced (the dpdx expressions and any other
uses of xp.append/xp.insert such as the other finite-difference branches around
ux_sgx) to use xp.concatenate and explicit small constant arrays to ensure
compatibility with both NumPy and CuPy.
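The substitution described above can be checked on the NumPy side with a quick sketch showing that the `concatenate`-based shifts reproduce the `append`/`insert` forms exactly (the array values here are arbitrary test data):

```python
import numpy as np

p = np.arange(5, dtype=np.float64)

# np.append-based forward shift (works on NumPy, but CuPy lacks append)
shift_append = np.append(p[1:], 0.0)

# concatenate-based equivalent, portable across NumPy and CuPy
shift_concat = np.concatenate([p[1:], np.zeros(1, dtype=p.dtype)])
assert np.array_equal(shift_append, shift_concat)

# backward-shift term of the fourth-order stencil, built the same way
back = np.concatenate([np.zeros(1, dtype=p.dtype), p[:-1]])
assert np.array_equal(back, np.insert(p[:-1], 0, 0))
```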
# enforce initial conditions if k_sim.source.p0 is defined instead of time varying sources
if t_index == 0 and k_sim.source.p0 is not None:

    p0 = xp.squeeze(p0)

    # add the initial pressure to rho as a mass source
    p = p0
    rhox = p0 / xp.squeeze(c0)**2

    # compute u(t = t1 - dt/2) based on u(dt/2) = -u(-dt/2) which forces u(t = t1) = 0
    if not options.use_finite_difference:

        # calculate gradient using the k-space method on a regular grid
        ux_sgx = dt * rho0_sgx_inv * xp.real(xp.fft.ifftn(ddx_k * ddx_k_shift_pos * kappa * xp.fft.fftn(p, axes=(0,)), axes=(0,))) / 2.0
        p_k = xp.fft.fftn(p, axes=(0,))
        p_k = xp.squeeze(p_k)

    else:
        match options.use_finite_difference:
            case 2:
                # calculate gradient using second-order accurate finite difference scheme (including half step forward)
                dpdx = (xp.append(p[1:], 0.0) - p) / dx
                ux_sgx = dt * rho0_sgx_inv * dpdx / 2.0

            case 4:
                # calculate gradient using fourth-order accurate finite difference scheme (including half step backward)
                dpdx = (xp.append(p[2:], [0, 0]) - 27.0 * xp.append(p[1:], 0) + 27.0 * p - xp.append(0, p[:-1])) / (24.0 * dx)
                ux_sgx = dt * rho0_sgx_inv * dpdx / 2.0

else:
    # precompute fft of p here so p can be modified for visualisation
    p_k = xp.fft.fftn(p, axes=(0,))
Initial condition block has same xp.append issue and redundant squeeze.

- Lines 683, 688: `xp.append` will fail on GPU (same issue as above).
- Line 665: `p0 = xp.squeeze(p0)` modifies `p0` only at `t_index == 0`, which is fine, but consider squeezing at initialization (line 499) instead.
🤖 Prompt for AI Agents
In @kwave/kspaceFirstOrder1D.py around lines 662 - 693, The initial-condition
branch uses xp.append (which breaks on GPU) and redundantly squeezes p0 here;
replace xp.append calls in the finite-difference cases with xp.concatenate or
explicit padding using xp.zeros of the correct dtype/shape to build the shifted
arrays (e.g., for the 2nd-order dpdx build shifted = xp.concatenate((p[1:],
xp.zeros((1,), dtype=p.dtype))) and for the 4th-order create two-element zero
padding similarly) so shapes match and GPU backends work, and remove the local
p0 = xp.squeeze(p0) here (squeeze p0 once at initialization instead) while
keeping p_k computation and ux_sgx logic unchanged (refer to symbols p0, ux_sgx,
dpdx, options.use_finite_difference, p_k, xp.append).
sensor_data = extract_sensor_data(kdim, xp, sensor_data, file_index, sensor_mask_index,
                                  extract_options, record, p, ux_sgx)
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Locate and examine extract_sensor_data function
fd -t f "extract_sensor_data.py" && echo "---" && cat kwave/kWaveSimulation_helper/extract_sensor_data.py

Repository: waltsims/k-wave-python
Length of output: 46081
extract_sensor_data is not GPU-compatible and will fail at runtime with CuPy arrays.
The function receives xp as a parameter but predominantly uses hardcoded np. calls throughout (e.g., np.fft, np.unravel_index, np.squeeze, np.real, np.maximum, np.minimum, np.sqrt, np.meshgrid, np.array, np.arange, np.sum, np.interp). These NumPy-specific functions will raise AttributeError or TypeError when operating on CuPy arrays. All np. calls must be replaced with xp. to support GPU execution, with special care for FFT operations and functions not available in CuPy's API (e.g., np.interp requires a fallback).
🤖 Prompt for AI Agents
In @kwave/kspaceFirstOrder1D.py around lines 704 - 705, The extract_sensor_data
function is not GPU-safe because it uses hardcoded NumPy calls; change all np.*
usages inside extract_sensor_data to use the passed-in xp namespace (e.g.,
xp.fft, xp.unravel_index, xp.squeeze, xp.real, xp.maximum, xp.minimum, xp.sqrt,
xp.meshgrid, xp.array, xp.arange, xp.sum) so it works with CuPy arrays, and for
operations missing in CuPy (notably np.interp) provide a fallback: implement a
small xp-conditional wrapper that uses numpy.interp when xp is numpy and a
compatible CuPy interpolation alternative or a CPU-based numpy fallback (copy to
CPU, call numpy.interp, copy back) when xp is cupy; also ensure FFT calls use
xp.fft where available and handle differences in signatures between numpy.fft
and cupy.fft.
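One possible shape for such a wrapper, as a sketch only (`xp_interp` and its signature are hypothetical, not part of k-wave-python): delegate to `numpy.interp` when `xp` is NumPy, and otherwise round-trip GPU arrays through the host via `.get()`:

```python
import numpy as np

def xp_interp(xp, x, x_pts, f_pts):
    """Hypothetical xp-conditional wrapper around numpy.interp."""
    if xp is np:
        return np.interp(x, x_pts, f_pts)
    # CPU fallback for CuPy-like arrays: copy to host, interpolate, copy back.
    def to_host(a):
        return a.get() if hasattr(a, "get") else np.asarray(a)
    result = np.interp(to_host(x), to_host(x_pts), to_host(f_pts))
    return xp.asarray(result)

# NumPy path: linear interpolation halfway between the two sample points
y = xp_interp(np, 1.5, np.array([1.0, 2.0]), np.array([10.0, 20.0]))
print(y)  # 15.0
```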
This adds some increased functionality to include a basic one-dimensional simulation example in NumPy/CuPy.
It
Summary by CodeRabbit
New Features
Bug Fixes
Documentation
Chores