Gradient-Free Noise Characterization #288
Conversation
force the parameters to be positives
force the parameters to be positive
…se_Characterization
…qt-yaqs into Noise_Characterization
d_On_d_gk has shape (n_jump_site, n_obs_site, L, n_t) loss_function changed accordingly
…ent_free_optimization
📝 Walkthrough

✏️ Tip: You can customize this high-level summary in your review settings.

Adds a noise-characterization subsystem: CompactNoiseModel, Propagator, LossClass, Characterizer, multiple optimizers (ADAM, CMA-ES, Bayesian, MCMC), GateLibrary extensions (Zero + aliases), a usage example, pyproject dependency updates, and extensive unit tests.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant Characterizer
    participant Optimizer
    participant LossClass
    participant Propagator
    participant GateLibrary
    User->>Characterizer: init(propagator, init_guess, loss)
    User->>Characterizer: adam_optimize(max_iter=N)
    Characterizer->>Optimizer: start optimization loop
    loop each candidate x
        Optimizer->>LossClass: evaluate(x)
        LossClass->>LossClass: x_to_noise_model(x)
        LossClass->>Propagator: run(noise_model)
        Propagator->>GateLibrary: lookup gates & expand per-site noise
        Propagator-->>LossClass: obs_traj, times, sim_time
        LossClass-->>Optimizer: (loss, grad, sim_time)
        Optimizer->>Optimizer: ADAM update & checkpoint
    end
    Characterizer->>LossClass: x_to_noise_model(final_x)
    Characterizer-->>User: optimal_model (CompactNoiseModel)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks: ✅ Passed checks (3 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 42
🤖 Fix all issues with AI agents
In @docs/examples/characterizer_example.md:
- Around line 138-140: Add a short clarifying note before the example's
optimization block stating that adam_optimize is a gradient-based method and
provide the gradient-free alternatives to use (cma_optimize, mcmc_optimize,
bayesian_optimize) with the same max_iterations argument; reference the existing
call to characterizer.adam_optimize(max_iterations=50) and suggest replacing or
showing alternatives like characterizer.cma_optimize(max_iterations=50),
characterizer.mcmc_optimize(max_iterations=50), or
characterizer.bayesian_optimize(max_iterations=50) so readers understand the
distinction.
- Around line 33-36: The code mixes a string path variable work_dir with a Path
object work_dir_path; replace all subsequent uses of the string (notably the
usage around the original Line 86) to use work_dir_path for consistency and
cross-platform correctness, and where an API requires a string pass
work_dir_path.as_posix() or str(work_dir_path); update any file/directory
operations, joins, or reads/writes to reference work_dir_path (or its string
form when necessary).
In @src/mqt/yaqs/core/data_structures/noise_model.py:
- Around line 236-249: The loop over self.compact_processes should validate
required keys before touching GateLibrary or instantiating gates: first assert
"name", "sites", and "strength" are present on proc, then check GateLibrary
membership for proc["name"], then retrieve the gate class once (e.g., cls =
getattr(GateLibrary, name)) and assert cls().interaction == 1 before expanding
sites into self.expanded_processes and appending to self.index_list; reorder the
checks in the loop to perform key presence checks first to fail fast and avoid
creating gate instances when keys are missing.
In @src/mqt/yaqs/noise_char/__init__.py:
- Line 8: The module docstring is too narrow—update the top-level docstring in
src/mqt/yaqs/noise_char/__init__.py to describe the full noise characterization
subsystem (not just "optimization routines"): mention that the module exports
propagators, loss evaluators, the high-level Characterizer orchestrator, and
multiple optimization strategies (gradient-based and gradient-free), and briefly
state their roles so readers know the available APIs and responsibilities.
In @src/mqt/yaqs/noise_char/characterizer.py:
- Around line 77-108: The constructor docstring refers to
"PropagatorWithGradients" but the parameter type hint is Propagator; update the
docstring for __init__ to describe the propagator parameter as Propagator (or
match whatever concrete type you intend) so the text and the type hint for the
propagator parameter are consistent; modify the docstring line that starts
"propagator : PropagatorWithGradients" to read "propagator : Propagator" and
keep the rest of the description about producing trajectories and gradients
intact.
- Around line 189-210: The code assumes adam_opt returned a non-empty
x_avg_history and directly does self.optimal_model =
self.loss.x_to_noise_model(self.x_avg_history[-1]); add a defensive check after
the adam_opt call to handle an empty self.x_avg_history: if it's empty, either
set self.optimal_model to a safe default (e.g. transform self.init_x via
self.loss.x_to_noise_model or set None) or raise a clear RuntimeError with
context; ensure this uses the same symbols (self.x_avg_history, adam_opt,
self.optimal_model, self.loss.x_to_noise_model, self.init_x) so callers get a
deterministic outcome instead of an IndexError.
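A minimal sketch of the suggested guard, factored as a free function for illustration; the attribute names follow the prompt, while the wrapper and message wording are assumptions:

```python
def finalize_optimal_model(characterizer) -> None:
    """Guarded replacement for the bare `self.x_avg_history[-1]` access."""
    if not characterizer.x_avg_history:
        msg = "adam_opt produced an empty x_avg_history; cannot build optimal_model"
        raise RuntimeError(msg)
    characterizer.optimal_model = characterizer.loss.x_to_noise_model(
        characterizer.x_avg_history[-1]
    )
```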
In @src/mqt/yaqs/noise_char/loss.py:
- Line 46: The class-level variable n_avg = 20 is shadowed by self.n_avg = 100
in __init__, causing confusion; either remove the class attribute or explain the
rationale. Fix by deleting the class attribute n_avg = 20 from the class
definition if the instance default is always 100, or add a clear comment above
the class-level n_avg explaining its purpose and why __init__ overrides it
(referencing the class variable n_avg and the constructor assignment self.n_avg
in __init__).
- Line 104: The call to set_observable_list currently passes the original
parameter ref_traj instead of the deep-copied attribute, causing potential
external mutation inconsistencies; update the call to use the deep-copied
trajectory stored on the instance (self.ref_traj) so that
self.propagator.set_observable_list(self.ref_traj) uses the internal copy
consistently (look for the set_observable_list invocation in the same scope as
where self.ref_traj is assigned).
- Around line 222-250: The set_history method can leave the object in an
inconsistent state because it does not validate that x_history, f_history,
x_avg_history, and diff_avg_history have the same length; add a length-check at
the start of set_history (in the set_history method) that computes expected_len
= len(x_history) and asserts or raises a ValueError if any of len(f_history),
len(x_avg_history), or len(diff_avg_history) differ, and only then set
self.n_eval and populate self.x_history, self.f_history, self.x_avg_history, and
self.diff_avg_history (keep existing copies/conversions). Ensure the error
message names the mismatched input(s) for easier debugging.
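A sketch of the length check, hoisted into a helper for illustration (names and message wording are assumptions):

```python
from collections.abc import Sequence


def validate_history_lengths(
    x_history: Sequence,
    f_history: Sequence,
    x_avg_history: Sequence,
    diff_avg_history: Sequence,
) -> int:
    """Return the common history length, naming any mismatched input."""
    expected_len = len(x_history)
    mismatched = [
        name
        for name, seq in (
            ("f_history", f_history),
            ("x_avg_history", x_avg_history),
            ("diff_avg_history", diff_avg_history),
        )
        if len(seq) != expected_len
    ]
    if mismatched:
        msg = f"History length mismatch (expected {expected_len}): {', '.join(mismatched)}"
        raise ValueError(msg)
    return expected_len
```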
- Around line 130-144: compute_avg currently slices x_history as
self.x_history[self.n_avg :] when len(self.x_history) > self.n_avg, which takes
items from index n_avg to end instead of the last n_avg items; change the slice
to use the tail of the list (e.g., self.x_history[-self.n_avg:]) so that when
len(self.x_history) > self.n_avg you compute the mean over the final n_avg
entries, keeping the existing behavior of using the entire history when len <=
n_avg and appending the resulting x_avg.copy() to x_avg_history.
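The fix in isolation, as a standalone helper with an assumed signature:

```python
import numpy as np


def compute_tail_average(x_history: list, n_avg: int) -> np.ndarray:
    """Mean over the last n_avg entries, or the full history when shorter."""
    window = x_history[-n_avg:] if len(x_history) > n_avg else x_history
    return np.mean(window, axis=0)
```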
- Around line 349-370: The __call__ method's docstring lists a fourth return
value (avg_min_max_traj_time) that doesn't exist in the declared return type
tuple[float, np.ndarray, float]; update the Returns section of the docstring to
describe only the three actual returned values (f: float, grad: np.ndarray,
sim_time: float) and remove the incorrect fourth item so the docstring matches
the method signature and actual behavior.
- Around line 208-220: The reset method currently clears most optimizer history
but omits resetting self.grad_history and self.converged; update the reset
method (def reset) to also set self.grad_history = [] and restore self.converged
to its initial state (False) so no stale gradient history or convergence flag
remains when the optimizer is reused.
- Line 383: Replace the inefficient deep copy of a NumPy array by using the
array's native copy: change the assignment that uses
copy.deepcopy(self.propagator.obs_array) to use self.propagator.obs_array.copy()
so self.obs_array = self.propagator.obs_array.copy(); also remove the unused
import of copy if nothing else in the module uses it.
- Line 414: Replace the inefficient zero-array construction "grad = np.array([0]
* self.d)" with a direct zeros call: use np.zeros(self.d) (optionally specifying
dtype if other code expects a specific numeric type) to create the gradient
buffer; update the line that assigns to grad in the surrounding function/method
where self.d is defined so it uses np.zeros for performance and clarity.
- Around line 393-408: The numeric gradient currently uses forward differences
giving O(epsilon) error; change the loop in the return_numeric_gradients branch
to compute central differences by evaluating loss at x_plus and x_minus for each
index: create x_minus via x.copy() subtracting self.epsilon, call
self.x_to_noise_model(x_minus) and self.propagator.run(...) to get loss_minus
from obs_array (like you do for loss_plus), then set grad[i] = (loss_plus -
loss_minus) / (2 * self.epsilon); keep existing use of self.x_to_noise_model,
self.propagator.run, and self.propagator.obs_array, and ensure you deep-copy obs
arrays as done now.
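A sketch of the central-difference scheme; here `loss_at` stands in for the x_to_noise_model, propagator.run, and obs_array pipeline described above:

```python
from collections.abc import Callable

import numpy as np


def central_difference_gradient(
    loss_at: Callable[[np.ndarray], float], x: np.ndarray, epsilon: float
) -> np.ndarray:
    """O(epsilon^2) central differences, replacing the O(epsilon) forward scheme."""
    grad = np.zeros(x.size)
    for i in range(x.size):
        x_plus = x.copy()
        x_plus[i] += epsilon
        x_minus = x.copy()
        x_minus[i] -= epsilon
        grad[i] = (loss_at(x_plus) - loss_at(x_minus)) / (2 * epsilon)
    return grad
```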
- Around line 307-331: The docstring for write_traj is misleading: replace
references to exp_vals_traj, Propagator-specific filename patterns, work_dir,
n_eval, and x0/y0/z labels with a concise description that matches the
implementation: write_traj(self, obs_array: np.ndarray, output_file: Path)
expects obs_array shaped (n_obs, n_t), prepends self.t as the first row, builds
a header "t obs_0 obs_1 ..." from n_obs, and writes the transposed array to
the provided output_file using np.savetxt with fmt="%.6f"; mention the
parameters obs_array and output_file and the returned None behavior.
- Around line 28-40: The function name lineal_function_1000 is misspelled and
misleading because it returns a constant; rename it to constant_function_1000
(or linear_function_1000 if you change implementation to be linear) and update
all references, including the default parameter in the class __init__ that
currently points to lineal_function_1000; ensure the function signature remains
(i: int) -> int and behavior unchanged if choosing constant_function_1000, and
run tests to catch any missed references.
In @src/mqt/yaqs/noise_char/optimization_algorithms/gradient_based/adam.py:
- Line 18: The trailing "# noqa: S403" on the import statement is unnecessary;
edit the line that imports pickle (the "import pickle" statement) to remove the
" # noqa: S403" comment so it reads simply "import pickle". Ensure no other uses
of that specific noqa token exist in the same module.
- Around line 163-165: Current clipping only runs when both x_low and x_up are
set, preventing one-sided bounds; update the clipping in the Adam update (the
block using variables x, x_low, x_up in
src/mqt/yaqs/noise_char/optimization_algorithms/gradient_based/adam.py) to apply
each bound independently: if x_low is not None enforce the lower bound on x, and
if x_up is not None enforce the upper bound on x (use np.maximum for lower,
np.minimum for upper or equivalent) so partial bounds are respected.
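A minimal sketch of independent bound handling (the helper name is illustrative):

```python
from __future__ import annotations

import numpy as np


def clip_to_bounds(
    x: np.ndarray, x_low: np.ndarray | None, x_up: np.ndarray | None
) -> np.ndarray:
    """Apply each bound independently so one-sided bounds are respected."""
    if x_low is not None:
        x = np.maximum(x, x_low)
    if x_up is not None:
        x = np.minimum(x, x_up)
    return x
```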
- Around line 84-86: The docstring wrongly claims "All parameter values are
clipped to the [0, 1] range"; update the docstring for the Adam optimizer
(symbol "Adam" in gradient_based/adam.py) to state that parameter clipping uses
the provided bounds x_low and x_up (i.e., parameters are clipped to [x_low,
x_up] when those bounds are supplied; if no bounds are provided it defaults to
the original behavior), and keep the rest of the docstring details
(checkpointing to f.work_dir and logging to 'performance_metric_sec.txt')
intact.
In @src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/__init__.py:
- Around line 1-8: The package __init__.py lacks an explicit public API and
doesn't re-export the optimization submodules; update the module docstring to
mention available algorithms and add an __all__ list (e.g., "__all__ =
['bayesian', 'cma', 'mcmc']") and, if desired for convenience, add explicit
relative imports (from . import bayesian, cma, mcmc) so users can access those
modules via the package namespace; ensure names in __all__ match the imported
symbols.
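A sketch of the expanded initializer, assuming the submodule names used elsewhere in this review (bayesian, cma, mcmc):

```python
"""Gradient-free optimization algorithms for noise characterization.

Provides Bayesian optimization (bayesian), CMA-ES (cma), and MCMC (mcmc).
"""

from . import bayesian, cma, mcmc

__all__ = ["bayesian", "cma", "mcmc"]
```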
In @src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/bayesian.py:
- Around line 86-122: The function bayesian_opt uses x_low and x_up without
validating they are provided: add explicit input validation at the start of
bayesian_opt to raise a clear error if either x_low or x_up is None (or
coerce/derive bounds if intended), and ensure bounds is constructed only after
validation; also update the docstring to match the actual default acq_name="UCB"
(or change the function default to "EI" if that is the intended behavior) so the
docstring and function signature are consistent (refer to symbols bayesian_opt,
x_low, x_up, acq_name, and the bounds construction).
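A sketch of the up-front validation; the torch-based (2, d) bounds construction is an assumption about the existing implementation:

```python
from __future__ import annotations

import torch


def build_bounds(x_low: list[float] | None, x_up: list[float] | None) -> torch.Tensor:
    """Fail fast on missing bounds before constructing the (2, d) bounds tensor."""
    if x_low is None or x_up is None:
        msg = "bayesian_opt requires both x_low and x_up to be provided"
        raise ValueError(msg)
    return torch.tensor([x_low, x_up], dtype=torch.double)
```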
- Around line 39-81: The docstring for get_acquisition_function incorrectly
lists "EI" as supported while the function only handles "LEI", "PI", and "UCB";
fix by either removing "EI"/Expected Improvement from the docstring or add an
"EI" branch that returns the ExpectedImprovement acquisition (e.g., return
ExpectedImprovement(model=model, best_f=best_f, maximize=True)) alongside the
existing LogExpectedImprovement, ProbabilityOfImprovement, and
UpperConfidenceBound handlers so the docs and implementation stay consistent.
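The added branch in isolation; BoTorch's analytic ExpectedImprovement accepts exactly these keyword arguments:

```python
from botorch.acquisition import ExpectedImprovement
from botorch.models.model import Model


def ei_branch(model: Model, best_f: float) -> ExpectedImprovement:
    """The missing "EI" case: analytic Expected Improvement, maximizing."""
    return ExpectedImprovement(model=model, best_f=best_f, maximize=True)
```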
In @src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/cma.py:
- Line 21: Remove the leftover interactive cell marker by deleting the
standalone line containing "# %%" from the top of the cma.py module so the file
contains only production code and no notebook markers.
- Around line 95-103: The loop contains duplicated and unsafe convergence
checks: remove the redundant direct access to f.converged and keep a single
protected check; specifically, delete the final unconditional "if f.converged:
break" and rely on the earlier guarded check "if hasattr(f, 'converged') and
getattr(f, 'converged', False): break" alongside the existing es.stop() check so
the code no longer risks AttributeError when f lacks a converged attribute.
In @src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/mcmc.py:
- Around line 103-108: The acceptance probability computation can overflow when
temperature becomes extremely small; in the MCMC loop compute a floored
temperature (e.g., temperature = max(temperature, min_temperature)) before
computing acceptance_prob = np.exp(-delta / temperature) and/or clamp
acceptance_prob to [0,1]; add a new constant or parameter like min_temperature
(e.g., 1e-8 or 1e-6) and use it wherever temperature is used for the
Metropolis-Hastings acceptance in this file (variables/identifiers: temperature,
delta, acceptance_prob, rng.random(), x, fx, f_new) so annealing (via
anneal_rate) cannot drive temperature to zero and cause overflow.
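A sketch of the floored acceptance step (the constant value and helper name are illustrative):

```python
import numpy as np

MIN_TEMPERATURE = 1e-8  # assumed floor; any small positive constant works


def metropolis_accept(delta: float, temperature: float, rng: np.random.Generator) -> bool:
    """Metropolis-Hastings acceptance with a floored temperature to avoid overflow."""
    temperature = max(temperature, MIN_TEMPERATURE)
    if delta <= 0:
        return True  # always accept improvements; also sidesteps exp() overflow
    return rng.random() < np.exp(-delta / temperature)
```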
In @src/mqt/yaqs/noise_char/propagation.py:
- Around line 308-315: The current validation loops over
noise_model.compact_processes and indexes into
self.compact_noise_model.compact_processes by i, which can raise IndexError or
fail to detect extra processes; first check that
len(noise_model.compact_processes) ==
len(self.compact_noise_model.compact_processes) and raise a clear ValueError if
they differ, then either iterate with zip(noise_model.compact_processes,
self.compact_noise_model.compact_processes) or keep the existing enumerate but
only index into the other sequence after confirming equal length; keep the
existing site/name comparison (proc["name"], proc["sites"]) and raise the same
ValueError message on mismatch.
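The length-checked comparison in isolation (helper name and messages are illustrative):

```python
def check_compact_processes(given: list, expected: list) -> None:
    """Compare compact noise processes only after confirming equal lengths."""
    if len(given) != len(expected):
        msg = f"Expected {len(expected)} compact processes, got {len(given)}"
        raise ValueError(msg)
    for proc, ref in zip(given, expected):
        if proc["name"] != ref["name"] or proc["sites"] != ref["sites"]:
            msg = "Noise model process does not match the propagator's compact noise model"
            raise ValueError(msg)
```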
- Around line 221-233: The code uses max(all_obs_sites) which will raise if
obs_list is empty; update propagation.py around the obs_list handling (variables
obs_list, all_obs_sites, self.sites, self.n_obs, set_observables) to explicitly
handle the empty case: compute all_obs_sites, then if not all_obs_sites skip the
max check (or raise a clear ValueError with a descriptive message), otherwise
perform if max(all_obs_sites) >= self.sites and raise the existing ValueError;
ensure self.n_obs is set (0 for empty) and set_observables is still assigned
consistently.
- Around line 38-42: The loop over noise_model.processes uses
getattr(GateLibrary, proc["name"]) which will raise a raw AttributeError for
unknown names; wrap this in a try/except AttributeError around the getattr call
inside the loop and raise a clearer exception (e.g., ValueError) that includes
the bad proc["name"] (and optionally the process index or list of valid names
via dir(GateLibrary)) so callers see an actionable message; keep adding
Observable(gate(), site) for valid gates to noise_list as before.
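A sketch of the wrapped lookup (helper name and message wording are illustrative):

```python
def resolve_gate(gate_library: object, name: str) -> object:
    """Turn an unknown gate name into an actionable ValueError."""
    try:
        return getattr(gate_library, name)
    except AttributeError as exc:
        valid = ", ".join(n for n in dir(gate_library) if not n.startswith("_"))
        msg = f"Unknown gate '{name}' in noise model; valid names include: {valid}"
        raise ValueError(msg) from exc
```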
- Around line 193-197: The max() call over the generator for
expanded_noise_model.processes can raise ValueError when the sequence is empty;
update the check around self.sites to handle an empty processes list by either
using max((proc["sites"][0] for proc in self.expanded_noise_model.processes),
default=-1) or first guarding with an if not
self.expanded_noise_model.processes: ...; then perform the comparison (using the
default -1 will skip the raise) and keep the existing ValueError with the same
message when an out-of-range site index is detected.
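The `default=` variant in isolation; note that the generator must be parenthesized when combined with the keyword argument:

```python
def max_noise_site(processes: list) -> int:
    """Largest site index referenced by the expanded processes, or -1 when empty."""
    return max((proc["sites"][0] for proc in processes), default=-1)
```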
- Line 8: Fix the module docstring typo: change "trahectories" to "trajectories"
in the top-level docstring that currently reads 'Performs the simulation of the
Ising model and returns expectations values and A_kn trahectories.' so it
correctly reads '...A_kn trajectories.'.
In @tests/core/data_structures/test_noise_model.py:
- Around line 97-127: The test function test_compact_noise_model_assertion and
its docstring claim to validate CompactNoiseModel but the assertions instantiate
NoiseModel; update the test to be consistent by replacing NoiseModel with
CompactNoiseModel in the three pytest.raises blocks (or alternatively rename the
function/docstring to reference NoiseModel if that was intended) so the test
actually validates CompactNoiseModel's missing-field assertions; refer to the
function name test_compact_noise_model_assertion and the classes NoiseModel and
CompactNoiseModel when making the change.
In @tests/noise_char/test_bayesian.py:
- Around line 190-225: The test_bayesian_opt_convergence test currently only
checks result types and doesn't assert that early stopping occurred; update the
assertions after calling bayesian_opt to verify convergence/early termination by
asserting loss.n_eval < (n_init + max_iter) (or assert loss.n_eval == 3 given
the test sets converged after the 3rd eval), and optionally assert that the
returned best_x is one of the evaluated points in _x_train and best_y matches a
value in _y_train to strengthen correctness checks.
In @tests/noise_char/test_characterizer.py:
- Around line 153-157: The docstring example calls create_instances(tmp_path,
test) but the actual function signature is create_instances(test, tmp_path);
update the example invocation to create_instances(test, tmp_path) (and any
accompanying comment text) so the example argument order matches the
create_instances function.
- Around line 132-135: The Returns docstring is missing the seventh element
`loss` that the function actually returns (the type annotation indicates a tuple
including LossClass); update the Returns section to describe the full 7-tuple by
adding an entry for `loss` (its type, e.g., LossClass or Loss) and a short
description of what it represents alongside the existing entries for MPO, MPS,
Observable list, AnalogSimParams, CompactNoiseModel, and Propagator so the
docstring matches the annotated return type.
In @tests/noise_char/test_mcmc.py:
- Around line 82-91: The helper function make_loss is duplicated across tests;
extract it into a shared test utility (e.g., tests/conftest.py or
tests/utils.py) and replace local copies with an import or fixture. Move the def
make_loss(obj: object) -> LossClass implementation into the shared module,
export it, then update the tests (both test_mcmc.py and test_cma.py) to import
make_loss from that module (or register it as a pytest fixture) and remove the
duplicated definitions in each test file.
In @tests/noise_char/test_optimization.py:
- Around line 445-455: The manual cleanup loop using
loss_function.work_dir.glob("opt_traj*.txt"), ("restart*.pkl"), and
("performance*.txt") is redundant with pytest's tmp_path fixture; remove these
three cleanup blocks and ensure the test uses a tmp_path (or tmp_path_factory)
for loss_function.work_dir so pytest auto-cleans between tests, or if mid-test
removal is actually required keep only the specific unlink calls and add a short
comment explaining why manual cleanup is necessary.
- Line 13: Remove the unused inline noqa by editing the import of pickle in
tests/noise_char/test_optimization.py: change the line importing pickle (the
symbol "pickle") to not include the trailing "# noqa: S403" since the S403 rule
isn't enabled and the directive is unnecessary; simply keep the import statement
without the noqa comment.
In @tests/noise_char/test_propagation.py:
- Around line 24-74: The Parameters class is duplicated across tests; move it to
a shared test fixture by extracting the class (Parameters) into
tests/conftest.py (or a test utilities module) and import or provide it as a
fixture in both test_propagation.py and test_optimization.py; update the tests
to either import Parameters from the shared module or declare a fixture named
parameters that returns a Parameters() instance so both test files use the
single shared definition and eliminate duplication.
- Around line 159-179: Rename the test function and update its docstring to
reflect the current API: change the test function name
test_propagatorwithgradients_runs to something like test_propagator_runs and
update the docstring references from propagation.tjm_traj to
propagation.Propagator (or the Propagator.run behavior) so the name and
description match the code that uses create_propagator_instance and
propagator.run; ensure any mentions of gradients or tjm_traj are removed or
revised to describe propagator.times and propagator.obs_array assertions.
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (24)
- docs/examples/characterizer_example.md
- pyproject.toml
- src/mqt/yaqs/core/data_structures/noise_model.py
- src/mqt/yaqs/core/libraries/gate_library.py
- src/mqt/yaqs/noise_char/__init__.py
- src/mqt/yaqs/noise_char/characterizer.py
- src/mqt/yaqs/noise_char/loss.py
- src/mqt/yaqs/noise_char/optimization_algorithms/__init__.py
- src/mqt/yaqs/noise_char/optimization_algorithms/gradient_based/__init__.py
- src/mqt/yaqs/noise_char/optimization_algorithms/gradient_based/adam.py
- src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/__init__.py
- src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/bayesian.py
- src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/cma.py
- src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/mcmc.py
- src/mqt/yaqs/noise_char/propagation.py
- tests/core/data_structures/test_noise_model.py
- tests/core/libraries/test_gate_library.py
- tests/noise_char/__init__.py
- tests/noise_char/test_bayesian.py
- tests/noise_char/test_characterizer.py
- tests/noise_char/test_cma.py
- tests/noise_char/test_mcmc.py
- tests/noise_char/test_optimization.py
- tests/noise_char/test_propagation.py
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-11-04T14:26:25.420Z
Learnt from: marcelwa
Repo: munich-quantum-toolkit/core PR: 1243
File: test/python/qdmi/qiskit/conftest.py:11-19
Timestamp: 2025-11-04T14:26:25.420Z
Learning: In the munich-quantum-toolkit/core repository, Qiskit is always available as a dependency during testing, so import guards for qiskit-dependent imports in test files (e.g., test/python/qdmi/qiskit/*.py) are not necessary.
Applied to files:
pyproject.toml
🧬 Code graph analysis (7)
src/mqt/yaqs/noise_char/characterizer.py (3)
- src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/mcmc.py (1): mcmc_opt (25-128)
- src/mqt/yaqs/noise_char/loss.py (2): LossClass (43-418), x_to_noise_model (333-347)
- src/mqt/yaqs/noise_char/propagation.py (1): Propagator (45-335)

tests/noise_char/test_cma.py (2)
- src/mqt/yaqs/noise_char/loss.py (1): LossClass (43-418)
- tests/noise_char/test_mcmc.py (1): make_loss (82-91)

tests/noise_char/test_bayesian.py (1)
- src/mqt/yaqs/noise_char/loss.py (1): LossClass (43-418)

tests/noise_char/test_propagation.py (3)
- src/mqt/yaqs/core/data_structures/networks.py (3): MPO (956-1377), MPS (36-953), init_ising (989-1042)
- src/mqt/yaqs/core/data_structures/simulation_parameters.py (2): AnalogSimParams (132-277), Observable (41-129)
- src/mqt/yaqs/noise_char/propagation.py (3): Propagator (45-335), set_observable_list (199-233), run (261-335)

src/mqt/yaqs/noise_char/loss.py (2)
- src/mqt/yaqs/core/data_structures/simulation_parameters.py (1): Observable (41-129)
- src/mqt/yaqs/noise_char/propagation.py (4): Propagator (45-335), set_observable_list (199-233), write_traj (235-259), run (261-335)

src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/mcmc.py (1)
- src/mqt/yaqs/noise_char/loss.py (1): LossClass (43-418)

tests/noise_char/test_mcmc.py (3)
- src/mqt/yaqs/noise_char/loss.py (1): LossClass (43-418)
- tests/noise_char/test_cma.py (1): make_loss (106-115)
- src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/mcmc.py (1): mcmc_opt (25-128)
🪛 Ruff (0.14.10)
tests/noise_char/test_optimization.py
13-13: Unused noqa directive (non-enabled: S403)
Remove unused noqa directive
(RUF100)
src/mqt/yaqs/noise_char/optimization_algorithms/gradient_based/adam.py
18-18: Unused noqa directive (non-enabled: S403)
Remove unused noqa directive
(RUF100)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: 🐍 Test (windows-2022) / 🐍 windows-2022
- GitHub Check: 🐍 Test (ubuntu-24.04) / 🐍 ubuntu-24.04
- GitHub Check: 🐍 Test (ubuntu-24.04-arm) / 🐍 ubuntu-24.04-arm
🔇 Additional comments (39)
tests/noise_char/__init__.py (1)
1-8: Looks good. Standard test package initializer with proper licensing and a clear docstring.
src/mqt/yaqs/noise_char/optimization_algorithms/gradient_based/__init__.py (1)
1-8: Looks good. Standard package initializer with proper licensing and an accurate docstring describing the gradient-based optimization routines.
src/mqt/yaqs/noise_char/optimization_algorithms/__init__.py (1)
1-8: LGTM! The package initializer follows project conventions with proper license headers and a clear module docstring.
tests/core/libraries/test_gate_library.py (1)
188-200: LGTM! The test follows the same pattern as other gate tests in this file and provides adequate coverage for the Zero gate, including factory method consistency between GateLibrary and BaseGate.

src/mqt/yaqs/core/libraries/gate_library.py (3)

354-361: LGTM! The zero() classmethod follows the established pattern for other gate factory methods in BaseGate.

775-795: LGTM! The Zero gate class is well-implemented with proper documentation and follows the same structure as other gate classes (X, Y, Z, Id, etc.). The 2×2 zero matrix [[0,0],[0,0]] is correct.

1756-1763: LGTM! Clear aliases for noise model compatibility. The added aliases provide convenient alternative names that align with common quantum mechanics and noise modeling conventions:
- raising/lowering are standard terms for creation/annihilation operators
- pauli_x/pauli_y/pauli_z make the Pauli nature explicit

The comment clearly explains the rationale for these aliases.
tests/core/data_structures/test_noise_model.py (1)
59-95: LGTM! Thorough test coverage for CompactNoiseModel creation and expansion. The test validates compact process storage, expansion to per-site entries, strength propagation, and index mapping. The assertions comprehensively cover the expected behavior.
tests/noise_char/test_mcmc.py (4)
27-70: LGTM! Well-designed deterministic RNG for reproducible MCMC tests. The DummyRng class provides controlled randomness with clear semantics for both normal distribution samples and acceptance decisions. The index clamping ensures stability when sequences are exhausted.

94-120: LGTM! Test correctly validates best-state tracking. The test verifies that the MCMC optimizer accepts an improved proposal (moving from [1,1] to [0,0]) and properly tracks the global best with the expected evaluation count.

123-151: LGTM! Early stopping behavior correctly validated. The test confirms that the optimizer stops after patience iterations without improvement. The expected call count of patience + 1 (initial + iterations) is correct.

154-180: LGTM! Bounds clipping correctly tested. The test verifies that out-of-bounds proposals [5.0, -5.0] are clipped to bounds [-1, 1] before evaluation, resulting in the expected [1.0, -1.0].

src/mqt/yaqs/core/data_structures/noise_model.py (2)

18-24: LGTM! Appropriate imports for the new functionality. The copy module enables deep copying to prevent external mutation, and the GateLibrary import is necessary for gate validation in CompactNoiseModel.

170-230: Well-documented class with clear behavior specification. The docstring thoroughly documents parameters, behavior, attributes, and error conditions. The validation order and expansion logic are clearly explained.
tests/noise_char/test_cma.py (4)
28-82: LGTM! Well-designed mock strategy for isolated CMA testing. The DummyStrategy class accurately mimics the CMAEvolutionStrategy interface, tracks calls appropriately, and provides controllable behavior via stop_after_first. The result tracking in tell() correctly identifies the best solution.

118-132: LGTM! Best solution selection correctly validated. The test verifies that cma_opt returns the solution with the lowest objective value from the candidates. With candidates [1.0, 2.0] (sum = 5) and [-1.0, 0.5] (sum = 1.25), the expected best is [-1.0, 0.5] with value 1.25.

135-156: LGTM! Convergence detection test validates early stopping. The test confirms that setting objective.converged = True after 2 calls triggers early termination, with only 1 tell() call made.

159-174: LGTM! Bounds propagation correctly tested. The test verifies that bounds are correctly converted to list form and passed to the CMA strategy options.
tests/noise_char/test_optimization.py (5)
29-101: LGTM! Comprehensive parameter container for test setup. The Parameters class provides a well-documented, centralized configuration for test scenarios with sensible defaults for lightweight testing.

103-162: LGTM! Well-structured test helper for LossClass instantiation. The helper correctly constructs all necessary components (Hamiltonian, initial state, observables, simulation parameters, noise model, propagator) and returns a configured LossClass instance ready for testing.

182-209: LGTM! History management and reset functionality properly tested. The test validates that reset() clears state and set_history() correctly populates history attributes.

458-526: LGTM! Restart file loading correctly tested. The test validates that adam_opt correctly loads saved state (x, m, v, histories) from a restart file and raises ValueError for missing restart files.

529-618: LGTM! Latest restart file selection and cleanup behavior tested. The test verifies that the optimizer selects the restart file with the highest iteration number and properly cleans up old restart files when restart is disabled.
tests/noise_char/test_propagation.py (2)
77-156: LGTM! Comprehensive propagator setup helper. The helper correctly initializes all components needed for propagation testing and returns them in a well-documented tuple for flexible test usage.

182-246: LGTM! Comprehensive error handling validation. The test thoroughly validates all expected ValueError conditions:
- Noise model exceeding Hamiltonian sites
- Observable list exceeding Hamiltonian sites
- Running without setting observables
- Mismatched noise model at runtime

Each error message is verified using re.escape for safe regex matching.

tests/noise_char/test_bayesian.py (4)
1-95: LGTM! Well-structured test infrastructure. The MockLossClass correctly implements the LossClass interface (a callable returning a (loss, grad, sim_time) tuple), and the make_loss helper provides clean type-checker compatibility. Warning suppression is appropriately documented as external library warnings.

97-135: LGTM! Comprehensive acquisition function tests. Tests cover all supported acquisition functions (LEI, PI, UCB) and properly verify the invalid name error case with pytest.raises.

137-169: LGTM! Thorough basic functionality test. The test validates return types, shapes, bounds constraints, and evaluation count, all essential aspects of the Bayesian optimization interface.

227-353: LGTM! Good coverage of edge cases and parameter variations. Tests appropriately cover 1D/2D/3D inputs, custom parameters (dtype, device), negative bounds, and objective improvement behavior. The dtype assertion on line 242 correctly allows both float32 and float64.
src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/mcmc.py (2)
1-43: LGTM! Clear module documentation and function signature. The docstring comprehensively documents all parameters with appropriate defaults.

76-128: LGTM! Solid MCMC implementation. The algorithm correctly:
- Initializes and tracks the global best separately from the chain state
- Applies bounds clipping after proposals
- Implements standard Metropolis-Hastings acceptance
- Supports early stopping via a patience counter
- Anneals both temperature and step size
tests/noise_char/test_characterizer.py (2)
30-102: LGTM! Clear parameter container for test configuration. The Parameters class provides sensible defaults for a lightweight test scenario (1 site, short simulation time). The docstring thoroughly documents all attributes.

199-218: LGTM! Test exercises the key Characterizer workflow. The test validates initialization, attribute types, and a single optimization iteration. The assertions confirm optimal_model is properly constructed after adam_optimize.

src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/bayesian.py (1)
159-214: LGTM! Core Bayesian optimization loop is well-structured. The implementation correctly:
- Scales inputs from unit cube to bounds
- Negates objectives for maximization-based GP
- Fits GP with proper transforms (Normalize, Standardize)
- Checks convergence flag for early stopping
- Returns values in original (minimization) scale
src/mqt/yaqs/noise_char/characterizer.py (2)
57-76: Class-level mutable default values can cause shared state issues. Declaring mutable defaults (list[float] | None = None) at class level is fine when they default to None, but for documentation clarity and to prevent accidental shared state if ever changed to [], consider initializing these in __init__ instead. The current pattern is acceptable since the defaults are None, but this is worth noting for future maintenance.

212-357: LGTM! Clean delegation to optimization routines. The cma_optimize, bayesian_optimize, and mcmc_optimize methods properly delegate to their respective optimizers and consistently construct optimal_model from the best parameters.

src/mqt/yaqs/noise_char/propagation.py (1)

317-335: LGTM! Simulator invocation and result extraction. The run method correctly constructs new AnalogSimParams, invokes the simulator, and extracts observable trajectories.

src/mqt/yaqs/noise_char/optimization_algorithms/gradient_based/adam.py (2)

141-206: LGTM! Well-implemented Adam optimizer with checkpointing. The optimization loop correctly implements:
- Standard Adam moment updates with bias correction
- Checkpointing at every iteration for restart capability
- Performance logging
- Multiple convergence criteria (tolerance and diff_avg_history)

117-121: Document that restart assumes prior evaluation history exists. Restoring obs_array from the pickle assumes at least one evaluation has already occurred in the previous run (since self.obs_array is only created during __call__, not in __init__). While self.t is safely initialized in __init__, self.obs_array is not documented as a standard attribute. Add a comment clarifying that restart files are only valid when the previous run completed at least one evaluation, or validate that the saved state includes the required attributes.
```python
work_dir = "test/scikit_characterizer/"
work_dir_path = Path(work_dir)
work_dir_path.mkdir(parents=True, exist_ok=True)
```
🧹 Nitpick | 🔵 Trivial
Prefer consistent use of Path objects.
Line 33 creates a string path, Line 34 converts it to a Path object, but Line 86 continues to use the string version. For consistency and cross-platform compatibility, prefer using work_dir_path throughout.
♻️ Suggested refactor for consistent Path usage

```diff
- work_dir = "test/scikit_characterizer/"
- work_dir_path = Path(work_dir)
+ work_dir_path = Path("test/scikit_characterizer/")
  work_dir_path.mkdir(parents=True, exist_ok=True)
```

And update Line 86:

```diff
- work_dir + "gammas.txt", ref_noise_model.strength_list, header="##", fmt="%.6f"
+ work_dir_path / "gammas.txt", ref_noise_model.strength_list, header="##", fmt="%.6f"
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In @docs/examples/characterizer_example.md around lines 33 - 36, The code mixes
a string path variable work_dir with a Path object work_dir_path; replace all
subsequent uses of the string (notably the usage around the original Line 86) to
use work_dir_path for consistency and cross-platform correctness, and where an
API requires a string pass work_dir_path.as_posix() or str(work_dir_path);
update any file/directory operations, joins, or reads/writes to reference
work_dir_path (or its string form when necessary).
```python
print("Optimizing ... ")
characterizer.adam_optimize(max_iterations=50)
print("Optimization completed.")
```
🧹 Nitpick | 🔵 Trivial
Clarify that this example uses gradient-based optimization.
The PR title and objectives emphasize "Gradient-Free Noise Characterization," but this example demonstrates adam_optimize, which is a gradient-based method. While it's fine to include gradient-based examples, the documentation should clarify this distinction or show gradient-free alternatives (CMA, MCMC, Bayesian) as well.
Consider adding a note before Line 138:
Note: This example uses the Adam optimizer, which is a gradient-based method. For gradient-free optimization, you can use:
- `characterizer.cma_optimize(max_iterations=50)`
- `characterizer.mcmc_optimize(max_iterations=50)`
- `characterizer.bayesian_optimize(max_iterations=50)`

🤖 Prompt for AI Agents
In @docs/examples/characterizer_example.md around lines 138 - 140, Add a short
clarifying note before the example's optimization block stating that
adam_optimize is a gradient-based method and provide the gradient-free
alternatives to use (cma_optimize, mcmc_optimize, bayesian_optimize) with the
same max_iterations argument; reference the existing call to
characterizer.adam_optimize(max_iterations=50) and suggest replacing or showing
alternatives like characterizer.cma_optimize(max_iterations=50),
characterizer.mcmc_optimize(max_iterations=50), or
characterizer.bayesian_optimize(max_iterations=50) so readers understand the
distinction.
```python
for i, proc in enumerate(self.compact_processes):
    assert "name" in proc, "Each process must have a 'name' key"
    name = proc["name"]
    if not hasattr(GateLibrary, name):
        msg = f"Gate '{name}' not found in GateLibrary"
        raise ValueError(msg)
    msg = "Only 1-site noise processes are supported in CompactNoiseModel"
    assert getattr(GateLibrary, name)().interaction == 1, msg
    assert "sites" in proc, "Each process must have a 'sites' key"
    assert "strength" in proc, "Each process must have a 'strength' key"

    for site in proc["sites"]:
        self.expanded_processes.append({"name": proc["name"], "sites": [site], "strength": proc["strength"]})
        self.index_list.append(i)
```
🧹 Nitpick | 🔵 Trivial
Validation order could be optimized to fail fast.
Currently, the code validates name key presence (line 237), then checks GateLibrary (line 239), then validates interaction (line 243), and finally checks sites and strength (lines 244-245). If sites or strength are missing, the code will still instantiate the gate before failing. Consider checking all required keys first.
Proposed reordering for fail-fast validation

```diff
 for i, proc in enumerate(self.compact_processes):
     assert "name" in proc, "Each process must have a 'name' key"
+    assert "sites" in proc, "Each process must have a 'sites' key"
+    assert "strength" in proc, "Each process must have a 'strength' key"
     name = proc["name"]
     if not hasattr(GateLibrary, name):
         msg = f"Gate '{name}' not found in GateLibrary"
         raise ValueError(msg)
     msg = "Only 1-site noise processes are supported in CompactNoiseModel"
     assert getattr(GateLibrary, name)().interaction == 1, msg
-    assert "sites" in proc, "Each process must have a 'sites' key"
-    assert "strength" in proc, "Each process must have a 'strength' key"
```
- assert "strength" in proc, "Each process must have a 'strength' key"📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| for i, proc in enumerate(self.compact_processes): | |
| assert "name" in proc, "Each process must have a 'name' key" | |
| name = proc["name"] | |
| if not hasattr(GateLibrary, name): | |
| msg = f"Gate '{name}' not found in GateLibrary" | |
| raise ValueError(msg) | |
| msg = "Only 1-site noise processes are supported in CompactNoiseModel" | |
| assert getattr(GateLibrary, name)().interaction == 1, msg | |
| assert "sites" in proc, "Each process must have a 'sites' key" | |
| assert "strength" in proc, "Each process must have a 'strength' key" | |
| for site in proc["sites"]: | |
| self.expanded_processes.append({"name": proc["name"], "sites": [site], "strength": proc["strength"]}) | |
| self.index_list.append(i) | |
| for i, proc in enumerate(self.compact_processes): | |
| assert "name" in proc, "Each process must have a 'name' key" | |
| assert "sites" in proc, "Each process must have a 'sites' key" | |
| assert "strength" in proc, "Each process must have a 'strength' key" | |
| name = proc["name"] | |
| if not hasattr(GateLibrary, name): | |
| msg = f"Gate '{name}' not found in GateLibrary" | |
| raise ValueError(msg) | |
| msg = "Only 1-site noise processes are supported in CompactNoiseModel" | |
| assert getattr(GateLibrary, name)().interaction == 1, msg | |
| for site in proc["sites"]: | |
| self.expanded_processes.append({"name": proc["name"], "sites": [site], "strength": proc["strength"]}) | |
| self.index_list.append(i) |
🤖 Prompt for AI Agents
In @src/mqt/yaqs/core/data_structures/noise_model.py around lines 236 - 249, The
loop over self.compact_processes should validate required keys before touching
GateLibrary or instantiating gates: first assert "name", "sites", and "strength"
are present on proc, then check GateLibrary membership for proc["name"], then
retrieve the gate class once (e.g., cls = getattr(GateLibrary, name)) and assert
cls().interaction == 1 before expanding sites into self.expanded_processes and
appending to self.index_list; reorder the checks in the loop to perform key
presence checks first to fail fast and avoid creating gate instances when keys
are missing.
```python
#
# Licensed under the MIT License

"""This module contains the optimization routines for noise characterization."""
```
🛠️ Refactor suggestion | 🟠 Major
Docstring scope is incomplete.
The module docstring describes only "optimization routines," but per the PR context, this module provides a comprehensive noise characterization subsystem including propagators, loss evaluators, a high-level Characterizer orchestrator, and multiple optimization strategies (both gradient-based and gradient-free). Consider revising the docstring to reflect the broader scope.
🤖 Prompt for AI Agents
In @src/mqt/yaqs/noise_char/__init__.py at line 8, The module docstring is too
narrow—update the top-level docstring in src/mqt/yaqs/noise_char/__init__.py to
describe the full noise characterization subsystem (not just "optimization
routines"): mention that the module exports propagators, loss evaluators, the
high-level Characterizer orchestrator, and multiple optimization strategies
(gradient-based and gradient-free), and briefly state their roles so readers
know the available APIs and responsibilities.
```python
def __init__(
    self,
    *,
    propagator: Propagator,
    init_guess: CompactNoiseModel,
    loss: LossClass,
) -> None:
    """Initialize the noise characterizer.

    Parameters
    ----------
    propagator : PropagatorWithGradients
        Propagator capable of producing trajectories and gradients with respect
        to the compact noise-model parameters.
    init_guess : CompactNoiseModel
        Initial guess for the compact noise model. A deep copy of this object
        will be stored on the instance.
    loss : LossClass
        Loss object that compares simulated trajectories to a reference and
        provides utilities to convert between compact noise models and flat
        optimization vectors.

    The constructor stores a deep copy of init_guess, assigns propagator
    and loss, and initializes self.init_x from init_guess.strength_list.
    """
    self.init_guess = copy.deepcopy(init_guess)
    self.propagator = propagator

    self.loss = loss

    self.init_x = self.init_guess.strength_list.copy()
```
Docstring type mismatch for propagator parameter.
Line 88 says PropagatorWithGradients but the type hint on line 80 is Propagator. Update the docstring for consistency.
📝 Fix

```diff
 Parameters
 ----------
-propagator : PropagatorWithGradients
+propagator : Propagator
     Propagator capable of producing trajectories and gradients with respect
     to the compact noise-model parameters.
```
🤖 Prompt for AI Agents
In @src/mqt/yaqs/noise_char/characterizer.py around lines 77 - 108, The
constructor docstring refers to "PropagatorWithGradients" but the parameter type
hint is Propagator; update the docstring for __init__ to describe the propagator
parameter as Propagator (or match whatever concrete type you intend) so the text
and the type hint for the propagator parameter are consistent; modify the
docstring line that starts "propagator : PropagatorWithGradients" to read
"propagator : Propagator" and keep the rest of the description about producing
trajectories and gradients intact.
```python
def make_loss(obj: object) -> LossClass:
    """Treat a simple callable/object as a LossClass for static type checking.

    Args:
        obj: The object to cast to LossClass.

    Returns:
        The object cast to LossClass type.
    """
    return cast("LossClass", obj)
```
🧹 Nitpick | 🔵 Trivial
Consider extracting make_loss to a shared test utility.
This helper is duplicated in test_cma.py (lines 106-115) and here. Extracting it to a shared conftest.py or test utilities module would reduce duplication and centralize maintenance.
🤖 Prompt for AI Agents
In @tests/noise_char/test_mcmc.py around lines 82 - 91, The helper function
make_loss is duplicated across tests; extract it into a shared test utility
(e.g., tests/conftest.py or tests/utils.py) and replace local copies with an
import or fixture. Move the def make_loss(obj: object) -> LossClass
implementation into the shared module, export it, then update the tests (both
test_mcmc.py and test_cma.py) to import make_loss from that module (or register
it as a pytest fixture) and remove the duplicated definitions in each test file.
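A sketch of the shared home for this helper, e.g. in a hypothetical tests/conftest.py (the module path and export are assumptions):

```python
from typing import cast

from mqt.yaqs.noise_char.loss import LossClass


def make_loss(obj: object) -> LossClass:
    """Treat a simple callable/object as a LossClass for static type checking."""
    return cast("LossClass", obj)
```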
```python
from __future__ import annotations

import contextlib
import pickle  # noqa: S403
```
🧹 Nitpick | 🔵 Trivial
Remove unused noqa directive.
The static analysis tool flagged this as an unused directive (RUF100). The S403 rule (pickle security) is not enabled, so the directive has no effect.
Proposed fix

```diff
-import pickle  # noqa: S403
+import pickle
```
🧰 Tools
🪛 Ruff (0.14.10)
13-13: Unused noqa directive (non-enabled: S403)
Remove unused noqa directive
(RUF100)
🤖 Prompt for AI Agents
In @tests/noise_char/test_optimization.py at line 13, Remove the unused inline
noqa by editing the import of pickle in tests/noise_char/test_optimization.py:
change the line importing pickle (the symbol "pickle") to not include the
trailing "# noqa: S403" since the S403 rule isn't enabled and the directive is
unnecessary; simply keep the import statement without the noqa comment.
```python
for file_path in loss_function.work_dir.glob("opt_traj*.txt"):
    with contextlib.suppress(Exception):
        file_path.unlink()

for file_path in loss_function.work_dir.glob("restart*.pkl"):
    with contextlib.suppress(Exception):
        file_path.unlink()

for file_path in loss_function.work_dir.glob("performance*.txt"):
    with contextlib.suppress(Exception):
        file_path.unlink()
```
🧹 Nitpick | 🔵 Trivial
Test cleanup could use pytest's tmp_path fixture semantics.
The manual file cleanup using contextlib.suppress is functional, but pytest's tmp_path fixture automatically cleans up after each test. This cleanup code may be unnecessary unless files need to be removed mid-test for specific assertions.
🤖 Prompt for AI Agents
In @tests/noise_char/test_optimization.py around lines 445 - 455, The manual
cleanup loop using loss_function.work_dir.glob("opt_traj*.txt"),
("restart*.pkl"), and ("performance*.txt") is redundant with pytest's tmp_path
fixture; remove these three cleanup blocks and ensure the test uses a tmp_path
(or tmp_path_factory) for loss_function.work_dir so pytest auto-cleans between
tests, or if mid-test removal is actually required keep only the specific unlink
calls and add a short comment explaining why manual cleanup is necessary.
```python
class Parameters:
    """Container for default test parameters used in a lightweight open-quantum-system propagation test."""

    def __init__(self) -> None:
        """Initialize default test simulation parameters.

        This constructor sets up a collection of attributes used for running a simple
        open-quantum-system propagation test. Attributes and their meanings:
        - sites (int): Number of sites/spins. Default: 1.
        - sim_time (float): Total simulation time. Default: 0.6.
        - dt (float): Time step for propagation. Default: 0.2.
        - order (int): Integration/order parameter for the propagator. Default: 1.
        - threshold (float): Numerical/truncation tolerance used in algorithms. Default: 1e-4.
        - ntraj (int): Number of trajectories to average over (stochastic methods). Default: 1.
        - max_bond_dim (int): Maximum bond dimension for tensor-network representations. Default: 4.
        - j (float): Coupling constant used in the model Hamiltonian. Default: 1.
        - g (float): Local field (e.g., transverse field) parameter. Default: 0.5.
        - times (np.ndarray): 1-D array of time points computed as np.arange(0, sim_time + dt, dt).
        - n_obs (int): Number of observables (3 per site for Pauli x, y, z). Computed as sites * 3.
        - n_jump (int): Number of jump operators (2 per site, e.g., lowering and Pauli-Z). Computed as sites * 2.
        - n_t (int): Number of time points (len(times)).
        - gamma_rel (float): Relaxation (dissipative) rate. Default: 0.1.
        - gamma_deph (float): Dephasing rate. Default: 0.15.
        - d (int): Local Hilbert-space dimension (e.g., spin-1/2 -> 2). Default: 2.

        Notes:
            - The provided defaults are chosen for lightweight tests and can be modified
              on the instance after construction if different test scenarios are required.
            - The 'times' array explicitly includes the final time by using sim_time + dt
              as the stop value in np.arange.
        """
        self.sites = 1
        self.sim_time = 0.6
        self.dt = 0.2
        self.order = 1
        self.threshold = 1e-4
        self.ntraj = 1
        self.max_bond_dim = 4
        self.j = 1
        self.g = 0.5

        self.times = np.arange(0, self.sim_time + self.dt, self.dt)

        self.n_obs = self.sites * 3  # x, y, z for each site
        self.n_jump = self.sites * 2  # lowering and pauli_z for each site
        self.n_t = len(self.times)

        self.gamma_rel = 0.1
        self.gamma_deph = 0.15

        self.d = 2
```
🧹 Nitpick | 🔵 Trivial
Consider extracting shared Parameters class to a conftest.py.
This class is nearly identical to the one in test_optimization.py. Extracting it to a shared conftest.py or test utilities module would reduce duplication and ensure consistency across test files.
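A minimal sketch of the suggested extraction (file location and fixture name are assumptions):

```python
# tests/noise_char/conftest.py (hypothetical location; Parameters would move here too)
import pytest


@pytest.fixture
def parameters() -> "Parameters":
    # One shared definition: test_propagation.py and test_optimization.py
    # request this fixture instead of duplicating the class.
    return Parameters()
```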
🤖 Prompt for AI Agents
In @tests/noise_char/test_propagation.py around lines 24 - 74, The Parameters
class is duplicated across tests; move it to a shared test fixture by extracting
the class (Parameters) into tests/conftest.py (or a test utilities module) and
import or provide it as a fixture in both test_propagation.py and
test_optimization.py; update the tests to either import Parameters from the
shared module or declare a fixture named parameters that returns a Parameters()
instance so both test files use the single shared definition and eliminate
duplication.
```python
def test_propagatorwithgradients_runs() -> None:
    """Test that `propagation.tjm_traj` executes correctly and returns expected output shapes.

    This test verifies that:
    - The function can be called with a valid `SimulationParameters` instance.
    - The returned values `t`, `original_exp_vals`, and `d_on_d_gk` are NumPy arrays.
    - The shapes of the outputs match the expected dimensions based on simulation parameters.
    - The average minimum and maximum trajectory time is returned as a list of None values.
    """
    # Prepare SimulationParameters
    test = Parameters()

    _, _, _obs_list, _, ref_noise_model, propagator = create_propagator_instance(test)

    propagator.run(ref_noise_model)

    assert isinstance(propagator.times, np.ndarray)
    assert isinstance(propagator.obs_array, np.ndarray)

    assert propagator.times.shape == (test.n_t,)
    assert propagator.obs_array.shape == (test.n_obs, test.n_t)
```
🧹 Nitpick | 🔵 Trivial
Test function name references outdated class name.
The function test_propagatorwithgradients_runs and its docstring reference propagation.tjm_traj, but the code tests propagation.Propagator. Consider renaming for clarity.
Proposed rename
```diff
-def test_propagatorwithgradients_runs() -> None:
-    """Test that `propagation.tjm_traj` executes correctly and returns expected output shapes.
+def test_propagator_runs() -> None:
+    """Test that `propagation.Propagator.run` executes correctly and returns expected output shapes.
```
📝 Committable suggestion
```python
def test_propagator_runs() -> None:
    """Test that `propagation.Propagator.run` executes correctly and returns expected output shapes.

    This test verifies that:
    - The function can be called with a valid `SimulationParameters` instance.
    - The returned values `t`, `original_exp_vals`, and `d_on_d_gk` are NumPy arrays.
    - The shapes of the outputs match the expected dimensions based on simulation parameters.
    - The average minimum and maximum trajectory time is returned as a list of None values.
    """
    # Prepare SimulationParameters
    test = Parameters()

    _, _, _obs_list, _, ref_noise_model, propagator = create_propagator_instance(test)

    propagator.run(ref_noise_model)

    assert isinstance(propagator.times, np.ndarray)
    assert isinstance(propagator.obs_array, np.ndarray)

    assert propagator.times.shape == (test.n_t,)
    assert propagator.obs_array.shape == (test.n_obs, test.n_t)
```
🤖 Prompt for AI Agents
In @tests/noise_char/test_propagation.py around lines 159 - 179, Rename the test
function and update its docstring to reflect the current API: change the test
function name test_propagatorwithgradients_runs to something like
test_propagator_runs and update the docstring references from
propagation.tjm_traj to propagation.Propagator (or the Propagator.run behavior)
so the name and description match the code that uses create_propagator_instance
and propagator.run; ensure any mentions of gradients or tjm_traj are removed or
revised to describe propagator.times and propagator.obs_array assertions.
Review continued from previous batch...
```python
def lineal_function_1000(_i: int) -> int:
    """Return a constant value of 1000.

    This function takes an input parameter and returns a fixed value of 1000,
    regardless of the input value.

    Args:
        _i (int): An integer parameter (unused in the calculation).

    Returns:
        int: The constant value 1000.
    """
    return 1000
```
Function name typo: "lineal" should be "linear" or "constant".
The function name `lineal_function_1000` contains a typo. Since the function returns a constant value regardless of input, consider either:
- Renaming to `linear_function_1000` if you plan to make it actually linear (e.g., `return _i * 1000`)
- Renaming to `constant_function_1000` to accurately reflect its current behavior
📝 Suggested fix
```diff
-def lineal_function_1000(_i: int) -> int:
-    """Return a constant value of 1000.
+def constant_function_1000(_i: int) -> int:
+    """Return a constant value of 1000.
```
Then update the default parameter in `__init__`:
```diff
-        num_traj: Callable[[int], int] = lineal_function_1000,
+        num_traj: Callable[[int], int] = constant_function_1000,
```
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In @src/mqt/yaqs/noise_char/loss.py around lines 28 - 40, The function name
lineal_function_1000 is misspelled and misleading because it returns a constant;
rename it to constant_function_1000 (or linear_function_1000 if you change
implementation to be linear) and update all references, including the default
parameter in the class __init__ that currently points to lineal_function_1000;
ensure the function signature remains (i: int) -> int and behavior unchanged if
choosing constant_function_1000, and run tests to catch any missed references.
```python
        self.propagator.set_observable_list(ref_traj)

        self.ref_traj_array = copy.deepcopy(np.array([obs.results for obs in self.ref_traj]))
```
🧩 Analysis chain
🏁 Scripts executed (verification):
```shell
find . -name "loss.py" -type f | head -5
cat -n ./src/mqt/yaqs/noise_char/loss.py | head -150
rg -n "class Observable" --type=py -A 10
sed -n '41,150p' src/mqt/yaqs/core/data_structures/simulation_parameters.py
find . -name "propagation.py" -o -name "*propagat*.py" | grep -v __pycache__ | head -10
rg -n "class Propagator" ./src/mqt/yaqs/noise_char/propagation.py -A 5
rg -n "def set_observable_list" ./src/mqt/yaqs/noise_char/propagation.py -A 15
sed -n '199,250p' ./src/mqt/yaqs/noise_char/propagation.py
rg -n "\.initialize\(" ./src/mqt/yaqs/noise_char/propagation.py -B 3 -A 3
rg -n "\.initialize\(" ./src/mqt/yaqs/noise_char/ -B 3 -A 3
rg -n "LossClass\(" ./tests/ -B 5 -A 5
rg -n "def run\(" ./src/mqt/yaqs/noise_char/propagation.py -A 30
rg -n "self.obs_traj =" ./src/mqt/yaqs/noise_char/propagation.py -B 5 -A 5
rg -n "obs.initialize\|observable.initialize" ./src/mqt/yaqs/ -B 3 -A 3
rg -n "initialize\(" ./src/mqt/yaqs/ -B 2 -A 2 | grep -A 5 -B 5 "Observable\|observable"
rg -n "propagator.run\|ref_traj=" ./tests/noise_char/ -B 2 -A 2 | head -100
```
Add validation to ensure reference trajectory observables have populated results before creating array.
Line 106 creates ref_traj_array by accessing obs.results for each observable in self.ref_traj. Although the Observable class initializes results to None and populates them during simulator.run(), this constructor does not validate that the observables have been initialized. If observables are passed without first running propagator.run(), the array will contain None values, causing silent data corruption or downstream errors in loss computation.
Add a validation check such as:
```python
if any(obs.results is None for obs in self.ref_traj):
    raise ValueError(
        "Reference trajectory observables must have results populated. "
        "Ensure propagator.run() has been called before LossClass instantiation."
    )
```
Alternatively, document this precondition clearly in the docstring (lines 62-69).
…ent_free_optimization
… gradient_free_optimization
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In `@pyproject.toml`:
- Around line 52-55: Update pyproject.toml to move the heavy optimization
libraries ("torch", "botorch", "gpytorch", "cma") out of the main dependencies
and into an optional extras group (e.g., extras = { "noise-optimization" = [...]
}) so they are installed only when requested; then add import guards in the
modules under src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/
(referencing the relevant modules and functions that import
torch/botorch/gpytorch/cma) to catch ImportError and raise a clear error message
instructing users to install the optional extra (for example: "pip install
package[noise-optimization]") when those optional libs are required.
In `@src/mqt/yaqs/noise_char/optimization_algorithms/gradient_based/__init__.py`:
- Around line 1-8: Add an explicit public API by defining __all__ in the module
to mirror sibling packages: add a top-level __all__ list containing the exported
symbol name(s) (e.g., "adam") so the module only exposes the intended API (refer
to the symbol adam in this package) and keep it updated if additional
functions/classes are added.
In `@src/mqt/yaqs/noise_char/optimization_algorithms/gradient_based/adam.py`:
- Around line 122-128: The cleanup loop in adam.py that iterates
restart_path.iterdir() and unlinks every file with suffix ".pkl" is too broad
and may delete unrelated user data; change the filter to only remove files
matching the restart pattern (e.g., "restart_step_*.pkl"). Specifically, update
the block that uses restart_dir/restart_path to only unlink Path objects whose
name matches the glob or regex "restart_step_*.pkl" (or use
restart_path.glob("restart_step_*.pkl")), keeping the
contextlib.suppress(Exception) behavior for safety.
In `@src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/bayesian.py`:
- Around line 162-205: Initialize best_x and best_y from the initial design
before entering the BO loop so they exist when max_iter == 0: compute best_idx =
torch.argmax(y_train) (or use existing y_train), set best_x =
scale_to_bounds(x_train[best_idx]) and best_y = -y_train[best_idx] prior to the
for _i in range(max_iter) loop (referencing x_train, y_train, best_x, best_y in
the Bayesian optimization block).
- Around line 122-140: The bounds tensor is created with a hardcoded dtype and
device which can mismatch user-specified tensors used in scale_to_bounds; update
the construction of bounds (the variable bounds) to pass through the function's
dtype and device parameters (use the provided dtype and device when creating the
torch.tensor/torch.as_tensor for bounds) so that scale_to_bounds(x_unit:
torch.Tensor) performs operations on tensors with matching dtype and device;
ensure any subsequent indexing like bounds[0], bounds[1] inherits that
dtype/device.
In `@src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/cma.py`:
- Around line 61-67: The docstring in the CMA implementation documents the
return order as (fbest, xbest) but the function actually returns (xbest, fbest);
update the docstring to match the implementation by describing the returns as
xbest : ndarray (best solution found) followed by fbest : float (best objective
value) so the documented return order matches the actual tuple returned (xbest,
fbest), and ensure the description text near the return block referencing
fbest/xbest is adjusted accordingly.
In `@tests/noise_char/test_characterizer.py`:
- Around line 199-218: The test currently checks initialization and that
adam_optimize runs but doesn't verify the optimization effect or model
structure; update test_characterizer_init to capture the initial model
parameters from characterizer.init_guess (e.g., parameter arrays or attributes
on CompactNoiseModel returned by create_instances), run
characterizer.adam_optimize(max_iter=1), then assert that
characterizer.optimal_model is a CompactNoiseModel and that its key parameters
differ from the initial ones (or match expected shapes/types), and also add
assertions on parameter shapes/types (e.g., numpy arrays) to confirm the
expected structure was preserved.
♻️ Duplicate comments (41)
src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/__init__.py (1)

1-8: Consider defining the public API. This package initializer is minimal and functional, but lacks an explicit `__all__` definition and re-exports for the submodules (bayesian, cma, mcmc). This was already flagged in a previous review.

src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/cma.py (2)

21-21: Remove leftover cell marker. The `# %%` marker is a notebook-style cell delimiter that should be removed from production code. This was previously flagged.

95-103: Redundant and potentially buggy convergence checks. This issue was previously flagged: line 96 safely checks `hasattr(f, "converged")` before accessing it, but line 102 directly accesses `f.converged` without protection. If `f` lacks a `converged` attribute, line 102 will raise `AttributeError`. Additionally, if the check at line 96 passes and is truthy, the loop breaks there, making line 102 unreachable in that scenario; the logic is duplicated.

🔧 Proposed fix
```diff
         # Optional custom convergence detection
         if hasattr(f, "converged") and getattr(f, "converged", False):
             break

         if es.stop():
             break
-
-        if f.converged:
-            break
```

src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/bayesian.py (2)

39-80: Docstring lists unsupported "EI". Only "LEI", "PI", and "UCB" are implemented; "EI" is advertised but not handled. Either add EI support or remove it from the docstring to avoid user confusion.

📝 Docstring-only fix
```diff
-        - "EI": Expected Improvement
         - "LEI": Log Expected Improvement
         - "PI": Probability of Improvement
         - "UCB": Upper Confidence Bound
```

🧩 Alternative: add EI support
```diff
+from botorch.acquisition import ExpectedImprovement
 ...
     if name == "LEI":
         return LogExpectedImprovement(model=model, best_f=best_f, maximize=True)
+    if name == "EI":
+        return ExpectedImprovement(model=model, best_f=best_f, maximize=True)
```

86-121: Validate bounds early and fix docstring/default/type mismatches. `x_low`/`x_up` default to `None` but are used immediately to build `bounds`, which will error with a vague message. Also the docstring says `acq_name` defaults to "EI" and `std` to 1e-6, while the signature uses "UCB" and 1e-2. Finally, the return annotation lists `best_y` as `np.ndarray` but a scalar is returned.

🐛 Suggested fix
```diff
 def bayesian_opt(
     f: LossClass,
     x_low: np.ndarray | None = None,
     x_up: np.ndarray | None = None,
     n_init: int = 5,
     max_iter: int = 15,
     acq_name: str = "UCB",
     std: float = 1e-2,
     beta: float = 2.0,
     dtype: torch.dtype = torch.double,
     device: str = "cpu",
-) -> tuple[np.ndarray, np.ndarray, torch.Tensor, torch.Tensor]:
+) -> tuple[np.ndarray, np.floating, torch.Tensor, torch.Tensor]:
@@
-        acq_name: Acquisition function name ('EI', 'UCB', etc.). Defaults to "EI".
-        std: Observation noise standard deviation. Defaults to 1e-6.
+        acq_name: Acquisition function name ('LEI', 'PI', 'UCB'). Defaults to "UCB".
+        std: Observation noise standard deviation. Defaults to 1e-2.
@@
-    bounds = torch.tensor(np.array([x_low, x_up]), dtype=torch.double)
+    if x_low is None or x_up is None:
+        msg = "x_low and x_up must be provided to define bounds."
+        raise ValueError(msg)
+    bounds = torch.tensor(np.array([x_low, x_up]), dtype=torch.double)
```

src/mqt/yaqs/noise_char/optimization_algorithms/gradient_based/adam.py (3)

18-18: Reassess the unused `# noqa: S403`. Ruff reports this as unused; either remove it or keep it with a short rationale if you want to document intentional pickle usage.

83-86: Docstring says clipping to [0,1] but implementation clips to bounds. Update the docstring to match the actual clipping behavior (`[x_low, x_up]` when bounds are provided).

163-165: Allow one-sided bounds instead of requiring both. Current logic only clips when both bounds are provided, which prevents lower-only or upper-only constraints.

♻️ Proposed fix
```diff
-    if x_low is not None and x_up is not None:
-        x = np.clip(x, x_low, x_up)
+    if x_low is not None:
+        x = np.maximum(x, x_low)
+    if x_up is not None:
+        x = np.minimum(x, x_up)
```

src/mqt/yaqs/noise_char/loss.py (12)

28-40: Rename `lineal_function_1000` to reflect its behavior. The function returns a constant, so the name is misleading. Consider renaming to `constant_function_1000` (and updating the default in `__init__`).

♻️ Suggested rename
```diff
-def lineal_function_1000(_i: int) -> int:
+def constant_function_1000(_i: int) -> int:
@@
-        num_traj: Callable[[int], int] = lineal_function_1000,
+        num_traj: Callable[[int], int] = constant_function_1000,
```

46-47: Remove or document the shadowed class-level `n_avg`. `n_avg = 20` is immediately overridden by `self.n_avg = 100`, which is confusing.

♻️ Proposed fix
```diff
 class LossClass:
     """A base LossClass to track optimization history and compute averages."""
-
-    n_avg = 20
```

101-105: Use the deep-copied reference trajectory consistently. You deep-copy `ref_traj` into `self.ref_traj` but then call `set_observable_list(ref_traj)` with the original object. Use `self.ref_traj` to avoid external mutation issues.

♻️ Proposed fix
```diff
-        self.propagator.set_observable_list(ref_traj)
+        self.propagator.set_observable_list(self.ref_traj)
```

106-106: Validate that reference observables have results before building `ref_traj_array`. If any `Observable.results` is `None`, `ref_traj_array` will include `None` and corrupt downstream loss computations. Add a guard or document this precondition explicitly.

130-144: Fix the averaging window logic in `compute_avg`. When history exceeds `n_avg`, the code averages from index `n_avg` onward instead of the last `n_avg` entries. That shortens the window and skews convergence checks.

🐛 Proposed fix
```diff
-        if len(self.x_history) <= self.n_avg:
+        if len(self.x_history) < self.n_avg:
             x_avg = np.mean(self.x_history, axis=0)
         else:
-            x_avg = np.mean(self.x_history[self.n_avg :], axis=0)
+            x_avg = np.mean(self.x_history[-self.n_avg :], axis=0)
```

208-221: `reset()` leaves stale gradients and convergence state. `grad_history` and `converged` are not reset, so the optimizer can carry stale state across runs.

🐛 Proposed fix
```diff
         self.n_eval = 0
         self.x_history = []
         self.f_history = []
         self.x_avg_history = []
         self.diff_avg_history = []
+        self.grad_history = []
+        self.converged = False
```

222-250: Validate history array lengths in `set_history`. Mismatched lengths will create inconsistent internal state and later index errors.

✅ Add a length check
```diff
     def set_history(
         self,
         x_history: list[np.ndarray] | np.ndarray,
         f_history: list[float] | np.ndarray,
         x_avg_history: list[np.ndarray] | np.ndarray,
         diff_avg_history: list[float] | np.ndarray,
     ) -> None:
@@
-        self.n_eval = len(x_history)
+        if not (len(x_history) == len(f_history) == len(x_avg_history) == len(diff_avg_history)):
+            msg = "All history arrays must have the same length."
+            raise ValueError(msg)
+        self.n_eval = len(x_history)
```

307-325: `write_traj` docstring doesn't match the implementation. The docstring references `exp_vals_traj` and filename patterns that aren't used here. Align it with the actual parameters and output format.

349-370: `__call__` docstring lists a non-existent return value. The method returns three values, but the docstring mentions a fourth. Remove the extra item to keep the docs accurate.

383-383: Avoid `copy.deepcopy()` for NumPy arrays. Use `.copy()` for efficiency and clarity.

♻️ Proposed fix
```diff
-        self.obs_array = copy.deepcopy(self.propagator.obs_array)
+        self.obs_array = self.propagator.obs_array.copy()
```

393-408: Consider central differences for numeric gradients. Forward differences are lower accuracy and can be noisy. Central differences improve accuracy at the cost of one extra evaluation per dimension.

414-414: Prefer `np.zeros(self.d)` over list-based construction. It's clearer and more efficient.

♻️ Proposed fix
```diff
-        grad = np.array([0] * self.d)
+        grad = np.zeros(self.d)
```

src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/mcmc.py (1)

103-108: Prevent numerical instability as the temperature cools. With annealing, `temperature` can become very small, causing overflow in `np.exp(-delta / temperature)`. Add a minimum temperature floor (or clamp the acceptance probability).

💡 Suggested fix
```diff
+MIN_TEMPERATURE = 1e-10
@@
-            acceptance_prob = np.exp(-delta / temperature)
+            acceptance_prob = np.exp(-delta / max(temperature, MIN_TEMPERATURE))
```

src/mqt/yaqs/noise_char/__init__.py (1)

8-8: Broaden the module docstring to reflect the full subsystem. This package now includes propagators, loss evaluators, the `Characterizer`, and multiple optimizers, not just optimization routines.

src/mqt/yaqs/core/data_structures/noise_model.py (1)

170-253: LGTM overall, with a note on validation order. The `CompactNoiseModel` implementation correctly validates, expands, and constructs the expanded noise model. The docstring is comprehensive and the logic is sound. One minor optimization (previously flagged): the validation at lines 236-245 could be reordered to check all required keys (`name`, `sites`, `strength`) before instantiating the gate class via `getattr(GateLibrary, name)()`. This would fail faster for missing keys without unnecessary gate instantiation.

tests/noise_char/test_optimization.py (2)

411-455: LGTM with a minor note on cleanup. The test correctly validates Adam optimizer execution and return types. The manual cleanup at lines 445-455 is functional but may be unnecessary since pytest's `tmp_path` fixture automatically cleans up after each test.

13-13: Remove the unused `noqa` directive. The `S403` rule (pickle security) is not enabled, so the directive has no effect.

Proposed fix
```diff
-import pickle  # noqa: S403
+import pickle
```

tests/noise_char/test_mcmc.py (1)

82-91: Consider extracting `make_loss` to a shared test utility. This helper is duplicated in `test_cma.py`. Extracting it to a shared `conftest.py` or test utilities module would reduce duplication and centralize maintenance.

tests/core/data_structures/test_noise_model.py (1)

97-127: Test validates `NoiseModel` but the function name and docstring mention `CompactNoiseModel`. The function is named `test_compact_noise_model_assertion` and the docstring states it tests `CompactNoiseModel`, but the `pytest.raises` blocks actually instantiate `NoiseModel`. Either rename the test to `test_noise_model_missing_fields_assertion` or change the assertions to use `CompactNoiseModel`.

tests/noise_char/test_cma.py (1)

106-115: Duplicate `make_loss` helper. This helper is identical to the one in `test_mcmc.py`. Consider extracting it to a shared fixture in `conftest.py`.

tests/noise_char/test_propagation.py (2)

24-74: Consider extracting the shared `Parameters` class to reduce duplication. This `Parameters` class is duplicated across test files (also in `test_optimization.py` and `test_characterizer.py`). Extracting it to a shared `conftest.py` would improve maintainability.

159-179: Test function name and docstring reference an outdated class name. The function `test_propagatorwithgradients_runs` and its docstring reference `propagation.tjm_traj`, but the code tests `propagation.Propagator`. Consider renaming for clarity.

tests/noise_char/test_bayesian.py (1)

190-225: Convergence test assertions could be strengthened. The test sets `converged=True` after the 3rd evaluation and runs with `max_iter=10`, but only asserts that the result is valid, not that early stopping actually occurred. Consider asserting `loss.n_eval < n_init + max_iter` to verify early termination.

tests/noise_char/test_characterizer.py (3)

30-101: Consider extracting the shared `Parameters` class. This class is duplicated across multiple test files. A shared fixture in `conftest.py` would reduce duplication.

132-144: Docstring return type mismatch. The return type annotation on line 106 shows a 7-tuple (including `LossClass`), but the docstring describes only a 6-tuple. Update the docstring to include `loss`.

153-157: Example in docstring has an incorrect argument order. The example shows `create_instances(tmp_path, test)` but the function signature is `create_instances(test, tmp_path)`.

src/mqt/yaqs/noise_char/characterizer.py (2)

77-108: Docstring type mismatch for the propagator parameter. Line 88 says `PropagatorWithGradients` but the type hint on line 80 is `Propagator`. Update the docstring for consistency.

209-211: Potential `IndexError` if optimization returns an empty history. Line 210 accesses `self.x_avg_history[-1]` without verifying the list is non-empty. If `adam_opt` returns an empty history (edge case), this would raise an `IndexError`.

💡 Defensive check
```diff
+        if not self.x_avg_history:
+            msg = "Optimization returned empty history"
+            raise RuntimeError(msg)
         self.optimal_model = self.loss.x_to_noise_model(self.x_avg_history[-1])
```

src/mqt/yaqs/noise_char/propagation.py (5)

1-9: Typo in module docstring. "trahectories" should be "trajectories" and "expectations values" should be "expectation values".

28-43: Missing error handling for invalid gate names. `getattr(GateLibrary, proc["name"])` will raise `AttributeError` with a cryptic message if the process name doesn't exist in `GateLibrary`. Consider wrapping it with a clearer error.

195-198: `max()` on an empty sequence will raise `ValueError`. If `expanded_noise_model.processes` is empty, the generator expression will be empty and `max()` will raise `ValueError: max() arg is an empty sequence`. Add a guard or use `max(..., default=-1)`.

199-234: `max()` on an empty observable list will raise `ValueError`. If `obs_list` is empty, `all_obs_sites` will be empty and `max(all_obs_sites)` at line 227 will fail. The docstring acknowledges this but an explicit check would be clearer.

308-316: Noise model validation assumes identical structure. The validation loop iterates over `noise_model.compact_processes` and accesses `self.compact_noise_model.compact_processes[i]` by index, assuming both have the same length. If they differ in length, this could raise `IndexError` or miss validation of extra processes. Consider an explicit length check first.
pyproject.toml
Outdated
| "cma>=3.0.0", | ||
| "torch>=2.0.0", | ||
| "botorch>=0.8.0", | ||
| "gpytorch>=1.10.0", |
🧩 Analysis chain
🌐 Web query:
PyTorch Python 3.14 support compatibility
💡 Result:
Short answer: Yes — official PyTorch gained Python 3.14 support in recent releases (nightlies first) and full wheel support was added in the stable release line around the 2.10 cycle. However, some components still have caveats (TorchScript warnings / limited torch.compile multithreading on certain “no‑GIL” builds), and platform-specific wheels (macOS/conda vs pip) can lag — if pip can't find a wheel, try the conda/nightly builds or check the PyTorch Get Started page. [1][2][3]
Sources:
- PyTorch developer discussion: torch.compile Python 3.14 support and caveats. [1]
- FBGEMM / release-compatibility table showing Python 3.14 in recent releases. [2]
- PyTorch user forum thread reporting macOS pip wheel availability issues and using conda as a workaround. [3]
🌐 Web query:
botorch gpytorch Python 3.14 support
💡 Result:
Short answer: Partial — not yet guaranteed.
- BoTorch (PyPI metadata) requires Python >= 3.10 but has no explicit “supports 3.14” statement; upstream compatibility depends on PyTorch/GPyTorch support. [1]
- GPyTorch latest stable series (v1.14.x) targets PyTorch 2.x; its releases/docs don’t advertise official CPython 3.14 support (no explicit 3.14 wheel/statement). [2][3]
- PyTorch itself worked through Python 3.14 issues and added/landed 3.14 support in late 2025 (nightlies / later releases), so BoTorch/GPyTorch compatibility with 3.14 will follow only after matching PyTorch binaries and any code fixes. [4]
- Python 3.14 entered beta in 2025; many scientific libs have been testing but you should verify wheels for your OS/Python build before upgrading. [5]
Recommendation: If you need 3.14 now, test in a disposable environment: install a PyTorch wheel built for 3.14 (or nightly), then pip-install gpytorch/botorch from PyPI or from GitHub and run your test suite. If you want an officially supported stack, wait until BoTorch/GPyTorch release notes explicitly list CPython 3.14 support or the projects publish 3.14 wheels.
Sources:
[1] botorch PyPI installation requirements.
[2] GPyTorch docs / module index (v1.14.x).
[3] GPyTorch GitHub releases (v1.14 series).
[4] PyTorch developer discussion on Python 3.14 support.
[5] Python 3.14 beta announcements (Python.org).
🏁 Scripts executed (verification):
```shell
# Check if the pyproject.toml already has optional-dependencies structure
cd /tmp && git ls-files pyproject.toml 2>/dev/null || echo "Checking repository structure"
cat -n pyproject.toml | head -100

# Search for imports of torch, botorch, gpytorch, cma in the codebase to understand usage patterns
rg "import torch|from torch|import botorch|from botorch|import gpytorch|from gpytorch|import cma|from cma" --type py -l

# Check if there are already any optional dependencies or entry points that might indicate conditional usage
rg "optional-dependencies|extras" pyproject.toml

# Find and read pyproject.toml from the repository root
git ls-files pyproject.toml 2>/dev/null && cat pyproject.toml | head -150

# Check if torch/botorch/gpytorch imports are localized to noise_char module
rg "import torch|from torch|import botorch|from botorch|import gpytorch|from gpytorch|import cma|from cma" --type py -B 2 | head -100

# Check the main package __init__.py to see what's exposed at package level
find src -name "__init__.py" -exec cat {} + | head -150

# Verify the architecture: are noise_char imports guarded anywhere?
rg "try:|except ImportError|except ModuleNotFoundError" src/mqt/yaqs --type py -A 2 -B 2 | grep -A 3 -B 3 "torch\|botorch\|gpytorch\|cma"
```
Make optimization libraries optional dependencies.
The heavy dependencies (torch ~2GB+, botorch, gpytorch, cma) are currently required for all users despite being needed only by the noise characterization feature. These imports are localized to src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/ and not exposed in the package's public API, making them suitable as optional dependencies.
Additionally, while PyTorch has Python 3.14 support, botorch and gpytorch lack explicit Python 3.14 compatibility statements yet. Moving these to optional dependencies would allow the core package to work on Python 3.14 without these dependencies, maintaining broad compatibility.
📦 Suggested change
```diff
 dependencies = [
     "numpy>=1.23.0",
     "numpy>=1.24; python_version >= '3.11'",
     "numpy>=1.26; python_version >= '3.12'",
     "numpy>=2.1; python_version >= '3.13'",
     "numpy>=2.3.2; python_version >= '3.14'",
     "scipy>=1.13.1",
     "scipy>=1.14.1; python_version >= '3.13'",
     "scipy>=1.16.1; python_version >= '3.14'",
     "qiskit>=1.1.0",
     "qiskit>=1.3.0; python_version >= '3.13'",
     "opt-einsum>=3.4.0",
     "tqdm>=4.67.1",
-    "cma>=3.0.0",
-    "torch>=2.0.0",
-    "botorch>=0.8.0",
-    "gpytorch>=1.10.0",
 ]
+[project.optional-dependencies]
+optimization = [
+    "cma>=3.0.0",
+    "torch>=2.0.0",
+    "botorch>=0.8.0",
+    "gpytorch>=1.10.0",
+]
```
🤖 Prompt for AI Agents
In `@pyproject.toml` around lines 52 - 55, Update pyproject.toml to move the heavy
optimization libraries ("torch", "botorch", "gpytorch", "cma") out of the main
dependencies and into an optional extras group (e.g., extras = {
"noise-optimization" = [...] }) so they are installed only when requested; then
add import guards in the modules under
src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/ (referencing the
relevant modules and functions that import torch/botorch/gpytorch/cma) to catch
ImportError and raise a clear error message instructing users to install the
optional extra (for example: "pip install package[noise-optimization]") when
those optional libs are required.
```python
# Copyright (c) 2025 - 2026 Chair for Design Automation, TUM
# All rights reserved.
#
# SPDX-License-Identifier: MIT
#
# Licensed under the MIT License

"""This module contains the optimization routines for noise characterization."""
```
🧹 Nitpick | 🔵 Trivial
Consider adding __all__ for consistency with sibling packages.
For consistency with the gradient_free package (where this was previously flagged), consider adding an explicit public API definition:
```python
__all__ = ["adam"]
```
In `@src/mqt/yaqs/noise_char/optimization_algorithms/gradient_based/__init__.py`
around lines 1 - 8, Add an explicit public API by defining __all__ in the module
to mirror sibling packages: add a top-level __all__ list containing the exported
symbol name(s) (e.g., "adam") so the module only exposes the intended API (refer
to the symbol adam in this package) and keep it updated if additional
functions/classes are added.
```python
        else:
            # Remove all .pkl files in the folder
            restart_path = Path(restart_dir)
            for file_path in restart_path.iterdir():
                if file_path.suffix == ".pkl":
                    with contextlib.suppress(Exception):
                        file_path.unlink()
```
🧹 Nitpick | 🔵 Trivial
Avoid deleting unrelated .pkl files in the work dir.
The cleanup removes every .pkl file, which could delete user data unrelated to restarts. Consider limiting to restart_step_*.pkl.
♻️ Safer cleanup
```diff
     for file_path in restart_path.iterdir():
-        if file_path.suffix == ".pkl":
+        if file_path.name.startswith("restart_step_") and file_path.suffix == ".pkl":
             with contextlib.suppress(Exception):
                 file_path.unlink()
```
In `@src/mqt/yaqs/noise_char/optimization_algorithms/gradient_based/adam.py`
around lines 122 - 128, The cleanup loop in adam.py that iterates
restart_path.iterdir() and unlinks every file with suffix ".pkl" is too broad
and may delete unrelated user data; change the filter to only remove files
matching the restart pattern (e.g., "restart_step_*.pkl"). Specifically, update
the block that uses restart_dir/restart_path to only unlink Path objects whose
name matches the glob or regex "restart_step_*.pkl" (or use
restart_path.glob("restart_step_*.pkl")), keeping the
contextlib.suppress(Exception) behavior for safety.
```python
    bounds = torch.tensor(np.array([x_low, x_up]), dtype=torch.double)

    d = bounds.shape[1]

    # Normalized [0,1]^d → real-space bounds
    def scale_to_bounds(x_unit: torch.Tensor) -> torch.Tensor:
        """Scale a unit interval value to a specified bounds range.

        Transforms a value from the unit interval [0, 1] to the range [bounds[0], bounds[1]]
        using linear scaling.

        Args:
            x_unit: A value in the unit interval [0, 1] to be scaled.

        Returns:
            The scaled value mapped to the bounds range [bounds[0], bounds[1]].
        """
        return bounds[0] + (bounds[1] - bounds[0]) * x_unit
```
🧩 Analysis chain
🏁 Scripts executed (verification):
```shell
# First, let's look at the file structure and locate the function
cd /tmp/repo && find . -name "bayesian.py" | head -5

# Find and read the bayesian.py file
find . -path "*noise_char/optimization_algorithms/gradient_free/bayesian.py" -type f | head -1

# Let's examine the file around the lines in question
sed -n '100,160p' src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/bayesian.py

# Let's get more context - the function definition
sed -n '1,150p' src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/bayesian.py | head -150

# Find where scale_to_bounds is called
sed -n '100,250p' src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/bayesian.py | grep -n "scale_to_bounds"

# Get more lines to see the usage context
sed -n '150,250p' src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/bayesian.py
```
Add dtype and device parameters to bounds tensor construction.
bounds is hardcoded to torch.double on CPU, but scale_to_bounds() is called with tensors created using the user-specified dtype and device. PyTorch arithmetic operations between tensors with mismatched device or dtype will fail. The tensor should respect the function's dtype and device parameters.
Proposed fix
```diff
-    bounds = torch.tensor(np.array([x_low, x_up]), dtype=torch.double)
+    bounds = torch.tensor(np.array([x_low, x_up]), dtype=dtype, device=device)
```
In `@src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/bayesian.py`
around lines 122 - 140, The bounds tensor is created with a hardcoded dtype and
device which can mismatch user-specified tensors used in scale_to_bounds; update
the construction of bounds (the variable bounds) to pass through the function's
dtype and device parameters (use the provided dtype and device when creating the
torch.tensor/torch.as_tensor for bounds) so that scale_to_bounds(x_unit:
torch.Tensor) performs operations on tensors with matching dtype and device;
ensure any subsequent indexing like bounds[0], bounds[1] inherits that
dtype/device.
```python
    x_train = torch.rand(n_init, d, dtype=dtype, device=device)
    y = eval_function(scale_to_bounds(x_train))
    y_train = -y  # Negate for minimization (BO maximizes internally)

    # Constant noise variance
    yvar_train = torch.full_like(y_train, std**2)

    # -----------------------
    # BO loop
    # -----------------------
    for _i in range(max_iter):
        model = SingleTaskGP(
            x_train,
            y_train,
            yvar_train,
            input_transform=Normalize(d),
            outcome_transform=Standardize(m=1),
        )
        mll = ExactMarginalLogLikelihood(model.likelihood, model)
        fit_gpytorch_mll(mll)

        best_f = y_train.max()
        acq_func = get_acquisition_function(acq_name, model, best_f=best_f, beta=beta)

        new_x_unit, _ = optimize_acqf(
            acq_function=acq_func,
            bounds=torch.stack([torch.zeros(d, device=device), torch.ones(d, device=device)]),
            q=1,
            num_restarts=5,
            raw_samples=50,
        )

        new_y = eval_function(scale_to_bounds(new_x_unit))
        new_y = -new_y  # negate for minimization

        # Append new data
        x_train = torch.cat([x_train, new_x_unit])
        y_train = torch.cat([y_train, new_y])
        yvar_train = torch.cat([yvar_train, torch.full_like(new_y, std**2)])

        best_idx = torch.argmax(y_train)
        best_x = scale_to_bounds(x_train[best_idx])
        best_y = -y_train[best_idx]
```
Initialize best_x/best_y before the loop to avoid unbound return.
If max_iter=0, best_x/best_y are never set and the return will throw. Initialize from the initial design before the loop.
💡 Suggested fix
```diff
     x_train = torch.rand(n_init, d, dtype=dtype, device=device)
     y = eval_function(scale_to_bounds(x_train))
     y_train = -y  # Negate for minimization (BO maximizes internally)
+    best_idx = torch.argmax(y_train)
+    best_x = scale_to_bounds(x_train[best_idx])
+    best_y = -y_train[best_idx]
@@
     for _i in range(max_iter):
         ...
-        best_idx = torch.argmax(y_train)
-        best_x = scale_to_bounds(x_train[best_idx])
-        best_y = -y_train[best_idx]
+        best_idx = torch.argmax(y_train)
+        best_x = scale_to_bounds(x_train[best_idx])
+        best_y = -y_train[best_idx]
```
+ best_y = -y_train[best_idx]🤖 Prompt for AI Agents
In `@src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/bayesian.py`
around lines 162 - 205, Initialize best_x and best_y from the initial design
before entering the BO loop so they exist when max_iter == 0: compute best_idx =
torch.argmax(y_train) (or use existing y_train), set best_x =
scale_to_bounds(x_train[best_idx]) and best_y = -y_train[best_idx] prior to the
for _i in range(max_iter) loop (referencing x_train, y_train, best_x, best_y in
the Bayesian optimization block).
```python
    Returns:
    -------
    fbest : float
        Best objective function value.
    xbest : ndarray
        Best solution found.
    """
```
Docstring return order doesn't match implementation.
The docstring documents the return values as (fbest, xbest), but the actual return on line 106 is (xbest, fbest). This will cause confusion for users.
🐛 Proposed fix
```diff
     Returns:
     -------
-    fbest : float
-        Best objective function value.
     xbest : ndarray
         Best solution found.
+    fbest : float
+        Best objective function value.
     """
```
📝 Committable suggestion
```python
    Returns:
    -------
    xbest : ndarray
        Best solution found.
    fbest : float
        Best objective function value.
    """
```
🤖 Prompt for AI Agents
In `@src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/cma.py` around
lines 61 - 67, The docstring in the CMA implementation documents the return
order as (fbest, xbest) but the function actually returns (xbest, fbest); update
the docstring to match the implementation by describing the returns as xbest :
ndarray (best solution found) followed by fbest : float (best objective value)
so the documented return order matches the actual tuple returned (xbest, fbest),
and ensure the description text near the return block referencing fbest/xbest is
adjusted accordingly.
```python
def test_characterizer_init(tmp_path: Path) -> None:
    """Test that Characterizer initializes correctly with given parameters."""
    test = Parameters()

    _h_0, _init_state, _obs_list, _sim_params, ref_noise_model, propagator, loss = create_instances(test, tmp_path)

    characterizer = Characterizer(
        propagator=propagator,
        init_guess=ref_noise_model,
        loss=loss,
    )

    assert isinstance(characterizer.init_guess, CompactNoiseModel)
    assert isinstance(characterizer.propagator, propagation.Propagator)
    assert isinstance(characterizer.loss, LossClass)
    assert isinstance(characterizer.init_x, np.ndarray)

    characterizer.adam_optimize(max_iter=1)

    assert isinstance(characterizer.optimal_model, CompactNoiseModel)
```
🧹 Nitpick | 🔵 Trivial
Test covers initialization and basic optimization.
The test validates that the Characterizer initializes correctly and can run a single optimization step. Consider adding assertions to verify that optimal_model has the expected structure or that the optimization actually modified parameters.
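A hedged sketch of what a stronger check could look like (attribute names follow the assertions above; whether a single Adam step must actually move the parameters is an assumption about the loss landscape):

```python
    x_before = characterizer.init_x.copy()

    characterizer.adam_optimize(max_iter=1)

    assert isinstance(characterizer.optimal_model, CompactNoiseModel)
    # The parameter vector keeps its shape even if the step size is tiny.
    assert characterizer.init_x.shape == x_before.shape
```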
🤖 Prompt for AI Agents
In `@tests/noise_char/test_characterizer.py` around lines 199 - 218, The test
currently checks initialization and that adam_optimize runs but doesn't verify
the optimization effect or model structure; update test_characterizer_init to
capture the initial model parameters from characterizer.init_guess (e.g.,
parameter arrays or attributes on CompactNoiseModel returned by
create_instances), run characterizer.adam_optimize(max_iter=1), then assert that
characterizer.optimal_model is a CompactNoiseModel and that its key parameters
differ from the initial ones (or match expected shapes/types), and also add
assertions on parameter shapes/types (e.g., numpy arrays) to confirm the
expected structure was preserved.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@tests/noise_char/test_optimization.py`:
- Around line 361-367: The expected data-line assertion uses unformatted f,
causing a mismatch with the test output which uses 6-decimal formatting; update
the expected_line to format f with six decimals and preserve spacing so it
matches the actual output (e.g. use the formatted f-string with {f:.6f} while
keeping loss.n_eval and the existing six-decimal x/grad literals), and adjust
the string accordingly in the test function so lines[1] equals that fully
6-decimal formatted string.
- Around line 243-281: Update the test docstring so the expected header labels
match the assertions: replace the mention of "x0/y0/z0" with the actual label
format used by the test ("obs_0", "obs_1", ... based on test.n_obs). Ensure the
docstring describes that the header starts with "# t" and contains labels
"obs_{i}" for i in range(test.n_obs), matching the construction of
expected_labels in the test and the behavior of Parameters/write_traj.
In `@tests/noise_char/test_propagation.py`:
- Around line 82-120: The docstring for the test helper claims the Propagator
performs the propagation (calls propagator.run(...)) but the code never invokes
run; either update the docstring to remove the claim that the Propagator has
performed a propagation, or modify the test helper to call
propagator.run(ref_noise_model) after set_observable_list(obs_list) so the
returned propagator has actually executed; locate the propagator variable and
the set_observable_list / run methods in this helper to implement the change.
♻️ Duplicate comments (4)

tests/noise_char/test_characterizer.py (2)

132-145: Docstring return tuple omits `loss`. The return annotation and actual return include 7 items, but the docstring lists 6. Please update to match.

✏️ Suggested docstring fix
```diff
-    tuple[MPO, MPS, list[Observable], AnalogSimParams, CompactNoiseModel, Propagator]
-        A 6-tuple containing:
+    tuple[MPO, MPS, list[Observable], AnalogSimParams, CompactNoiseModel, Propagator, LossClass]
+        A 7-tuple containing:
         - h_0 (MPO): Ising Hamiltonian MPO initialized with (sites, j, g).
         ...
         - propagator (Propagator): propagator configured with the Hamiltonian, noise model and initial state;
         ...
+        - loss (LossClass): Loss instance tied to the propagator and observables.
```

153-157: Example call uses the wrong argument order and misses `loss`. The example should match `create_instances(test, tmp_path)` and unpack 7 values.

✏️ Suggested example fix
```diff
-    >>> h_0, init_state, obs_list, sim_params, ref_noise_model, propagator = create_instances(tmp_path, test)
+    >>> h_0, init_state, obs_list, sim_params, ref_noise_model, propagator, loss = create_instances(test, tmp_path)
```

tests/noise_char/test_optimization.py (1)

13-13: Remove the unused `# noqa: S403`. Ruff flags this as an unused directive (RUF100).

🧹 Proposed fix
```diff
-import pickle  # noqa: S403
+import pickle
```

tests/noise_char/test_propagation.py (1)

158-166: Test name/docstring references an outdated API. Rename the test and update the docstring to reflect `Propagator.run`.

✏️ Proposed rename
```diff
-def test_propagatorwithgradients_runs() -> None:
-    """Test that `propagation.tjm_traj` executes correctly and returns expected output shapes.
+def test_propagator_runs() -> None:
+    """Test that `propagation.Propagator.run` executes correctly and returns expected output shapes.
```
```python
    The test verifies that:
    - The output file `opt_traj_{n_eval}.txt` is created in the specified work directory.
    - The header line starts with 't' and contains correct observable labels formatted as
      'x0', 'y0', 'z0', etc., matching the dimensions of the `exp_vals_traj`.
    - The number of data lines matches the number of time points.
    - Each data line contains the correct number of columns: one for time plus one for each observable/site combination.
    - The time values in the file match the original `t` array within a numerical tolerance.

    This ensures that the trajectory data is saved properly for later analysis or visualization.
    """
    # Create an instance
    test = Parameters()
    loss = create_loss_instance(tmp_path, test)

    obs_traj = np.zeros([test.n_obs, test.n_t])

    loss.n_eval = 7  # evaluation index for filename

    output_file = loss.work_dir / f"opt_traj_{loss.n_eval}.txt"

    # Call the method
    loss.write_traj(obs_traj, output_file)

    assert output_file.exists()

    # Read the file content using Path.read_text()
    file_content = output_file.read_text(encoding="utf-8")
    lines = file_content.splitlines(keepends=True)

    # The first line should be the header starting with 't'
    header = lines[0].strip()
    expected_header_prefix = "# t"
    assert header.startswith(expected_header_prefix)

    # Check header contains correct observable labels like x0, y0 (depending on n_obs_site and sites)
    # Construct expected header pattern
    expected_labels = ["obs_" + str(i) for i in range(test.n_obs)]
    for label in expected_labels:
        assert label in header
```
Align header-label docstring with expected labels.
The docstring mentions x0/y0/z0, but the assertion checks for obs_*. Please make them consistent.
✏️ Example docstring alignment
```diff
-    - The header line starts with 't' and contains correct observable labels formatted as
-      'x0', 'y0', 'z0', etc., matching the dimensions of the `exp_vals_traj`.
+    - The header line starts with 't' and contains correct observable labels formatted as
+      'obs_0', 'obs_1', 'obs_2', etc., matching the dimensions of the `exp_vals_traj`.
```
+ 'obs_0', 'obs_1', 'obs_2', etc., matching the dimensions of the `exp_vals_traj`.🤖 Prompt for AI Agents
In `@tests/noise_char/test_optimization.py` around lines 243 - 281, Update the
test docstring so the expected header labels match the assertions: replace the
mention of "x0/y0/z0" with the actual label format used by the test ("obs_0",
"obs_1", ... based on test.n_obs). Ensure the docstring describes that the
header starts with "# t" and contains labels "obs_{i}" for i in
range(test.n_obs), matching the construction of expected_labels in the test and
the behavior of Parameters/write_traj.
```python
    # First line: header
    expected_header = "# iter loss x1 x2 x3 grad_x1 grad_x2 grad_x3\n"
    assert lines[0] == expected_header

    # Second line: data line, should contain n_eval, f, x, grad formatted
    expected_line = f"{loss.n_eval} {f} 0.100000 0.200000 0.300000 0.010000 0.020000 0.030000\n"
    assert lines[1] == expected_line
```
Expected line formatting doesn’t match stated 6‑decimal formatting.
If the output is formatted to 6 decimals, the expected line should reflect that.
✏️ Suggested expectation fix
```diff
-    expected_line = f"{loss.n_eval} {f} 0.100000 0.200000 0.300000 0.010000 0.020000 0.030000\n"
+    expected_line = f"{loss.n_eval} {f:.6f} 0.100000 0.200000 0.300000 0.010000 0.020000 0.030000\n"
```
In `@tests/noise_char/test_optimization.py` around lines 361 - 367, The expected
data-line assertion uses unformatted f, causing a mismatch with the test output
which uses 6-decimal formatting; update the expected_line to format f with six
decimals and preserve spacing so it matches the actual output (e.g. use the
formatted f-string with {f:.6f} while keeping loss.n_eval and the existing
six-decimal x/grad literals), and adjust the string accordingly in the test
function so lines[1] equals that fully 6-decimal formatted string.
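The difference between the two format specs is easy to see in isolation:

```python
f = 0.123456789
print(f"{f}")      # 0.123456789 -- repr-style, variable width
print(f"{f:.6f}")  # 0.123457   -- fixed six decimals, matching the x/grad columns
```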
```python
    It is configured for an analog open quantum system simulation.

    This helper constructs an Ising Hamiltonian (MPO), a zero-filled initial MPS, a list of single-site
    Pauli observables (X, Y, Z for each site), an AnalogSimParams object with sampling enabled, and a
    CompactNoiseModel containing two noise channels ("lowering" and "pauli_z"). It then instantiates a
    propagation.Propagator using those objects, registers the observable list with the
    propagator, and runs a propagation using the reference noise model.

    Parameters
    ----------
    test : Parameters
        A parameter bundle object required to configure the system. Expected attributes:
        - sites (int): number of lattice sites (spins).
        - j (float): Ising coupling strength used to initialize the MPO Hamiltonian.
        - g (float): transverse field strength used to initialize the MPO Hamiltonian.
        - sim_time (float): total simulation elapsed time.
        - dt (float): simulation time step.
        - ntraj (int): number of stochastic trajectories to sample.
        - max_bond_dim (int): maximum MPS/MPO bond dimension for truncation.
        - threshold (float): singular-value threshold for truncation.
        - order (int): Trotter/order parameter for the simulator.
        - gamma_rel (float): strength of the "lowering" (relaxation) noise channel applied to all sites.
        - gamma_deph (float): strength of the "pauli_z" (dephasing) noise channel applied to all sites.

    Returns:
    -------
    tuple[MPO, MPS, list[Observable], AnalogSimParams, CompactNoiseModel, propagation.Propagator]
        A 6-tuple containing, in order:
        - h_0: MPO
            The initialized Ising Hamiltonian MPO for the given system parameters.
        - init_state: MPS
            The initialized many-body state (all zeros).
        - obs_list: list[Observable]
            The list of single-site Observable objects (X, Y, Z for each site).
        - sim_params: AnalogSimParams
            The simulation parameter object used to configure the propagator (with sample_timesteps=True).
        - ref_noise_model: CompactNoiseModel
            The compact noise model containing the "lowering" and "pauli_z" channels applied to all sites.
        - propagator: propagation.Propagator
            The propagator instance after calling set_observable_list(...) and run(ref_noise_model). The
            propagator therefore has performed the configured propagation at least once.
```
Docstring says propagation runs, but code doesn't call run().

Either call propagator.run(...) here or update the docstring to avoid misleading behavior.

✏️ Suggested docstring fix

```diff
-    propagation.Propagator using those objects, registers the observable list with the
-    propagator, and runs a propagation using the reference noise model.
+    propagation.Propagator using those objects, registers the observable list with the
+    propagator, and returns it without executing a propagation.
@@
-    - propagator: propagation.Propagator
-        The propagator instance after calling set_observable_list(...) and run(ref_noise_model). The
-        propagator therefore has performed the configured propagation at least once.
+    - propagator: propagation.Propagator
+        The propagator instance after calling set_observable_list(...). The propagation
+        is executed by the caller when needed.
```

🤖 Prompt for AI Agents

In `@tests/noise_char/test_propagation.py` around lines 82-120: the docstring for the test helper claims the Propagator performs the propagation (calls propagator.run(...)), but the code never invokes run. Either update the docstring to remove that claim, or modify the helper to call propagator.run(ref_noise_model) after set_observable_list(obs_list) so the returned propagator has actually executed a propagation.
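If the helper is instead meant to keep its current docstring, the fix is the other branch of the prompt; a fragment sketch, assuming the Propagator API introduced in this PR:

```python
# Inside the helper, after constructing `propagator` (sketch, not the PR's code):
propagator.set_observable_list(obs_list)
propagator.run(ref_noise_model)  # makes the "has propagated at least once" claim true
return h_0, init_state, obs_list, sim_params, ref_noise_model, propagator
```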
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@pyproject.toml`:
- Line 53: The dependency entry "pytorch>=2.5.1" is incorrect; replace the package name with the correct PyPI package "torch" (i.e., change "pytorch>=2.5.1" to "torch>=2.5.1") so pip can find and install the PyTorch wheel, and ensure no other references use the wrong package name.
♻️ Duplicate comments (1)
pyproject.toml (1)
52-55: Consider making these optional dependencies.

As previously noted, these heavy dependencies (torch ~2 GB+, botorch, gpytorch, cma) are only needed for the noise-characterization optimization features. Making them optional would reduce the install footprint for users who don't need these features and avoid potential Python 3.14 compatibility issues with botorch/gpytorch.
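If the maintainers do split these out, a standard-library feature check keeps the core import path light; a sketch of one option, not code from the PR:

```python
from importlib import util

# Feature-detect the heavy optional stack instead of importing it eagerly.
HAS_BAYESIAN_DEPS = all(
    util.find_spec(name) is not None for name in ("torch", "botorch", "gpytorch")
)

if not HAS_BAYESIAN_DEPS:
    print("Bayesian optimization extras are not installed; skipping that optimizer.")
```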
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@pyproject.toml`:
- Line 119: Update the mypy ignored-modules list so that submodules of PyTorch are also ignored: in the module = [...] entry that currently contains "torch", add "torch.*" (or replace "torch" with both "torch" and "torch.*") so imports like torch.nn or torch.optim don't trigger mypy errors.
♻️ Duplicate comments (1)
pyproject.toml (1)
52-56: Consider keeping CMA/torch/botorch/gpytorch as optional extras.

Lines 52-56 add heavy optimization libraries to the core dependencies. This repeats a concern raised in an earlier review: these are only needed for the new optimization paths and significantly increase the baseline install size. Please confirm whether they should live under an optional extra with guarded imports instead.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
pyproject.toml (1)
17-34: Align classifiers with the new Python support window.

`requires-python = ">=3.10,<3.13"` conflicts with the 3.13/3.14 classifiers. This can mislead users and indexers about supported versions.

🧹 Suggested fix

```diff
-    "Programming Language :: Python :: 3.13",
-    "Programming Language :: Python :: 3.14",
```
♻️ Duplicate comments (1)
pyproject.toml (1)
52-56: Consider upgrading torch 2.4.0 to 2.4.1+ for Python 3.12 wheel coverage.

Most dependencies are compatible with Python 3.10+: botorch 0.16.0, gpytorch 1.14.2, and cma 4.0.0 all support this range. However, torch 2.4.0 has been reported to be missing pip wheels for Python 3.12 on some platforms; use 2.4.1+ where wheels are complete, or document the limitation if staying on 2.4.0. The torch 2.1.0 pin for Python <3.12 is available from PyTorch's official index.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@pyproject.toml`:
- Around lines 50-57: Align the Python support by either tightening pyproject's requires-python to ">=3.10,<3.13" (so the dependency markers for "torch", "botorch", and "gpytorch" remain valid) or making the optimization dependencies optional: in src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/bayesian.py, guard the module-level imports of torch, botorch, and gpytorch with try/except ImportError and raise a clear, descriptive error only when the Bayesian functions/classes that use these libraries are invoked; reference the requires-python entry in pyproject.toml and the import sites in bayesian.py to locate and apply the change.
♻️ Duplicate comments (1)
pyproject.toml (1)
121: Include `torch.*` in mypy ignores.

Submodules like `torch.nn` aren't covered by `torch` alone. Consider adding `torch.*` so mypy doesn't flag those imports.

🔧 Suggested tweak

```diff
-module = ["qiskit.*", "qiskit_aer.*", "opt_einsum.*", "botorch.*", "gpytorch.*", "torch", "cma"]
+module = ["qiskit.*", "qiskit_aer.*", "opt_einsum.*", "botorch.*", "gpytorch.*", "torch", "torch.*", "cma"]
```
| "cma>=4.0.0", | ||
|
|
||
| "torch>=2.1.0; python_version < '3.12'", | ||
| "torch>=2.4.0; python_version >= '3.12' and python_version < '3.13'", | ||
|
|
||
|
|
||
| "botorch>=0.16.0; python_version < '3.13'", | ||
| "gpytorch>=1.14.2; python_version < '3.13'", |
🧩 Analysis chain

🏁 Script executed:

```shell
# First, find the pyproject.toml file in the repository
find . -name "pyproject.toml" -type f | head -5
```

🏁 Script executed:

```shell
# Read the pyproject.toml file to verify the dependency markers and requires-python
cat -n ./pyproject.toml
```

🏁 Script executed:

```shell
# Check if torch, botorch, gpytorch are imported anywhere in the source code
find src -name "*.py" -type f | head -20
```

🏁 Script executed:

```shell
# Search for imports of torch, botorch, and gpytorch
rg "import torch|from torch|import botorch|from botorch|import gpytorch|from gpytorch" src/ --type py
```

🏁 Script executed:

```shell
# Check the noise_char module to see if torch/botorch/gpytorch imports are guarded
cat -n src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/bayesian.py | head -50
```

🏁 Script executed:

```shell
# Check if there's a guard around the imports in the noise_char module's __init__.py
find src/mqt/yaqs/noise_char -name "__init__.py" | xargs cat -n
```

🏁 Script executed:

```shell
# Find the actual noise_char __init__.py and check imports
find src/mqt/yaqs/noise_char -type f -name "*.py" | xargs ls -la
```

🏁 Script executed:

```shell
# Check the noise_char __init__.py to see what's exposed
cat -n src/mqt/yaqs/noise_char/__init__.py
```

🏁 Script executed:

```shell
# Check the characterizer.py to see if it imports bayesian directly
cat -n src/mqt/yaqs/noise_char/characterizer.py | head -80
```

🏁 Script executed:

```shell
# Check if the imports in characterizer.py are guarded, and check where bayesian_opt is called
cat -n src/mqt/yaqs/noise_char/characterizer.py | grep -A 5 -B 5 "bayesian_opt"
```

🏁 Script executed:

```shell
# Check if there are any conditional/try-except imports around torch/botorch/gpytorch
rg "try:|except|ImportError" src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/bayesian.py -B 2 -A 2
```

🏁 Script executed:

```shell
# Verify that the imports at the top of bayesian.py are not guarded
head -35 src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/bayesian.py
```
Align Python version support: either cap requires-python to <3.13 or add conditional imports for optimization dependencies.
torch, botorch, and gpytorch are excluded for Python ≥ 3.13 (via dependency markers), but the classifiers only list 3.10-3.12 while `requires-python = ">=3.10"` technically permits 3.13+. Additionally, these libraries are imported unguarded at module level in src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/bayesian.py, so any user installing on Python 3.13+ will encounter a ModuleNotFoundError when accessing Bayesian optimization.

Either:

- Cap `requires-python = ">=3.10,<3.13"` to match the classifiers and declared support, or
- Wrap the torch/botorch/gpytorch imports in try/except blocks and handle Python 3.13+ gracefully (see the sketch below).
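A minimal sketch of that guarded-import pattern, assuming `bayesian_opt` is the public entry point in bayesian.py:

```python
try:
    import botorch  # noqa: F401
    import gpytorch  # noqa: F401
    import torch
except ImportError:  # e.g. Python >= 3.13, where the extras are excluded
    torch = None


def bayesian_opt(*args: object, **kwargs: object) -> None:
    """Fail with a descriptive message instead of a bare ModuleNotFoundError."""
    if torch is None:
        msg = "Bayesian optimization requires torch/botorch/gpytorch (Python < 3.13)."
        raise ImportError(msg)
    # ... the actual BoTorch-based optimization would run here ...
```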
Description
Noise-characterization modules for optimizing the decay rates in the Lindblad equation. The following gradient-free optimization methods are available: `mcmc`, `cma`, and `bayesian`.
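A sketch of how the optimizers are driven; the import path and method names follow the modules discussed in this review, but the exact constructor signature is an assumption:

```python
from mqt.yaqs.noise_char.characterizer import Characterizer

# `propagator`, `x0`, and `loss` are assumed to be prepared as in the example docs.
characterizer = Characterizer(propagator=propagator, init_guess=x0, loss=loss)

characterizer.cma_optimize(max_iterations=50)         # CMA-ES
# characterizer.mcmc_optimize(max_iterations=50)      # Markov-chain Monte Carlo
# characterizer.bayesian_optimize(max_iterations=50)  # Bayesian optimization
```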
Checklist: