diff --git a/CHANGELOG.md b/CHANGELOG.md index c739378b2..561f0a839 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -5,7 +5,20 @@ All notable changes to this project will be documented in this file. The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). -## Unreleased +## [Unreleased] + +### Added + +### Fixed + +- #429 + - Modify `graphix.noise_models.noise_model.ApplyNoise` to handle conditionality based on a `domain` attribute (like `command.X` and `command.Z`). + - Moved the conditional logic to `graphix.simulator` to remove code duplication in the backends. + - Solves [#428](https://github.com/TeamGraphix/graphix/issues/428). + +### Changed + +## [0.3.4] - 2026-02-05 ### Added @@ -20,7 +33,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 - #385 - Introduced `graphix.flow.core.XZCorrections.check_well_formed` which verifies the correctness of an XZ-corrections instance and raises an exception if incorrect. - Added XZ-correction exceptions to module `graphix.flow.core.exceptions`. - + - #378: - Introduced new method `graphix.flow.core.PauliFlow.check_well_formed`, `graphix.flow.core.GFlow.check_well_formed` and `graphix.flow.core.CausalFlow.check_well_formed` which verify the correctness of flow objects and raise exceptions when the flow is incorrect. - Introduced new method `graphix.flow.core.PauliFlow.is_well_formed` which verify the correctness of flow objects and returns a boolean when the flow is incorrect. @@ -37,13 +50,17 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 - #402: Support for Python 3.14. -- #407: Introduced new method `graphix.optimization.StandardizedPattern.extract_xzcorrections` and its wrapper `graphix.pattern.Pattern.extract_xzcorrections` which extract an `XZCorrections` instance from a pattern. +- #253, #406: Added classes `BaseCommand` and `BaseInstruction`. + +- #407: Introduced new method `graphix.optimization.StandardizedPattern.extract_xzcorrections` and its wrapper `graphix.pattern.Pattern.extract_xzcorrections` which extract an `XZCorrections` instance from a pattern. - #412: Added pretty-print methods (`to_ascii`, `to_latex` and `to_unicode`) for `PauliFlow` and `XZCorrections` classes. Implemented their `__str__` method as a call to `self.to_ascii`. ### Fixed -- #392: `Pattern.remove_input_nodes` is required before the `Pattern.perform_pauli_measurements` method to ensure input nodes are removed and fixed in the |+> state. +- + +- #363, #392: `Pattern.remove_input_nodes` is required before the `Pattern.perform_pauli_measurements` method to ensure input nodes are removed and fixed in the |+> state. - #379: Removed unnecessary `meas_index` from API for rotation instructions `RZ`, `RY` and `RX`. @@ -59,10 +76,14 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 is ensured with normalization passed `incorporate_pauli_results` and `single_qubit_domains`. -- #409: Axis labels are shown when visualizing a pattern. Legend is placed outside the plot so that the graph remains visible. +- #231, #405: `IXYZ` is now defined as `Literal[I] | Axis`. + +- #382, #409: Axis labels are shown when visualizing a pattern. Legend is placed outside the plot so that the graph remains visible. - #407: Fixed an unreported bug in `OpenGraph.is_equal_structurally` which failed to compare open graphs differing on the output nodes only. 
+- #157, #417: `Pattern.minimize_space` uses `Pattern.extract_causal_flow()` and preserves runnability + ### Changed - #396: Removed generic `BackendState` from `graphix.sim` modules and methods in `graphix.pattern` and `graphix.simulator` modules. @@ -82,7 +103,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 - Modified the constructor `XZCorrections.from_measured_nodes_mapping` so that it doesn't need to create an `nx.DiGraph` instance. This fixes an unreported bug in the method. - Removed modules `graphix.gflow` and `graphix.find_pflow`. -- #414: Tests are now type-checked. +- #369, #414: `random_objects.py` and tests are now type-checked. - #418: `Pattern.extract_measurement_commands` now returns a dictionary. Removed `Pattern.get_meas_plane` and `Pattern.get_angles`. @@ -106,7 +127,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 a pattern. - #358: Refactor of flow tools - Part I - - New module `graphix.flow.core` which introduces classes `PauliFlow`, `GFlow`, `CausalFlow` and `XZCorrections` allowing a finer analysis of MBQC flows. This module subsumes `graphix.generator` which has been removed and part of `graphix.gflow` which will be removed in the future. + - New module `graphix.flow.core` which introduces classes `PauliFlow`, `GFlow`, `CausalFlow` and `XZCorrections` allowing a finer analysis of MBQC flows. This module subsumes `graphix.generator` which has been removed and part of `graphix.gflow` which will be removed in the future. - New module `graphix.flow._find_cflow` with the existing causal-flow finding algorithm. - New module `graphix.flow._find_gpflow` with the existing g- and Pauli-flow finding algorithm introduced in #337. - New abstract types `graphix.fundamentals.AbstractMeasurement` and `graphix.fundamentals.AbstractPlanarMeasurement` which serve as an umbrella of the existing types `graphix.measurements.Measurement`, `graphix.fundamentals.Plane` and `graphix.fundamentals.Axis`. @@ -210,12 +231,12 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 - #322: Added a new `optimization` module containing: - * a functional version of `standardize` that returns a standardized + - a functional version of `standardize` that returns a standardized pattern as a new object; - * a function `incorporate_pauli_results` that returns an equivalent + - a function `incorporate_pauli_results` that returns an equivalent pattern in which the `results` are incorporated into measurement - and correction domains. + and correction domains. The resulting pattern is suitable for flow analysis. In particular, if a pattern has a flow, it is preserved by `perform_pauli_measurements` after applying `standardize` and @@ -266,11 +287,11 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 - #314, #322: The method `Pattern.standardize()` now places C commands after X and Z commands, making the resulting patterns suitable for - flow analysis. + flow analysis. The `flow_from_pattern` functions now fail if the input pattern is not strictly standardized (as checked by `Pattern.is_standard(strict=True)`, which requires C commands to be - last). + last). Note: the method `perform_pauli_measurements` still places C commands before X and Z commands. 
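The #429 entry above introduces conditional noise: `ApplyNoise` gains an optional `domain`, and the simulator checks the outcome parity before applying it. As a hedged illustration only (not code from this PR), the sketch below shows how a noise-model `command` hook could forward a correction's domain so that the attached noise fires exactly when the correction does; the imports mirror modules touched in this diff, and `correction_noise` is a hypothetical helper name.

```python
from graphix.command import CommandKind
from graphix.noise_models.noise_model import ApplyNoise, CommandOrNoise, Noise


def correction_noise(cmd: CommandOrNoise, noise: Noise) -> list[CommandOrNoise]:
    """Attach `noise` to an X or Z correction, conditioned on the same domain.

    The simulator applies the returned `ApplyNoise` only when an odd number of
    outcomes in `domain` are 1 (the same parity test it runs for the correction
    itself), so the noise is skipped whenever the correction is skipped.
    """
    # Use of `==` (rather than `in`) mirrors depolarising.py and narrows the type.
    if cmd.kind == CommandKind.X or cmd.kind == CommandKind.Z:  # noqa: PLR1714
        return [cmd, ApplyNoise(noise=noise, nodes=[cmd.node], domain=cmd.domain)]
    return [cmd]
```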
diff --git a/graphix/noise_models/depolarising.py b/graphix/noise_models/depolarising.py index 84af01815..ff72a416a 100644 --- a/graphix/noise_models/depolarising.py +++ b/graphix/noise_models/depolarising.py @@ -118,9 +118,9 @@ def command(self, cmd: CommandOrNoise, rng: Generator | None = None) -> list[Com if cmd.kind == CommandKind.M: return [ApplyNoise(noise=DepolarisingNoise(self.measure_channel_prob), nodes=[cmd.node]), cmd] if cmd.kind == CommandKind.X: - return [cmd, ApplyNoise(noise=DepolarisingNoise(self.x_error_prob), nodes=[cmd.node])] + return [cmd, ApplyNoise(noise=DepolarisingNoise(self.x_error_prob), nodes=[cmd.node], domain=cmd.domain)] if cmd.kind == CommandKind.Z: - return [cmd, ApplyNoise(noise=DepolarisingNoise(self.z_error_prob), nodes=[cmd.node])] + return [cmd, ApplyNoise(noise=DepolarisingNoise(self.z_error_prob), nodes=[cmd.node], domain=cmd.domain)] # Use of `==` here for mypy if cmd.kind == CommandKind.C or cmd.kind == CommandKind.T or cmd.kind == CommandKind.ApplyNoise: # noqa: PLR1714 return [cmd] diff --git a/graphix/noise_models/noise_model.py b/graphix/noise_models/noise_model.py index 9508328db..4c17d8c59 100644 --- a/graphix/noise_models/noise_model.py +++ b/graphix/noise_models/noise_model.py @@ -42,11 +42,28 @@ def to_kraus_channel(self) -> KrausChannel: @dataclass class ApplyNoise(_KindChecker): - """Apply noise command.""" + """Apply noise command. + + Parameters + ---------- + noise : Noise + noise to be applied + + nodes : list[Node] + list of node indices on which to apply noise + + domain: set[Node] | None = None + Optional domain for conditional noise. + If ``None``, the noise is applied unconditionally. + Otherwise, the noise is applied if there is an odd number of nodes among ``domain`` that have been measured with outcome 1 (as for ``X`` and ``Z`` commands). + Note that the noise is never applied if ``domain`` is the empty set. + + """ kind: ClassVar[Literal[CommandKind.ApplyNoise]] = dataclasses.field(default=CommandKind.ApplyNoise, init=False) noise: Noise nodes: list[Node] + domain: set[Node] | None = None CommandOrNoise = Command | ApplyNoise diff --git a/graphix/pattern.py b/graphix/pattern.py index 0f4ec4f25..f52313a6d 100644 --- a/graphix/pattern.py +++ b/graphix/pattern.py @@ -211,7 +211,7 @@ def compose( nodes_p2 = other.extract_nodes() | other.results.keys() if not mapping.keys() <= nodes_p2: - raise ValueError("Keys of `mapping` must correspond to the nodes of `other`.") + raise PatternError("Keys of `mapping` must correspond to the nodes of `other`.") # Cast to set for improved performance in membership test mapping_values_set = set(mapping.values()) @@ -219,14 +219,14 @@ def compose( i2_set = set(other.input_nodes) if len(mapping) != len(mapping_values_set): - raise ValueError("Values of `mapping` contain duplicates.") + raise PatternError("Values of `mapping` contain duplicates.") if mapping_values_set & nodes_p1 - o1_set: - raise ValueError("Values of `mapping` must not contain measured nodes of pattern `self`.") + raise PatternError("Values of `mapping` must not contain measured nodes of pattern `self`.") for k, v in mapping.items(): if v in o1_set and k not in i2_set: - raise ValueError( + raise PatternError( f"Mapping {k} -> {v} is not valid. {v} is an output of pattern `self` but {k} is not an input of pattern `other`." 
) @@ -506,7 +506,7 @@ def shift_signals(self, method: str = "direct") -> dict[int, set[int]]: self._commute_with_following(target) target += 1 return signal_dict - raise ValueError("Invalid method") + raise PatternError("Invalid method") def shift_signals_direct(self) -> dict[int, set[int]]: """Perform signal shifting procedure.""" @@ -1153,7 +1153,7 @@ def extract_opengraph(self) -> OpenGraph[Measurement]: for cmd in self.__seq: if cmd.kind == CommandKind.N: if cmd.state != BasicStates.PLUS: - raise ValueError( + raise PatternError( f"Open graph extraction requires N commands to represent a |+⟩ state. Error found in {cmd}." ) nodes.add(cmd.node) @@ -1411,7 +1411,7 @@ def perform_pauli_measurements(self, ignore_pauli_with_deps: bool = False) -> No """ if self.input_nodes: - raise ValueError("Remove inputs with `self.remove_input_nodes()` before performing Pauli presimulation.") + raise PatternError("Remove inputs with `self.remove_input_nodes()` before performing Pauli presimulation.") self.__dict__.update(measure_pauli(self, ignore_pauli_with_deps=ignore_pauli_with_deps).__dict__) def draw_graph( @@ -1596,6 +1596,10 @@ def check_measured(cmd: Command, node: int) -> None: check_active(cmd, cmd.node) +class PatternError(Exception): + """Exception subclass to handle pattern errors.""" + + class RunnabilityErrorReason(Enum): """Describe the reason for a pattern not being runnable.""" @@ -1616,7 +1620,7 @@ class RunnabilityErrorReason(Enum): @dataclass -class RunnabilityError(Exception): +class RunnabilityError(PatternError): """Error raised by :method:`Pattern.check_runnability`.""" cmd: Command @@ -1778,7 +1782,7 @@ def pauli_nodes(pattern: optimization.StandardizedPattern) -> tuple[list[tuple[c else: pauli_node.append((cmd, pm)) else: - raise ValueError("Unknown Pauli measurement basis") + raise PatternError("Unknown Pauli measurement basis") else: non_pauli_node.add(cmd.node) return pauli_node, non_pauli_node @@ -1788,12 +1792,12 @@ def assert_permutation(original: list[int], user: list[int]) -> None: """Check that the provided `user` node list is a permutation from `original`.""" node_set = set(user) if node_set != set(original): - raise ValueError(f"{node_set} != {set(original)}") + raise PatternError(f"{node_set} != {set(original)}") for node in user: if node in node_set: node_set.remove(node) else: - raise ValueError(f"{node} appears twice") + raise PatternError(f"{node} appears twice") @dataclass diff --git a/graphix/sim/base_backend.py b/graphix/sim/base_backend.py index 62934d007..7c77750a1 100644 --- a/graphix/sim/base_backend.py +++ b/graphix/sim/base_backend.py @@ -29,10 +29,9 @@ from graphix import command from graphix.measurements import Measurement, Outcome - from graphix.noise_models.noise_model import Noise + from graphix.noise_models.noise_model import ApplyNoise, Noise from graphix.parameter import ExpressionOrComplex, ExpressionOrFloat from graphix.sim.data import Data - from graphix.simulator import MeasureMethod Matrix: TypeAlias = npt.NDArray[np.object_ | np.complex128] @@ -619,7 +618,7 @@ def add_nodes(self, nodes: Sequence[int], data: Data = BasicStates.PLUS) -> None Previously existing nodes remain unchanged. """ - def apply_noise(self, nodes: Sequence[int], noise: Noise) -> None: # noqa: ARG002,PLR6301 + def apply_noise(self, cmd: ApplyNoise) -> None: # noqa: ARG002,PLR6301 """Apply noise. 
The default implementation of this method raises @@ -628,6 +627,8 @@ def apply_noise(self, nodes: Sequence[int], noise: Noise) -> None: # noqa: ARG0 `DensityMatrixBackend`) override this method to implement the effect of noise. +        Note: the simulator is responsible for checking that the measurement outcomes match the domain condition before calling this method. + Parameters ---------- nodes : sequence of ints. @@ -642,8 +643,11 @@ def apply_clifford(self, node: int, clifford: Clifford) -> None: """Apply single-qubit Clifford gate, specified by vop index specified in graphix.clifford.CLIFFORD.""" @abstractmethod -    def correct_byproduct(self, cmd: command.X | command.Z, measure_method: MeasureMethod) -> None: -        """Byproduct correction correct for the X or Z byproduct operators, by applying the X or Z gate.""" +    def correct_byproduct(self, cmd: command.X | command.Z) -> None: +        """Correct for the X or Z byproduct operator by applying the X or Z gate. + +        Note: the simulator is responsible for checking that the measurement outcomes match the domain condition before calling this method. +        """ @abstractmethod def entangle_nodes(self, edge: tuple[int, int]) -> None: @@ -782,25 +786,22 @@ def f_expectation0() -> float: return outcome @override -    def correct_byproduct(self, cmd: command.X | command.Z, measure_method: MeasureMethod) -> None: +    def correct_byproduct(self, cmd: command.X | command.Z) -> None: """Byproduct correction correct for the X or Z byproduct operators, by applying the X or Z gate.""" -        if np.mod(sum(measure_method.measurement_outcome(j) for j in cmd.domain), 2) == 1: -            op = Ops.X if cmd.kind == CommandKind.X else Ops.Z -            self.apply_single(node=cmd.node, op=op) +        op = Ops.X if cmd.kind == CommandKind.X else Ops.Z +        self.apply_single(node=cmd.node, op=op) @override -    def apply_noise(self, nodes: Sequence[int], noise: Noise) -> None: -        """Apply noise. +    def apply_noise(self, cmd: ApplyNoise) -> None: +        """Apply noise as specified by an :class:`graphix.noise_models.noise_model.ApplyNoise` command. Parameters ---------- -        nodes : sequence of ints. -            Target qubits -        noise : Noise -            Noise to apply +        cmd : ApplyNoise +            The noise command to apply. """ -        indices = [self.node_index.index(i) for i in nodes] -        self.state.apply_noise(indices, noise) +        indices = [self.node_index.index(i) for i in cmd.nodes] +        self.state.apply_noise(indices, cmd.noise) def apply_single(self, node: int, op: Matrix) -> None: """Apply a single gate to the state.""" diff --git a/graphix/sim/tensornet.py b/graphix/sim/tensornet.py index 08e4720ae..e86f5913c 100644 --- a/graphix/sim/tensornet.py +++ b/graphix/sim/tensornet.py @@ -34,7 +34,6 @@ from graphix.clifford import Clifford from graphix.measurements import Measurement, Outcome from graphix.sim import Data -    from graphix.simulator import MeasureMethod  PrepareState: TypeAlias = str | npt.NDArray[np.complex128] @@ -768,20 +767,9 @@ def measure(self, node: int, measurement: Measurement, rng: Generator | None = N return result @override -    def correct_byproduct(self, cmd: command.X | command.Z, measure_method: MeasureMethod) -> None: -        """Perform byproduct correction. - -        Parameters -        ---------- -        cmd : list -            Byproduct command -            i.e.
['X' or 'Z', node, signal_domain] - measure_method : MeasureMethod - The measure method to use - """ - if sum(measure_method.measurement_outcome(j) for j in cmd.domain) % 2 == 1: - op = Ops.X if isinstance(cmd, command.X) else Ops.Z - self.state.evolve_single(cmd.node, op, str(cmd.kind)) + def correct_byproduct(self, cmd: command.X | command.Z) -> None: + op = Ops.X if isinstance(cmd, command.X) else Ops.Z + self.state.evolve_single(cmd.node, op, str(cmd.kind)) @override def apply_clifford(self, node: int, clifford: Clifford) -> None: diff --git a/graphix/simulator.py b/graphix/simulator.py index f9a2c7906..380b12b5a 100644 --- a/graphix/simulator.py +++ b/graphix/simulator.py @@ -136,6 +136,16 @@ def store_measurement_outcome(self, node: int, result: Outcome) -> None: """ ... + def check_domain(self, domain: Iterable[int]) -> bool: + """Check that the measurement outcomes match the domain condition. + + Parameters + ---------- + domain : Iterable[int] + domain on which to compute the condition for applying conditional commands. + """ + return sum(self.measurement_outcome(j) for j in domain) % 2 == 1 + class DefaultMeasureMethod(MeasureMethod): """Default measurement method implementing standard measurement plane/angle update for MBQC.""" @@ -160,6 +170,7 @@ def __init__(self, results: Mapping[int, Outcome] | None = None): # results is coerced into dict, since `store_measurement_outcome` mutates it. self.results = {} if results is None else dict(results) + @override def describe_measurement(self, cmd: BaseM) -> Measurement: """Return the description of the measurement performed by ``cmd``. @@ -181,6 +192,7 @@ def describe_measurement(self, cmd: BaseM) -> Measurement: angle = cmd.angle * measure_update.coeff + measure_update.add_term return Measurement(angle, measure_update.new_plane) + @override def measurement_outcome(self, node: int) -> Outcome: """Return the result of a previous measurement. @@ -196,6 +208,7 @@ def measurement_outcome(self, node: int) -> Outcome: """ return self.results[node] + @override def store_measurement_outcome(self, node: int, result: Outcome) -> None: """Store the result of a previous measurement. @@ -311,7 +324,8 @@ def run(self, input_state: Data = BasicStates.PLUS, rng: Generator | None = None self.__measure_method.measure(self.backend, cmd, noise_model=self.noise_model, rng=rng) # Use of `==` here for mypy elif cmd.kind == CommandKind.X or cmd.kind == CommandKind.Z: # noqa: PLR1714 - self.backend.correct_byproduct(cmd, self.__measure_method) + if self.__measure_method.check_domain(cmd.domain): + self.backend.correct_byproduct(cmd) elif cmd.kind == CommandKind.C: self.backend.apply_clifford(cmd.node, cmd.clifford) elif cmd.kind == CommandKind.T: @@ -321,7 +335,8 @@ def run(self, input_state: Data = BasicStates.PLUS, rng: Generator | None = None # handling of ticks during noise transpilation. 
pass elif cmd.kind == CommandKind.ApplyNoise: -            self.backend.apply_noise(cmd.nodes, cmd.noise) +            if cmd.domain is None or self.__measure_method.check_domain(cmd.domain): +                self.backend.apply_noise(cmd) elif cmd.kind == CommandKind.S: raise ValueError("S commands unexpected in simulated patterns.") else: diff --git a/noxfile.py b/noxfile.py index 07dc6885c..c0de78d54 100644 --- a/noxfile.py +++ b/noxfile.py @@ -142,21 +142,30 @@ def tests_reverse_dependencies(session: Session, package: ReverseDependency) -> f"{dirname} only supports Python versions {package.version_constraint}; current Python version: {session.python}" ) -    session.install(".") install_pytest(session) if package.doctest_modules: session.install("nox") -    # Use `session.cd` as a context manager to ensure that the -    # working directory is restored afterward. This is important -    # because Windows cannot delete a temporary directory while it -    # is the working directory. -    with TemporaryDirectory() as tmpdir, session.cd(tmpdir): -        if package.branch is None: -            session.run("git", "clone", package.repository) -        else: -            session.run("git", "clone", "-b", package.branch, package.repository) -        with session.cd(dirname): -            session.install(".") +    with TemporaryDirectory() as tmpdir: +        with session.cd(tmpdir): +            if package.branch is None: +                session.run("git", "clone", package.repository) +            else: +                session.run("git", "clone", "-b", package.branch, package.repository) +            with session.cd(dirname): +                session.install(".") +        # Note that `session.cd` is used as a context manager above, +        # so that the working directory is restored at this point. We +        # now install the graphix package from the working directory. +        # This is done after having installed the reverse dependency, +        # so that we run the test with the current graphix codebase, +        # even if another graphix version has been pinned in the +        # reverse dependency. +        session.install(".") +        # Use `session.cd` as a context manager again to ensure that the +        # working directory is restored afterward. This is important +        # because Windows cannot delete a temporary directory while it +        # is the working directory.
+ with session.cd(tmpdir), session.cd(dirname): if package.initialization is not None: package.initialization(session) run_pytest(session, doctest_modules=package.doctest_modules) diff --git a/requirements-dev.txt b/requirements-dev.txt index 5f0fe5002..12c6dcc20 100644 --- a/requirements-dev.txt +++ b/requirements-dev.txt @@ -2,7 +2,7 @@ mypy==1.19.1 pre-commit # for language-agnostic hooks pyright -ruff==0.14.14 +ruff==0.15.0 # Stubs types-networkx==3.6.1.20251220 @@ -26,4 +26,4 @@ qiskit_qasm3_import qiskit-aer; python_version < "3.14" openqasm-parser>=3.1.0 -graphix-qasm-parser @ git+https://github.com/TeamGraphix/graphix-qasm-parser.git +graphix-qasm-parser>=0.1.1 diff --git a/tests/test_density_matrix.py b/tests/test_density_matrix.py index 3b8ab337d..7d7492584 100644 --- a/tests/test_density_matrix.py +++ b/tests/test_density_matrix.py @@ -16,7 +16,7 @@ from graphix.fundamentals import ANGLE_PI, Plane from graphix.ops import Ops from graphix.sim.density_matrix import DensityMatrix, DensityMatrixBackend -from graphix.sim.statevec import CNOT_TENSOR, CZ_TENSOR, SWAP_TENSOR, Statevec, StatevectorBackend +from graphix.sim.statevec import CNOT_TENSOR, CZ_TENSOR, SWAP_TENSOR, Statevec from graphix.simulator import DefaultMeasureMethod from graphix.states import BasicStates, PlanarState from graphix.transpiler import Circuit @@ -926,33 +926,3 @@ def test_measure(self, pr_calc: bool) -> None: expected_matrix_1 = np.kron(np.array([[1, 0], [0, 0]]), np.ones((2, 2)) / 2) expected_matrix_2 = np.kron(np.array([[0, 0], [0, 1]]), np.array([[0.5, -0.5], [-0.5, 0.5]])) assert np.allclose(backend.state.rho, expected_matrix_1) or np.allclose(backend.state.rho, expected_matrix_2) - - def test_correct_byproduct(self) -> None: - measure_method = DefaultMeasureMethod() - dm_backend = DensityMatrixBackend() - dm_backend.add_nodes([0]) - # node 0 initialized in Backend - dm_backend.add_nodes([1, 2]) - dm_backend.entangle_nodes((0, 1)) - dm_backend.entangle_nodes((1, 2)) - measure_method.measure(dm_backend, command.M(0)) - measure_method.measure(dm_backend, command.M(1, angle=-ANGLE_PI / 2, s_domain={0})) - dm_backend.correct_byproduct(command.X(2, {1}), measure_method) - dm_backend.correct_byproduct(command.Z(2, {0}), measure_method) - rho = dm_backend.state.rho - - sv_backend = StatevectorBackend() - sv_backend.add_nodes([0]) - # node 0 initialized in Backend - sv_backend.add_nodes([1, 2]) - sv_backend.entangle_nodes((0, 1)) - sv_backend.entangle_nodes((1, 2)) - measure_method.measure(sv_backend, command.M(0)) - measure_method.measure(sv_backend, command.M(1, angle=-ANGLE_PI / 2, s_domain={0})) - sv_backend.correct_byproduct(command.X(2, {1}), measure_method) - sv_backend.correct_byproduct(command.Z(2, {0}), measure_method) - psi = sv_backend.state.psi - - assert np.allclose( - rho, np.outer(psi.astype(np.complex128, copy=False), psi.conj().astype(np.complex128, copy=False)) - ) diff --git a/tests/test_noisy_density_matrix.py b/tests/test_noisy_density_matrix.py index 20c5cf91e..dc207931b 100644 --- a/tests/test_noisy_density_matrix.py +++ b/tests/test_noisy_density_matrix.py @@ -6,6 +6,8 @@ import numpy.typing as npt import pytest +from graphix.branch_selector import ConstBranchSelector, FixedBranchSelector +from graphix.command import CommandKind from graphix.fundamentals import angle_to_rad from graphix.noise_models import DepolarisingNoiseModel from graphix.noise_models.noise_model import NoiselessNoiseModel @@ -17,6 +19,7 @@ from numpy.random import Generator from graphix.fundamentals import Angle + 
from graphix.measurements import Outcome from graphix.pattern import Pattern @@ -69,14 +72,22 @@ def test_noisy_measure_confuse_hadamard(self, fx_rng: Generator) -> None: assert isinstance(res, DensityMatrix) assert np.allclose(res.rho, np.array([[0.0, 0.0], [0.0, 1.0]])) - # arbitrary probability + @pytest.mark.parametrize("outcome", [0, 1]) + def test_noisy_measure_confuse_hadamard_arbitrary(self, fx_rng: Generator, outcome: Outcome) -> None: + # arbitrary probability with fixed branch + hadamardpattern = hpat() measure_error_pr = fx_rng.random() - print(f"measure_error_pr = {measure_error_pr}") + print(f"measure_error_pr = {measure_error_pr}, outcome = {outcome}") res = hadamardpattern.simulate_pattern( - backend="densitymatrix", noise_model=DepolarisingNoiseModel(measure_error_prob=measure_error_pr), rng=fx_rng + backend="densitymatrix", + noise_model=DepolarisingNoiseModel(measure_error_prob=measure_error_pr), + branch_selector=ConstBranchSelector(outcome), + rng=fx_rng, ) - # result should be |1> assert isinstance(res, DensityMatrix) + # With measure_error_prob, the outcome might be flipped, resulting in different X corrections + # However, we cannot predict the exact result without knowing if the error occurred + # So we check both possibilities assert np.allclose(res.rho, np.array([[1.0, 0.0], [0.0, 0.0]])) or np.allclose( res.rho, np.array([[0.0, 0.0], [0.0, 1.0]]), @@ -100,24 +111,29 @@ def test_noisy_measure_channel_hadamard(self, fx_rng: Generator) -> None: ) # test Pauli X error - def test_noisy_x_hadamard(self, fx_rng: Generator) -> None: + @pytest.mark.parametrize("outcome", [0, 1]) + def test_noisy_x_hadamard(self, fx_rng: Generator, outcome: Outcome) -> None: hadamardpattern = hpat() # x error only x_error_pr = fx_rng.random() - print(f"x_error_pr = {x_error_pr}") + print(f"x_error_pr = {x_error_pr}, outcome = {outcome}") res = hadamardpattern.simulate_pattern( backend="densitymatrix", noise_model=DepolarisingNoiseModel(x_error_prob=x_error_pr), + branch_selector=ConstBranchSelector(outcome), rng=fx_rng, ) - # analytical result since deterministic pattern output is |0>. - # if no X applied, no noise. 
If X applied X noise on |0><0| - + # Pattern has X(1, {0}), so X error noise only applied when outcome=1 assert isinstance(res, DensityMatrix) - assert np.allclose(res.rho, np.array([[1.0, 0.0], [0.0, 0.0]])) or np.allclose( - res.rho, - np.array([[1 - 2 * x_error_pr / 3.0, 0.0], [0.0, 2 * x_error_pr / 3.0]]), - ) + if outcome == 0: + # No X correction → no X error noise + assert np.allclose(res.rho, np.array([[1.0, 0.0], [0.0, 0.0]])) + else: + # X correction applied → X error noise applied + assert np.allclose( + res.rho, + np.array([[1 - 2 * x_error_pr / 3.0, 0.0], [0.0, 2 * x_error_pr / 3.0]]), + ) # test entanglement error def test_noisy_entanglement_hadamard(self, fx_rng: Generator) -> None: @@ -310,67 +326,86 @@ def test_noisy_measure_channel_rz(self, fx_rng: Generator) -> None: ), ) - def test_noisy_x_rz(self, fx_rng: Generator) -> None: + @pytest.mark.parametrize("z_outcome", [0, 1]) + @pytest.mark.parametrize("x_outcome", [0, 1]) + def test_noisy_x_rz(self, fx_rng: Generator, z_outcome: Outcome, x_outcome: Outcome) -> None: alpha = fx_rng.random() rzpattern = rzpat(alpha) # x error only x_error_pr = fx_rng.random() - print(f"x_error_pr = {x_error_pr}") + print(f"x_error_pr = {x_error_pr}, outcome_z = {z_outcome}, outcome_x = {x_outcome}") + + # M(0) determines Z, M(1) determines X + m_nodes = (cmd.node for cmd in rzpattern if cmd.kind == CommandKind.M) + results: dict[int, Outcome] = {next(m_nodes): z_outcome, next(m_nodes): x_outcome} + res = rzpattern.simulate_pattern( backend="densitymatrix", noise_model=DepolarisingNoiseModel(x_error_prob=x_error_pr), + branch_selector=FixedBranchSelector(results), rng=fx_rng, ) - # only two cases: if no X correction, Z or no Z correction but exact result. - # If X correction the noise result is the same with or without the PERFECT Z correction. + # Pattern has X(2, {1}), so X error noise only applied when x_outcome=1 assert isinstance(res, DensityMatrix) rad = angle_to_rad(alpha) - assert np.allclose( - res.rho, - 0.5 * np.array([[1.0, np.exp(-1j * rad)], [np.exp(1j * rad), 1.0]]), - ) or np.allclose( - res.rho, - 0.5 - * np.array( - [ - [1.0, np.exp(-1j * rad) * (3 - 4 * x_error_pr) / 3], - [np.exp(1j * rad) * (3 - 4 * x_error_pr) / 3, 1.0], - ], - ), - ) + if x_outcome == 0: + # No X correction → no X error noise + assert np.allclose(res.rho, rz_exact_res(alpha)) + else: + # X correction applied → X error noise applied + assert np.allclose( + res.rho, + 0.5 + * np.array( + [ + [1.0, np.exp(-1j * rad) * (3 - 4 * x_error_pr) / 3], + [np.exp(1j * rad) * (3 - 4 * x_error_pr) / 3, 1.0], + ], + ), + ) - def test_noisy_z_rz(self, fx_rng: Generator) -> None: + @pytest.mark.parametrize("outcome_z", [0, 1]) + @pytest.mark.parametrize("outcome_x", [0, 1]) + def test_noisy_z_rz(self, fx_rng: Generator, outcome_z: Outcome, outcome_x: Outcome) -> None: alpha = fx_rng.random() rzpattern = rzpat(alpha) # z error only z_error_pr = fx_rng.random() - print(f"z_error_pr = {z_error_pr}") + print(f"z_error_pr = {z_error_pr}, outcome_z = {outcome_z}, outcome_x = {outcome_x}") + + # M(0) determines Z, M(1) determines X + results: dict[int, Outcome] = {0: outcome_z, 1: outcome_x} + res = rzpattern.simulate_pattern( backend="densitymatrix", noise_model=DepolarisingNoiseModel(z_error_prob=z_error_pr), + branch_selector=FixedBranchSelector(results), rng=fx_rng, ) - # only two cases: if no Z correction, X or no X correction but exact result. - # If Z correction the noise result is the same with or without the PERFECT X correction. 
+ # Pattern has Z(2, {0}), so Z error noise only applied when outcome_z=1 assert isinstance(res, DensityMatrix) rad = angle_to_rad(alpha) - assert np.allclose( - res.rho, - 0.5 * np.array([[1.0, np.exp(-1j * rad)], [np.exp(1j * rad), 1.0]]), - ) or np.allclose( - res.rho, - 0.5 - * np.array( - [ - [1.0, np.exp(-1j * rad) * (3 - 4 * z_error_pr) / 3], - [np.exp(1j * rad) * (3 - 4 * z_error_pr) / 3, 1.0], - ], - ), - ) + if outcome_z == 0: + # No Z correction → no Z error noise + assert np.allclose(res.rho, rz_exact_res(alpha)) + else: + # Z correction applied → Z error noise applied + assert np.allclose( + res.rho, + 0.5 + * np.array( + [ + [1.0, np.exp(-1j * rad) * (3 - 4 * z_error_pr) / 3], + [np.exp(1j * rad) * (3 - 4 * z_error_pr) / 3, 1.0], + ], + ), + ) - def test_noisy_xz_rz(self, fx_rng: Generator) -> None: + @pytest.mark.parametrize("z_outcome", [0, 1]) + @pytest.mark.parametrize("x_outcome", [0, 1]) + def test_noisy_xz_rz(self, fx_rng: Generator, z_outcome: Outcome, x_outcome: Outcome) -> None: alpha = fx_rng.random() rzpattern = rzpat(alpha) # x and z errors @@ -378,28 +413,39 @@ def test_noisy_xz_rz(self, fx_rng: Generator) -> None: print(f"x_error_pr = {x_error_pr}") z_error_pr = fx_rng.random() print(f"z_error_pr = {z_error_pr}") + print(f"z_outcome = {z_outcome}, x_outcome = {x_outcome}") + + # M(0) determines Z correction, M(1) determines X correction + results: dict[int, Outcome] = {0: z_outcome, 1: x_outcome} + res = rzpattern.simulate_pattern( backend="densitymatrix", noise_model=DepolarisingNoiseModel(x_error_prob=x_error_pr, z_error_prob=z_error_pr), + branch_selector=FixedBranchSelector(results), rng=fx_rng, ) - # 4 cases : no corr, noisy X, noisy Z, noisy XZ. + # Pattern has X(2, {1}) and Z(2, {0}), noise applied conditionally assert isinstance(res, DensityMatrix) rad = angle_to_rad(alpha) - assert ( - np.allclose(res.rho, 0.5 * np.array([[1.0, np.exp(-1j * rad)], [np.exp(1j * rad), 1.0]])) - or np.allclose( + if z_outcome == 0 and x_outcome == 0: + # No corrections → no noise + assert np.allclose(res.rho, rz_exact_res(alpha)) + elif z_outcome == 0 and x_outcome == 1: + # Only X correction → only X noise + assert np.allclose( res.rho, 0.5 * np.array( [ - [1.0, np.exp(-1j * rad) * (3 - 4 * x_error_pr) * (3 - 4 * z_error_pr) / 9], - [np.exp(1j * rad) * (3 - 4 * x_error_pr) * (3 - 4 * z_error_pr) / 9, 1.0], + [1.0, np.exp(-1j * rad) * (3 - 4 * x_error_pr) / 3], + [np.exp(1j * rad) * (3 - 4 * x_error_pr) / 3, 1.0], ], ), ) - or np.allclose( + elif z_outcome == 1 and x_outcome == 0: + # Only Z correction → only Z noise + assert np.allclose( res.rho, 0.5 * np.array( @@ -409,47 +455,69 @@ def test_noisy_xz_rz(self, fx_rng: Generator) -> None: ], ), ) - or np.allclose( + else: # z_outcome == 1 and x_outcome == 1 + # Both corrections → both noises + assert np.allclose( res.rho, 0.5 * np.array( [ - [1.0, np.exp(-1j * rad) * (3 - 4 * x_error_pr) / 3], - [np.exp(1j * rad) * (3 - 4 * x_error_pr) / 3, 1.0], + [1.0, np.exp(-1j * rad) * (3 - 4 * x_error_pr) * (3 - 4 * z_error_pr) / 9], + [np.exp(1j * rad) * (3 - 4 * x_error_pr) * (3 - 4 * z_error_pr) / 9, 1.0], ], ), ) - ) # test measurement confuse outcome - def test_noisy_measure_confuse_rz(self, fx_rng: Generator) -> None: + @pytest.mark.parametrize("z_outcome", [0, 1]) + @pytest.mark.parametrize("x_outcome", [0, 1]) + def test_noisy_measure_confuse_rz(self, fx_rng: Generator, z_outcome: Outcome, x_outcome: Outcome) -> None: alpha = fx_rng.random() rzpattern = rzpat(alpha) - # probability 1 to shift both outcome + + # M(0) 
determines Z, M(1) determines X + results: dict[int, Outcome] = {0: z_outcome, 1: x_outcome} + + # Test with probability 1 to flip both outcomes res = rzpattern.simulate_pattern( - backend="densitymatrix", noise_model=DepolarisingNoiseModel(measure_error_prob=1.0), rng=fx_rng + backend="densitymatrix", + noise_model=DepolarisingNoiseModel(measure_error_prob=1.0), + branch_selector=FixedBranchSelector(results), + rng=fx_rng, ) - # result X, XZ or Z exact = rz_exact_res(alpha) - assert isinstance(res, DensityMatrix) - assert ( - np.allclose(res.rho, Ops.X @ exact @ Ops.X) - or np.allclose(res.rho, Ops.Z @ exact @ Ops.Z) - or np.allclose(res.rho, Ops.Z @ Ops.X @ exact @ Ops.X @ Ops.Z) - ) + # All outcomes lead to same result: both corrections applied due to flipping + assert np.allclose(res.rho, Ops.Z @ Ops.X @ exact @ Ops.X @ Ops.Z) + + @pytest.mark.parametrize("z_outcome", [0, 1]) + @pytest.mark.parametrize("x_outcome", [0, 1]) + def test_noisy_measure_confuse_rz_arbitrary( + self, fx_rng: Generator, z_outcome: Outcome, x_outcome: Outcome + ) -> None: + alpha = fx_rng.random() + rzpattern = rzpat(alpha) + + # M(0) determines Z, M(1) determines X + results: dict[int, Outcome] = {0: z_outcome, 1: x_outcome} - # arbitrary probability + # Test with arbitrary probability measure_error_pr = fx_rng.random() - print(f"measure_error_pr = {measure_error_pr}") + print(f"measure_error_pr = {measure_error_pr}, z_outcome = {z_outcome}, x_outcome = {x_outcome}") res = rzpattern.simulate_pattern( backend="densitymatrix", noise_model=DepolarisingNoiseModel(measure_error_prob=measure_error_pr), + branch_selector=FixedBranchSelector(results), rng=fx_rng, ) - # just add the case without readout errors + + exact = rz_exact_res(alpha) assert isinstance(res, DensityMatrix) + + # With arbitrary measure_error_pr, outcomes may or may not be flipped + # The physical result depends on whether the error occurs + # We check all possible cases assert ( np.allclose(res.rho, exact) or np.allclose(res.rho, Ops.X @ exact @ Ops.X) diff --git a/tests/test_pattern.py b/tests/test_pattern.py index ff6306c9b..b0bd65d35 100644 --- a/tests/test_pattern.py +++ b/tests/test_pattern.py @@ -19,7 +19,7 @@ from graphix.fundamentals import ANGLE_PI, Angle, Plane from graphix.measurements import Measurement, Outcome, PauliMeasurement from graphix.opengraph import OpenGraph -from graphix.pattern import Pattern, RunnabilityError, RunnabilityErrorReason, shift_outcomes +from graphix.pattern import Pattern, PatternError, RunnabilityError, RunnabilityErrorReason, shift_outcomes from graphix.random_objects import rand_circuit, rand_gate from graphix.sim.density_matrix import DensityMatrix from graphix.sim.statevec import Statevec @@ -53,7 +53,7 @@ def test_init(self) -> None: pattern = Pattern(input_nodes=[1, 0], cmds=[N(node=2), M(node=1)], output_nodes=[2, 0]) assert pattern.input_nodes == [1, 0] assert pattern.output_nodes == [2, 0] - with pytest.raises(ValueError): + with pytest.raises(PatternError): Pattern(input_nodes=[1, 0], cmds=[N(node=2), M(node=1)], output_nodes=[0, 1, 2]) def test_eq(self) -> None: @@ -300,7 +300,7 @@ def test_pauli_measurement_error(self, fx_rng: Generator) -> None: circuit = rand_circuit(nqubits, depth, fx_rng) pattern = circuit.transpile().pattern pattern.standardize() - with pytest.raises(ValueError): + with pytest.raises(PatternError): pattern.perform_pauli_measurements() def test_pauli_measurement_leave_input(self) -> None: @@ -321,7 +321,7 @@ def test_pauli_measurement_leave_input(self) -> None: swap(circuit, 
0, 2) pattern = circuit.transpile().pattern pattern.standardize() - with pytest.raises(ValueError): + with pytest.raises(PatternError): pattern.perform_pauli_measurements() @pytest.mark.parametrize("jumps", range(1, 6)) @@ -527,17 +527,19 @@ def test_compose_1(self) -> None: assert pc == p assert mapping_c == {0: 1, 2: 5} - with pytest.raises(ValueError, match=r"Keys of `mapping` must correspond to the nodes of `other`."): + with pytest.raises(PatternError, match=r"Keys of `mapping` must correspond to the nodes of `other`."): p1.compose(p2, mapping={0: 1, 2: 5, 1: 2}) - with pytest.raises(ValueError, match=r"Values of `mapping` contain duplicates."): + with pytest.raises(PatternError, match=r"Values of `mapping` contain duplicates."): p1.compose(p2, mapping={0: 1, 2: 1}) - with pytest.raises(ValueError, match=r"Values of `mapping` must not contain measured nodes of pattern `self`."): + with pytest.raises( + PatternError, match=r"Values of `mapping` must not contain measured nodes of pattern `self`." + ): p1.compose(p2, mapping={0: 1, 2: 0}) with pytest.raises( - ValueError, + PatternError, match=r"Mapping 2 -> 1 is not valid. 1 is an output of pattern `self` but 2 is not an input of pattern `other`.", ): p1.compose(p2, mapping={2: 1})
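The parametrized tests above pin measurement outcomes with branch selectors so that byproduct corrections, and therefore the conditional noise introduced in this PR, fire deterministically. A minimal usage sketch under stated assumptions (the pattern comes from elsewhere, e.g. the `hpat()`/`rzpat()` helpers of `tests/test_noisy_density_matrix.py`; only names that appear in this diff are used):

```python
import numpy as np

from graphix.branch_selector import FixedBranchSelector
from graphix.noise_models import DepolarisingNoiseModel
from graphix.pattern import Pattern


def simulate_with_fixed_outcomes(pattern: Pattern, outcomes: dict[int, int], x_error_prob: float):
    """Simulate `pattern`, forcing each node in `outcomes` to the given result (0 or 1).

    With the outcomes pinned, whether an X/Z correction (and any noise attached to
    it via `domain`) is applied becomes deterministic, so the resulting density
    matrix can be compared against a closed-form expectation, as in the tests above.
    """
    return pattern.simulate_pattern(
        backend="densitymatrix",
        noise_model=DepolarisingNoiseModel(x_error_prob=x_error_prob),
        branch_selector=FixedBranchSelector(outcomes),
        rng=np.random.default_rng(0),  # any NumPy Generator; outcomes are pinned by the selector
    )
```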