
CompatHelper: bump compat for TensorKit to 0.16, (keep existing compat) #314

Open

github-actions[bot] wants to merge 20 commits into master from compathelper/new_version/2025-12-10-01-26-47-328-02937557752

Conversation

@github-actions (Contributor)

This pull request changes the compat entry for the TensorKit package from 0.15 to 0.15, 0.16.
This keeps the compat entries for earlier versions.

Note: I have not tested your package with this new compat entry.
It is your responsibility to make sure that your package tests pass before you merge this pull request.

@lkdvos force-pushed the compathelper/new_version/2025-12-10-01-26-47-328-02937557752 branch from 4e86ffc to 6c52cd3 on December 10, 2025 01:26
@Yue-Zhengyuan mentioned this pull request on Feb 2, 2026
@lkdvos (Member) commented Feb 3, 2026

Update: this can now be tackled, as MPSKit has been released.

@Yue-Zhengyuan self-assigned this on Feb 4, 2026
@codecov bot commented Feb 4, 2026

Codecov Report

❌ Patch coverage is 94.87179% with 2 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| src/algorithms/bp/beliefpropagation.jl | 50.00% | 1 Missing ⚠️ |
| src/algorithms/time_evolution/simpleupdate3site.jl | 0.00% | 1 Missing ⚠️ |

| Files with missing lines | Coverage Δ |
|---|---|
| src/PEPSKit.jl | 100.00% <ø> (ø) |
| src/algorithms/bp/gaugefix.jl | 71.59% <100.00%> (-25.00%) ⬇️ |
| .../algorithms/contractions/ctmrg/renormalize_edge.jl | 55.29% <ø> (-10.59%) ⬇️ |
| src/algorithms/ctmrg/ctmrg.jl | 85.29% <100.00%> (-5.48%) ⬇️ |
| src/algorithms/ctmrg/gaugefix.jl | 70.71% <100.00%> (-27.16%) ⬇️ |
| src/algorithms/ctmrg/sequential.jl | 100.00% <100.00%> (+1.58%) ⬆️ |
| src/algorithms/ctmrg/simultaneous.jl | 97.01% <100.00%> (-2.99%) ⬇️ |
| src/algorithms/time_evolution/evoltools.jl | 49.51% <100.00%> (-47.61%) ⬇️ |
| src/algorithms/truncation/bond_truncation.jl | 94.73% <100.00%> (ø) |
| src/algorithms/truncation/fullenv_truncation.jl | 96.92% <100.00%> (ø) |
| ... and 5 more | |

... and 29 files with indirect coverage changes


@Yue-Zhengyuan removed their assignment on Feb 4, 2026
@leburgel self-assigned this on Feb 4, 2026
@leburgel (Member) commented Feb 4, 2026

It seems the gauge fixing for CTMRG environments is broken here somehow, which I think then also breaks any attempt at using fixed-point differentiation for gradient computations. I'll try to have a look and figure out why this is happening now.

@leburgel (Member) commented Feb 4, 2026

So this seems to be an issue with the use of MatrixAlgebraKit.left_orth! here:

```julia
Qprev, = left_orth!(ρprev)
Qfinal, = left_orth!(ρfinal)
```

Somehow, something about the default behavior of left_orth! must have changed. In particular, the default left orthogonalization via QR (alg = :qr) leads to incorrect results here. Explicitly passing alg = :polar fixes the issue, though.

Looking into the problem a bit further by explicitly trying MatrixAlgebraKit.qr_compact!, I could see that even in this case the behavior depends on the chosen algorithm. Here alg = LAPACK_HouseholderQR() leads to test failures, while for example alg = Native_HouseholderQR() allows tests to pass.

The quickest fix is to just switch to left_orth!(A; alg = :polar), but it might be good to understand what actually changed. Would you have an idea what could have changed about the default behavior of left_orth!/qr_compact! @lkdvos?

[EDIT] I think the source of the behavior change is the loss of a default positive = true for left orth via QR after QuantumKitHub/TensorKit.jl#312. Explicitly using left_orth!(A; positive = true) fixes the tests here. While the fix is easy, this took a bit of digging to figure out.

I personally think the change in the default behavior of left_orth in TensorKit.jl might be a bit annoying from a practical point of view. It is clear that TensorKit.jl now adheres to the MatrixAlgebraKit.jl defaults, which is great. But figuring out that this specific behavior was changed required some explicit MatrixAlgebraKit.default_algorithm usage and further digging, as well as explicitly checking what the previous default behavior was. Is there any way to notify users of this, or is it just one of too many changes to really record it individually?
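
For concreteness, here is a minimal standalone sketch of the sensitivity described above. The random tensor, its spaces, and the use of TensorKit's randn constructor over those spaces are my own toy setup; the keyword forms are the ones quoted earlier in this thread, and this is not the actual PEPSKit gauge-fixing code:

```julia
using TensorKit

# Toy stand-in for the density-like tensor that gets orthogonalized during gauge fixing.
ρ = randn(ComplexF64, (ℂ^4 ⊗ ℂ^4) ← ℂ^4)

# New default: QR-based left orthogonalization without a positivity constraint on R,
# so Q is only determined up to a diagonal unitary (column phases), i.e. a residual gauge.
Q_default, = left_orth!(copy(ρ))

# Previous behavior: pin the gauge by requiring a positive diagonal in R ...
Q_positive, = left_orth!(copy(ρ); positive = true)

# ... or avoid the ambiguity altogether with a polar decomposition.
Q_polar, = left_orth!(copy(ρ); alg = :polar)
```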

@leburgel (Member) commented Feb 4, 2026

Aside from one timeout, everything seems to be working now. Do we want to wait for a DiagonalTensorMap(::SectorVector) constructor and a corresponding TensorKit.jl patch release before merging this?

@Yue-Zhengyuan (Member)

Oh, one more thing we need to eliminate:

```julia
# hacky way of computing the truncation error for current version of svd_trunc!
# TODO: replace once TensorKit updates to new MatrixAlgebraKit which returns truncation error as well
function _truncate_compact((U, S, V⁺), trunc::TruncationStrategy)
    if !(trunc isa NoTruncation) && !isempty(blocksectors(S))
        Ũ, S̃, Ṽ⁺ = truncate(svd_trunc!, (U, S, V⁺), trunc)[1]
        truncerror = sqrt(norm(S)^2 - norm(S̃)^2)
        return Ũ, S̃, Ṽ⁺, truncerror
    else
        return U, S, V⁺, zero(real(scalartype(S)))
    end
end
```
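
The error formula here is just the norm of the discarded singular values. A plain-vector toy illustration (numbers made up, not PEPSKit code):

```julia
using LinearAlgebra

# Toy singular values and a truncation keeping the largest two.
S = [3.0, 2.0, 0.5, 0.1]
S̃ = S[1:2]

# Discarded weight, exactly as computed in _truncate_compact above:
truncerror = sqrt(norm(S)^2 - norm(S̃)^2)   # == norm(S[3:end]) ≈ 0.5099
```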

Can you also help dig into MAK/TensorKit to find the function that truncates an already computed U, S, Vh and returns the truncation error, and call it here?

@lkdvos self-assigned this on Feb 4, 2026
@lkdvos removed their assignment on Feb 4, 2026
@leburgel (Member) commented Feb 4, 2026

> Can you also help dig into MAK/TensorKit to find the function that truncates an already computed U, S, Vh and returns the truncation error, and call it here?

I think I found it, so I removed _truncate_compact.

@leburgel (Member) commented Feb 4, 2026

I now somehow broke the type inference in sequential_projectors here:

```julia
T_dst = Base.promote_op(
    sequential_projectors, NTuple{3, Int}, typeof(network), typeof(env), typeof(alg)
)
```

Running without preallocating the projectors and info works fine though, so I'm very confused by this.
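
For reference, a minimal self-contained illustration of what this preallocation pattern relies on, using a toy function f in place of sequential_projectors (my own example, not PEPSKit code): if inference for the given argument types fails, promote_op falls back to an abstract or unexpected element type for the preallocated container, which is consistent with the non-preallocated path still working.

```julia
# Toy function standing in for sequential_projectors; Base.promote_op infers its
# return type for the given argument types without running it.
f(coords::NTuple{3, Int}, scale::Float64) = (first(coords) * scale, scale)

T_dst = Base.promote_op(f, NTuple{3, Int}, Float64)
# Tuple{Float64, Float64} here; if inference broke, this could instead be e.g. Any,
# and the preallocated array below would no longer be concretely typed.

dst = Vector{T_dst}(undef, 4)
dst[1] = f((1, 2, 3), 0.5)
```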

@leburgel (Member) commented Feb 5, 2026

There are some remaining issues with the new _singular_value_distance implementation:

```
LoadError: MethodError: no method matching iterate(::TensorKit.DiagonalTensorMap{Float64, TensorKit.GradedSpace{TensorKitSectors.FermionParity, Tuple{Int64, Int64}}, Vector{Float64}})
The function `iterate` exists, but no method is defined for this combination of argument types.

Stacktrace:
  [1] _singular_value_distance(::Tuple{TensorKit.DiagonalTensorMap{Float64, TensorKit.GradedSpace{TensorKitSectors.FermionParity, Tuple{Int64, Int64}}, Vector{Float64}}, TensorKit.DiagonalTensorMap{Float64, TensorKit.GradedSpace{TensorKitSectors.FermionParity, Tuple{Int64, Int64}}, Vector{Float64}}})
    @ PEPSKit ~/work/PEPSKit.jl/PEPSKit.jl/src/algorithms/ctmrg/ctmrg.jl:183
```

and

```
MethodError: no method matching similar(::TensorKit.SectorVector{Float64, TensorKitSectors.FermionParity, Vector{Float64}}, ::Type{Float64}, ::TensorKit.GradedSpace{TensorKitSectors.FermionParity, Tuple{Int64, Int64}})
The function `similar` exists, but no method is defined for this combination of argument types.

Stacktrace:
  [1] _singular_value_distance(::Tuple{TensorKit.SectorVector{Float64, TensorKitSectors.FermionParity, Vector{Float64}}, TensorKit.SectorVector{Float64, TensorKitSectors.FermionParity, Vector{Float64}}})
    @ PEPSKit ~/work/PEPSKit.jl/PEPSKit.jl/src/algorithms/ctmrg/ctmrg.jl:181
```

@lkdvos (Member) commented Feb 5, 2026

I think these will be solved as soon as TensorKit releases a new version, which I'm aiming to get done today. Does that work?

@lkdvos (Member) commented Feb 5, 2026

@Yue-Zhengyuan (Member)

Please remember to remove sv_to_dtm as it is now in TensorKit.

