Conversation
Force-pushed from 189b140 to c6f7c15.
Force-pushed from c6f7c15 to d6bf440.
I've only browsed through this quickly so far. The design is definitely different from what I had in mind. What I had in mind was that "all" TensorKit functions would accept three final arguments (probably in this order): …

Ultimately that is of course the same, except that now you first have to wrap the underlying TensorOperations backend.
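For illustration, a minimal sketch of what such a trailing-argument convention could look like; the function, the argument names and their order are assumptions here, not TensorKit's actual API:

```julia
import TensorOperations as TO
using OhMyThreads: SerialScheduler

# Hypothetical signature: heavy TensorKit functions take the backend,
# allocator and scheduler as optional final arguments.
function blockwise_add_sketch!(tdst, tsrc, α, β,
                               backend = TO.DefaultBackend(),
                               allocator = TO.DefaultAllocator(),
                               scheduler = SerialScheduler())
    # a real implementation would forward `backend`/`allocator` to the
    # TensorOperations kernels and use `scheduler` for the loop over blocks
    return tdst
end
```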
Also, in response to the question: yes, also …
I think having some kind of wrapper is a bit inevitable, since for example in …

The logic of using a …

One thing that I am kind of becoming more and more in favour of is the idea of simply putting the …

In some sense, what I would see as a balance between these things is:
To keep consistency, I'm also okay with having

```julia
struct BlockAlgorithm <: AbstractAlgorithm
    scheduler
    algorithm
end
```

In any case, it's a bit hard to reason about this without the …
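For what it's worth, a rough sketch of how such a `BlockAlgorithm` wrapper could be consumed; apart from the two field names taken from the struct above, everything here is illustrative:

```julia
using OhMyThreads: tforeach, SerialScheduler

# Apply `alg.algorithm` to every (sector => block) pair, parallelized over
# blocks by `alg.scheduler`. `collect` gives an indexable collection so that
# ChunkSplitters can chunk it.
function foreach_block_sketch(f, blocks, alg)
    tforeach(collect(blocks); scheduler = alg.scheduler) do (c, b)
        f(alg.algorithm, c, b)
    end
end
```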
I am definitely in favor of something that can be controlled via scoped values.
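For reference, this is roughly the shape the scoped-value control takes later in this thread (`with_blockscheduler` and `with_subblockscheduler` appear in the PR's `src/tensors/backends.jl`); a minimal usage sketch:

```julia
using TensorKit
using OhMyThreads: DynamicScheduler

TensorKit.with_blockscheduler(DynamicScheduler()) do
    TensorKit.with_subblockscheduler(DynamicScheduler()) do
        # TensorKit operations in this dynamic scope pick up the schedulers
        # through the scoped values bound by the two wrappers.
    end
end
```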
Force-pushed from bb01e1a to 9a845b0.
Force-pushed from 9a845b0 to 22739bb.
Codecov Report
Attention: Patch coverage is …

@@ Coverage Diff @@
## master #203 +/- ##
==========================================
- Coverage 82.51% 77.34% -5.17%
==========================================
Files 43 44 +1
Lines 5552 5620 +68
==========================================
- Hits 4581 4347 -234
- Misses 971 1273 +302
Jutho left a comment:
A first set of comments; I still have to go through src/tensors/tensoroperations.jl, but this looks very promising!
Some feedback:

1. ERROR: LoadError: MethodError: no method matching setindex!(::ScopedValue{Scheduler}, ::DynamicScheduler{OhMyThreads.Schedulers.FixedCount, ChunkSplitters.Consecutive})
The function `setindex!` exists, but no method is defined for this combination of argument types.

2. When …:
ERROR: LoadError: MethodError: no method matching getindex(::Base.Generator{Vector{ProductSector{Tuple{FermionParity, U1Irrep, U1Irrep}}}, TensorKit.var"#157#158"{BraidingTensor{Float64, GradedSpace{ProductSector{Tuple{FermionParity, U1Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, U1Irrep, U1Irrep}}, Int64}}}}}, ::ProductSector{Tuple{FermionParity, U1Irrep, U1Irrep}})

3. When …:
ERROR: LoadError: ArgumentError: Arguments of type TensorKit.BlockIterator{TensorMap{Float64, GradedSpace{ProductSector{Tuple{FermionParity, U1Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, U1Irrep, U1Irrep}}, Int64}}, 2, 1, Vector{Float64}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, U1Irrep, U1Irrep}}, Tuple{Tuple{Int64, Int64}, UnitRange{Int64}}}} are not compatible with chunks, either implement a custom chunks method for your type, or implement the custom type interface (see https://juliafolds2.github.io/ChunkSplitters.jl/dev/)
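Regarding the first error: a `ScopedValue` has no `setindex!` method, so it cannot be assigned to directly; the scoped-value mechanism binds a value for a dynamic scope via `with` (or `@with`). A minimal sketch, with an illustrative variable name:

```julia
using Base.ScopedValues: ScopedValue, with   # Julia ≥ 1.11
using OhMyThreads: DynamicScheduler

sv = ScopedValue{Any}(nothing)

# sv[] = DynamicScheduler()          # MethodError: ScopedValues have no setindex!
with(sv => DynamicScheduler()) do    # correct: bind the value for this dynamic scope
    @assert sv[] isa DynamicScheduler
end
```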
In the second case, I changed `bAs` and `bBs` to dicts, and the program ran successfully. I then simulated a Hubbard model of size 2×5 with D=512 and found that the speed was faster than both the current master version and the old version.
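For context, the second error comes from calling `getindex` on a `Base.Generator`, which only supports iteration; materializing it into a `Dict`, as described here, is what makes keyed lookup possible. A toy illustration with made-up data standing in for `bAs`:

```julia
sectors = [:a, :b, :c]                    # stand-in for a list of sectors
gen = (c => String(c) for c in sectors)   # a Generator: iterable, but no getindex
# gen[:b]                                 # would throw a MethodError like the one above
bAs = Dict(gen)                           # a Dict supports keyed lookup
bAs[:b]                                   # == "b"
```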
Warning: The function `add!` is not implemented for (values of) type `Tuple{Base.ReshapedArray{Float64, 2, SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}, Tuple{}}, Float64, VectorInterface.One, VectorInterface.One}`;
│ this fallback will disappear in future versions of VectorInterface.jl
└ @ VectorInterface ~/.julia/packages/VectorInterface/J6qCR/src/fallbacks.jl:143
ERROR: LoadError: ArgumentError: No fallback for applying `add!` to (values of) type `Tuple{Base.ReshapedArray{Float64, 2, SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}, Tuple{}}, Float64, VectorInterface.One, VectorInterface.One}` could be determined
Stacktrace:
[1] add!(y::Base.ReshapedArray{Float64, 2, SubArray{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int64}}, true}, Tuple{}}, x::Float64, α::VectorInterface.One, β::VectorInterface.One)
@ VectorInterface ~/.julia/packages/VectorInterface/J6qCR/src/fallbacks.jl:150
[2] add!(ty::TensorMap{Float64, GradedSpace{ProductSector{Tuple{FermionParity, U1Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, U1Irrep, U1Irrep}}, Int64}}, 2, 2, Vector{Float64}}, tx::BraidingTensor{Float64, GradedSpace{ProductSector{Tuple{FermionParity, U1Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, U1Irrep, U1Irrep}}, Int64}}}, α::VectorInterface.One, β::VectorInterface.One)
@ TensorKit ~/Library/Mobile Documents/com~apple~CloudDocs/Clone/Jutho/TensorKit.jl-ld-multithreading2/src/tensors/vectorinterface.jl:77
[3] add!
@ ~/.julia/packages/VectorInterface/J6qCR/src/interface.jl:124 [inlined]
[4] add(ty::TensorMap{Float64, GradedSpace{ProductSector{Tuple{FermionParity, U1Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, U1Irrep, U1Irrep}}, Int64}}, 2, 2, Vector{Float64}}, tx::BraidingTensor{Float64, GradedSpace{ProductSector{Tuple{FermionParity, U1Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, U1Irrep, U1Irrep}}, Int64}}}, α::VectorInterface.One, β::VectorInterface.One)
@ TensorKit ~/Library/Mobile Documents/com~apple~CloudDocs/Clone/Jutho/TensorKit.jl-ld-multithreading2/src/tensors/vectorinterface.jl:71
[5] add
@ ~/.julia/packages/VectorInterface/J6qCR/src/interface.jl:107 [inlined]
[6] +(t1::TensorMap{Float64, GradedSpace{ProductSector{Tuple{FermionParity, U1Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, U1Irrep, U1Irrep}}, Int64}}, 2, 2, Vector{Float64}}, t2::BraidingTensor{Float64, GradedSpace{ProductSector{Tuple{FermionParity, U1Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, U1Irrep, U1Irrep}}, Int64}}})
@ TensorKit ~/Library/Mobile Documents/com~apple~CloudDocs/Clone/Jutho/TensorKit.jl-ld-multithreading2/src/tensors/linalg.jl:7
Thanks for the feedback! For 3., this is an interesting thing we might want to work around or work with: the collections that …
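For reference, a purely illustrative sketch of the workaround space: lazy iterators such as `Iterators.product` are not chunkable by ChunkSplitters, so one can either materialize them before handing them to OhMyThreads or implement the custom chunking interface mentioned in the error message.

```julia
using OhMyThreads: tforeach, DynamicScheduler

itr = Iterators.product(1:3, 1:4)   # lazy ProductIterator: not chunkable as-is

# Materializing into a Vector gives an indexable collection that can be chunked:
tforeach(collect(itr); scheduler = DynamicScheduler()) do (i, j)
    # per-element work goes here
end
```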
Thank you very much for your prompt fix -- it has resolved all the bugs I have found.
Possible bug? In multithreading, using SU(2) symmetry causes @planar to throw an error, but using only U(1) symmetry does not.

With SU(2):

```julia
TensorKit.with_blockscheduler(DynamicScheduler()) do
    TensorKit.with_subblockscheduler(DynamicScheduler()) do
        E = e_plus(Float64, SU2Irrep, U1Irrep; side=:L, filling=filling)'
        F = isomorphism(storagetype(E), flip(space(E, 2)), space(E, 2))
        @planar e⁻[-1; -2 -3] := E[-1 1; -2] * F[-3; 1]
    end
end
```
ERROR: LoadError: ArgumentError: Arguments of type Base.Iterators.ProductIterator{Tuple{Base.Iterators.ProductIterator{Tuple{TensorKitSectors.SectorSet{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.var"#93#94"{GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}}, Vector{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}}}, TensorKitSectors.SectorSet{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.var"#93#94"{GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}}, Vector{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}}}}}, Base.Iterators.ProductIterator{Tuple{TensorKitSectors.SectorSet{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.var"#93#94"{GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}}, Vector{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}}}}}}} are not compatible with chunks, either implement a custom chunks method for your type, or implement the custom type interface (see https://juliafolds2.github.io/ChunkSplitters.jl/dev/)
Stacktrace:
With U(1) only, the same code runs fine:

```julia
TensorKit.with_blockscheduler(DynamicScheduler()) do
    TensorKit.with_subblockscheduler(DynamicScheduler()) do
        E = e_plus(Float64, U1Irrep, U1Irrep; side=:L, filling=filling)'
        F = isomorphism(storagetype(E), flip(space(E, 2)), space(E, 2))
        @planar e⁻[-1; -2 -3] := E[-1 1; -2] * F[-3; 1]
        println(e⁻)
    end
end
```
TensorMap(Vect[(FermionParity ⊠ Irrep[U₁] ⊠ Irrep[U₁])]((0, 0, 1)=>1, (0, 0, -1)=>1, (1, 1, 0)=>1, (1, -1, 0)=>1) ← (Vect[(FermionParity ⊠ Irrep[U₁] ⊠ Irrep[U₁])]((0, 0, 1)=>1, (0, 0, -1)=>1, (1, 1, 0)=>1, (1, -1, 0)=>1) ⊗ Vect[(FermionParity ⊠ Irrep[U₁] ⊠ Irrep[U₁])]((1, -1, -1)=>1))):
* Data for sector ((FermionParity(0) ⊠ Irrep[U₁](0) ⊠ Irrep[U₁](-1)),) ← ((FermionParity(1) ⊠ Irrep[U₁](1) ⊠ Irrep[U₁](0)), (FermionParity(1) ⊠ Irrep[U₁](-1) ⊠ Irrep[U₁](-1))):
[:, :, 1] =
1.0
* Data for sector ((FermionParity(1) ⊠ Irrep[U₁](-1) ⊠ Irrep[U₁](0)),) ← ((FermionParity(0) ⊠ Irrep[U₁](0) ⊠ Irrep[U₁](1)), (FermionParity(1) ⊠ Irrep[U₁](-1) ⊠ Irrep[U₁](-1))):
[:, :, 1] =
-1.0

Could you also attach what `e_plus` is?
`e_plus` is the same creation operator you originally had in MPSKitModels, except that I added a `filling` parameter. Setting `filling = (1, 1)` is fine.

```julia
function e_plus(elt::Type{<:Number}, ::Type{SU2Irrep}, ::Type{U1Irrep}; side=:L, filling=filling)
    I = FermionParity ⊠ SU2Irrep ⊠ U1Irrep
    P, Q = filling
    pspace = Vect[I]((0, 0, -P) => 1, (1, 1//2, Q - P) => 1, (0, 0, 2 * Q - P) => 1)
    vspace = Vect[I]((1, 1//2, Q) => 1)
    if side == :L
        e⁺ = TensorMap(zeros, elt, pspace ← pspace ⊗ vspace)
        block(e⁺, I(0, 0, 2 * Q - P)) .= sqrt(2)
        block(e⁺, I(1, 1//2, Q - P)) .= 1
    elseif side == :R
        E = e_plus(elt, SU2Irrep, U1Irrep; side=:L, filling=filling)
        F = isomorphism(storagetype(E), vspace, flip(vspace))
        @planar e⁺[-1 -2; -3] := E[-2; 1 2] * τ[1 2; 3 -3] * F[3; -1]
    end
    return e⁺
end
```
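For completeness, a usage example mirroring the reproduction posted earlier in this thread, with the concrete `filling = (1, 1)` mentioned above:

```julia
E = e_plus(Float64, SU2Irrep, U1Irrep; side = :L, filling = (1, 1))'
F = isomorphism(storagetype(E), flip(space(E, 2)), space(E, 2))
@planar e⁻[-1; -2 -3] := E[-1 1; -2] * F[-3; 1]
```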
The bug occurs when `e_min` is created; this is the full error info:

ERROR: LoadError: ArgumentError: Arguments of type Base.Iterators.ProductIterator{Tuple{Base.Iterators.ProductIterator{Tuple{TensorKitSectors.SectorSet{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.var"#93#94"{GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}}, Vector{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}}}, TensorKitSectors.SectorSet{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.var"#93#94"{GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}}, Vector{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}}}}}, Base.Iterators.ProductIterator{Tuple{TensorKitSectors.SectorSet{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.var"#93#94"{GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}}, Vector{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}}}}}}} are not compatible with chunks, either implement a custom chunks method for your type, or implement the custom type interface (see https://juliafolds2.github.io/ChunkSplitters.jl/dev/)
Stacktrace:
[1] err_not_chunkable(::Base.Iterators.ProductIterator{Tuple{Base.Iterators.ProductIterator{Tuple{TensorKitSectors.SectorSet{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.var"#93#94"{GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}}, Vector{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}}}, TensorKitSectors.SectorSet{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.var"#93#94"{GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}}, Vector{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}}}}}, Base.Iterators.ProductIterator{Tuple{TensorKitSectors.SectorSet{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.var"#93#94"{GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}}, Vector{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}}}}}}})
@ ChunkSplitters.Internals ~/.julia/packages/ChunkSplitters/p2yrz/src/internals.jl:91
[2] ChunkSplitters.Internals.IndexChunks(s::ChunkSplitters.Consecutive; collection::Base.Iterators.ProductIterator{Tuple{Base.Iterators.ProductIterator{Tuple{TensorKitSectors.SectorSet{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.var"#93#94"{GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}}, Vector{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}}}, TensorKitSectors.SectorSet{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.var"#93#94"{GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}}, Vector{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}}}}}, Base.Iterators.ProductIterator{Tuple{TensorKitSectors.SectorSet{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.var"#93#94"{GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}}, Vector{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}}}}}}}, n::Int64, size::Nothing, minsize::Nothing)
@ ChunkSplitters.Internals ~/.julia/packages/ChunkSplitters/p2yrz/src/internals.jl:33
[3] index_chunks(collection::Base.Iterators.ProductIterator{Tuple{Base.Iterators.ProductIterator{Tuple{TensorKitSectors.SectorSet{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.var"#93#94"{GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}}, Vector{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}}}, TensorKitSectors.SectorSet{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.var"#93#94"{GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}}, Vector{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}}}}}, Base.Iterators.ProductIterator{Tuple{TensorKitSectors.SectorSet{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.var"#93#94"{GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}}, Vector{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}}}}}}}; n::Int64, size::Nothing, split::ChunkSplitters.Consecutive, minsize::Nothing)
@ ChunkSplitters.Internals ~/.julia/packages/ChunkSplitters/p2yrz/src/internals.jl:47
[4] _index_chunks(sched::DynamicScheduler{OhMyThreads.Schedulers.FixedCount, ChunkSplitters.Consecutive}, arg::Base.Iterators.ProductIterator{Tuple{Base.Iterators.ProductIterator{Tuple{TensorKitSectors.SectorSet{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.var"#93#94"{GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}}, Vector{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}}}, TensorKitSectors.SectorSet{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.var"#93#94"{GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}}, Vector{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}}}}}, Base.Iterators.ProductIterator{Tuple{TensorKitSectors.SectorSet{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.var"#93#94"{GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}}, Vector{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}}}}}}})
@ OhMyThreads.Implementation ~/.julia/packages/OhMyThreads/eiaNP/src/implementation.jl:27
[5] _tmapreduce(f::Function, op::Function, Arrs::Tuple{Base.Iterators.ProductIterator{Tuple{Base.Iterators.ProductIterator{Tuple{TensorKitSectors.SectorSet{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.var"#93#94"{GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}}, Vector{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}}}, TensorKitSectors.SectorSet{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.var"#93#94"{GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}}, Vector{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}}}}}, Base.Iterators.ProductIterator{Tuple{TensorKitSectors.SectorSet{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.var"#93#94"{GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}}, Vector{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}}}}}}}}, ::Type{Nothing}, scheduler::DynamicScheduler{OhMyThreads.Schedulers.FixedCount, ChunkSplitters.Consecutive}, mapreduce_kwargs::@NamedTuple{init::Nothing})
@ OhMyThreads.Implementation ~/.julia/packages/OhMyThreads/eiaNP/src/implementation.jl:106
[6] #tmapreduce#22
@ ~/.julia/packages/OhMyThreads/eiaNP/src/implementation.jl:85 [inlined]
[7] tmapreduce
@ ~/.julia/packages/OhMyThreads/eiaNP/src/implementation.jl:69 [inlined]
[8] tforeach(f::Function, A::Base.Iterators.ProductIterator{Tuple{Base.Iterators.ProductIterator{Tuple{TensorKitSectors.SectorSet{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.var"#93#94"{GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}}, Vector{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}}}, TensorKitSectors.SectorSet{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.var"#93#94"{GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}}, Vector{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}}}}}, Base.Iterators.ProductIterator{Tuple{TensorKitSectors.SectorSet{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.var"#93#94"{GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}}, Vector{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}}}}}}}; kwargs::@Kwargs{scheduler::DynamicScheduler{OhMyThreads.Schedulers.FixedCount, ChunkSplitters.Consecutive}})
@ OhMyThreads.Implementation ~/.julia/packages/OhMyThreads/eiaNP/src/implementation.jl:308
[9] tforeach
@ ~/.julia/packages/OhMyThreads/eiaNP/src/implementation.jl:307 [inlined]
[10] _add_general_kernel!
@ ~/Library/Mobile Documents/com~apple~CloudDocs/Clone/Jutho/TensorKit.jl-ld-multithreading2/src/tensors/indexmanipulations.jl:631 [inlined]
[11] add_transform_kernel!
@ ~/Library/Mobile Documents/com~apple~CloudDocs/Clone/Jutho/TensorKit.jl-ld-multithreading2/src/tensors/indexmanipulations.jl:585 [inlined]
[12] add_transform!(tdst::TensorMap{Float64, GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}, 2, 1, Vector{Float64}}, tsrc::TensorKit.AdjointTensorMap{Float64, GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}, 2, 1, TensorMap{Float64, GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}, 1, 2, Vector{Float64}}}, ::Tuple{Tuple{Int64, Int64}, Tuple{Int64}}, transformer::Function, α::VectorInterface.One, β::VectorInterface.Zero, backend::TensorKit.TensorKitBackend{TensorOperations.DefaultBackend, DynamicScheduler{OhMyThreads.Schedulers.FixedCount, ChunkSplitters.Consecutive}, DynamicScheduler{OhMyThreads.Schedulers.FixedCount, ChunkSplitters.Consecutive}}, allocator::TensorOperations.DefaultAllocator)
@ TensorKit ~/Library/Mobile Documents/com~apple~CloudDocs/Clone/Jutho/TensorKit.jl-ld-multithreading2/src/tensors/indexmanipulations.jl:490
[13] add_transform!(C::TensorMap{Float64, GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}, 2, 1, Vector{Float64}}, A::TensorKit.AdjointTensorMap{Float64, GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}, 2, 1, TensorMap{Float64, GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}, 1, 2, Vector{Float64}}}, pA::Tuple{Tuple{Int64, Int64}, Tuple{Int64}}, transformer::Function, α::VectorInterface.One, β::VectorInterface.Zero, backend::TensorOperations.DefaultBackend, allocator::TensorOperations.DefaultAllocator)
@ TensorKit ~/Library/Mobile Documents/com~apple~CloudDocs/Clone/Jutho/TensorKit.jl-ld-multithreading2/src/tensors/indexmanipulations.jl:462
[14] add_transform!
@ ~/Library/Mobile Documents/com~apple~CloudDocs/Clone/Jutho/TensorKit.jl-ld-multithreading2/src/tensors/indexmanipulations.jl:456 [inlined]
[15] add_transpose!
@ ~/Library/Mobile Documents/com~apple~CloudDocs/Clone/Jutho/TensorKit.jl-ld-multithreading2/src/tensors/indexmanipulations.jl:439 [inlined]
[16] planarcontract!(C::TensorMap{Float64, GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}, 2, 1, Vector{Float64}}, A::TensorKit.AdjointTensorMap{Float64, GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}, 2, 1, TensorMap{Float64, GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}, 1, 2, Vector{Float64}}}, pA::Tuple{Tuple{Int64, Int64}, Tuple{Int64}}, B::TensorMap{Float64, GradedSpace{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, TensorKit.SortedVectorDict{ProductSector{Tuple{FermionParity, SU2Irrep, U1Irrep}}, Int64}}, 1, 1, Vector{Float64}}, pB::Tuple{Tuple{Int64}, Tuple{Int64}}, pAB::Tuple{Tuple{Int64, Int64}, Tuple{Int64}}, α::VectorInterface.One, β::VectorInterface.Zero, backend::TensorOperations.DefaultBackend, allocator::TensorOperations.DefaultAllocator)
@ TensorKit ~/Library/Mobile Documents/com~apple~CloudDocs/Clone/Jutho/TensorKit.jl-ld-multithreading2/src/planar/planaroperations.jl:161
[17] planarcontract!
@ ~/Library/Mobile Documents/com~apple~CloudDocs/Clone/Jutho/TensorKit.jl-ld-multithreading2/src/planar/planaroperations.jl:115 [inlined]
[18] planarcontract!
@ ~/Library/Mobile Documents/com~apple~CloudDocs/Clone/Jutho/TensorKit.jl-ld-multithreading2/src/planar/planaroperations.jl:110 [inlined]
[19] e_min(elt::Type{Float64}, particle_symmetry::Type{SU2Irrep}, spin_symmetry::Type{U1Irrep}; side::Symbol, filling::Tuple{Int64, Int64})
@ DynamicalCorrelators ~/Library/Mobile Documents/com~apple~CloudDocs/mygit/DynamicalCorrelators.jl/src/operators/fermions.jl:239
[20] (::var"#2#4")()
@ Main ~/Library/Mobile Documents/com~apple~CloudDocs/mygit/projects/000_test/tdvp/OhMyTh/LNO.jl:243
[21] #with_subblockscheduler#162
@ ~/Library/Mobile Documents/com~apple~CloudDocs/Clone/Jutho/TensorKit.jl-ld-multithreading2/src/tensors/backends.jl:54 [inlined]
[22] with_subblockscheduler
@ ~/Library/Mobile Documents/com~apple~CloudDocs/Clone/Jutho/TensorKit.jl-ld-multithreading2/src/tensors/backends.jl:52 [inlined]
[23] (::var"#1#3")()
@ Main ~/Library/Mobile Documents/com~apple~CloudDocs/mygit/projects/000_test/tdvp/OhMyTh/LNO.jl:238
[24] with_blockscheduler(f::var"#1#3", scheduler::DynamicScheduler{OhMyThreads.Schedulers.FixedCount, ChunkSplitters.Consecutive}; kwargs::@Kwargs{})
@ TensorKit ~/Library/Mobile Documents/com~apple~CloudDocs/Clone/Jutho/TensorKit.jl-ld-multithreading2/src/tensors/backends.jl:39
[25] with_blockscheduler(f::Function, scheduler::DynamicScheduler{OhMyThreads.Schedulers.FixedCount, ChunkSplitters.Consecutive})
@ TensorKit ~/Library/Mobile Documents/com~apple~CloudDocs/Clone/Jutho/TensorKit.jl-ld-multithreading2/src/tensors/backends.jl:37
[26] top-level scope
@ ~/Library/Mobile Documents/com~apple~CloudDocs/mygit/projects/000_test/tdvp/OhMyTh/LNO.jl:237
in expression starting at /Users/zongyy/Library/Mobile Documents/com~apple~CloudDocs/mygit/projects/000_test/tdvp/OhMyTh/LNO.jl:237 |
Should now be resolved. Interesting to note here is that this occurs when permuting an `AdjointTensorMap`.
Any updates on the PR? I was wondering if a working version might come out soon. |
This is a continuation of #100 and #117 in an attempt to properly address multithreading over blocks in the various parts of the code.
To achieve this, I added:
- `backend`, `allocator` support to the TensorOperations functions
- `backend`, `allocator` support to the index manipulations
- a `TensorKitBackend` that holds a scheduler and a backend to pass on
- a `BlockIterator` to avoid having to find the cached structures in a multithreaded loop, reducing overall cache lookups and hopefully avoiding lock contention

Before continuing and pushing this through to the other functions, some questions:
- Should `mul!` take a scheduler or a backend?
- `permute!`, `add_permute!` and `tensoradd!` all do more or less the same thing; should we standardize on the `TensorOperations` functions, and write everything in terms of that?
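For illustration, a rough sketch of the `TensorKitBackend` idea described above; the field names and default constructor are assumptions (the stack traces in this thread only show that it parameterizes a TensorOperations backend together with two schedulers):

```julia
import TensorOperations as TO
using OhMyThreads: SerialScheduler

# Illustrative stand-in for the PR's TensorKitBackend: an inner
# TensorOperations backend to forward to, plus schedulers for the loop over
# blocks and for the work within each block.
struct TensorKitBackendSketch{B, S1, S2}
    backend::B
    blockscheduler::S1
    subblockscheduler::S2
end

TensorKitBackendSketch() =
    TensorKitBackendSketch(TO.DefaultBackend(), SerialScheduler(), SerialScheduler())
```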