Add default methods toarray and tosparse to LinearOperator
#368
base: devel
Conversation
… _tosparse_array, that contains the functionality to get both the sparse matrix and the full matrix
…ide the main for loop that kept allocating memory unnecessarily
@mateomarin97 We see that you have made changes to the code after Elena's questions, but it is not clear what exactly they are for. Could you please answer explicitly?
Hello. The changes were just aesthetic: I made the code more readable by implementing the suggestions Elena mentioned. The only thing stopping this function from being ready is that, for the sparse-matrix case, I could not write a Pyccel kernel to replace the for loop that appends the new matrix entries, since Pyccel did not support Python lists. As the code stands it works, but it is too slow because of this.
Now that I think about it, I am fairly certain I have a newer version of this branch that I did not push. I will push it later.
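Regarding the Pyccel limitation mentioned above: a common Pyccel-friendly workaround for the missing list support is to preallocate NumPy arrays with an upper bound on the number of entries and keep a running counter instead of appending to a Python list. The sketch below only illustrates this pattern; the helper name and arguments are hypothetical and it is not code from this PR.

```python
# Sketch of a Pyccel-friendly alternative to list.append (not code from this PR):
# preallocate COO-style arrays and keep a running counter.
import numpy as np

def collect_nonzeros(values, max_nnz):
    """Store the nonzero entries of a 2D array `values` in preallocated
    row/col/data arrays (illustrative helper, names are hypothetical)."""
    rows = np.zeros(max_nnz, dtype=np.int64)
    cols = np.zeros(max_nnz, dtype=np.int64)
    data = np.zeros(max_nnz, dtype=values.dtype)
    k = 0
    for i in range(values.shape[0]):
        for j in range(values.shape[1]):
            if values[i, j] != 0:
                rows[k] = i
                cols[k] = j
                data[k] = values[i, j]
                k += 1
    return rows[:k], cols[:k], data[:k]
```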
e-moral-sanchez left a comment
For clarity and to avoid repetition, I would group these lines within a single if BoS == 'b': ... elif ... block.
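As a rough illustration of the suggested structure (with placeholder branch contents, since the code under review is not quoted here, and assuming 'b'/'s' distinguish the block and stencil cases):

```python
# Hypothetical sketch of the grouping suggested above; the branch bodies and
# the meaning of 'b'/'s' are placeholders, not the PR's actual code.
def dispatch(BoS: str) -> str:
    if BoS == 'b':
        # group all lines specific to the 'b' (block) case here
        kind = 'block'
    elif BoS == 's':
        # group all lines specific to the 's' (stencil) case here
        kind = 'stencil'
    else:
        raise ValueError(f"Unrecognized BoS value: {BoS!r}")
    return kind
```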
Erased unnecessary file.
Add multigrid modules
Changed the title: toarray and tosparse of LinearOperator → LinearOperator.toarray and tosparse
Changed the title: LinearOperator.toarray and tosparse → toarray and tosparse to LinearOperator
@kvrigor The unit tests fail when uploading the coverage report to Codacy. Do you see anything wrong with our current configuration?
I think I understand the issue now: this PR originates from a forked repository, which does not have a proper Codacy token.
Parameters
----------
out : numpy.ndarray, optional
In principle, out could also be a scipy.sparse.csr.csr_matrix, no? If we don't allow that, then we should state it in the function docstrings.
----------
out : numpy.ndarray, optional
    If given, the output will be written in-place into this array.
is_sparse : bool, optional
I would rename is_sparse to return_sparse.
Returns
-------
out : numpy.ndarray or scipy.sparse.csr.csr_matrix
In this method there is no variable called out
for i0 in range(itterables2[0][0], itterables2[0][1]):
    if(tmp2[pds[0]+counter0] != 0):
        row = i0
        out[spoint2list+row, col] = tmp2[pds[0]+counter0]
Isn't the ghost region formed by pads*shifts? So here (and in the other kernels) you would need
out[spoint2list+row, col] = tmp2[pds[0]*mld[0]+counter0]
where mld comes from V.cart.shifts.
N1arrloc = N1P.tosparse().toarray()
# We get the global matrix of the PowerLinearOperator
N1arr = np.zeros(np.shape(N1arrloc), dtype=dtype)
comm.Allreduce(N1arrloc, N1arr, op=MPI.SUM)
We don't need to assemble the dense matrix. We can call Allreduce on the indices and data vectors of the sparse matrix.
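One hedged sketch of how this could look, using an allgather of the local COO triplets (rather than an Allreduce on index/data vectors) and letting SciPy sum duplicate entries; the names N1P and comm are taken from the test snippet above, and this is not code from the PR.

```python
# Sketch only: build the global sparse matrix from the local contributions
# without ever assembling a dense array.
import numpy as np
from scipy.sparse import coo_matrix

A_loc = N1P.tosparse().tocoo()                    # local contribution in COO format

# gather the (row, col, data) triplets from all ranks
triplets = comm.allgather((A_loc.row, A_loc.col, A_loc.data))
rows = np.concatenate([t[0] for t in triplets])
cols = np.concatenate([t[1] for t in triplets])
data = np.concatenate([t[2] for t in triplets])

# duplicate (row, col) entries are summed when converting to CSR,
# which reproduces the Allreduce(SUM) of the dense version
N1sp_glob = coo_matrix((data, (rows, cols)), shape=A_loc.shape).tocsr()
```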
assert isinstance(N1P, PowerLinearOperator)
assert isinstance(N1P.domain, StencilVectorSpace)
compare_arrays(N1P.dot(v0), np.matmul(N1arr, v0arr), rank)
I have the same comments as in test_tosparse_parallel.py. Since we assemble the dense matrices, we should compare the arrays directly instead of comparing their dot products with a vector.
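A possible direct comparison, with hypothetical names: Narr would be the assembled dense matrix of the base operator, and the exponent 3 is only an example; N1arr comes from the test snippet above.

```python
# Sketch of a direct matrix comparison instead of a dot-product check.
# `N1arr` is from the snippet above; `Narr` and the exponent are assumptions.
import numpy as np

expected = np.linalg.matrix_power(Narr, 3)   # expected dense matrix of the power operator
assert np.allclose(N1arr, expected)          # compare the matrices entry by entry
```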
Bpow = B**3
Bpowarr = Bpow.toarray()
assert isinstance(Bpow, PowerLinearOperator)
assert np.array_equal(np.matmul(Bpowarr, vbarr), Bpow.dot(vb).toarray())
I have the same comments as in test_toarray_parallel.py. Since we assemble the dense matrices, we should compare the arrays directly instead of comparing their dot products with a vector.
@pytest.mark.parametrize('p1', p1array)
@pytest.mark.parametrize('p2', p2array)

def test_square_block_basic(n1, n2, p1, p2, P1=False, P2=False):
The name of the test could be more descriptive, for instance test_toarray_powerlinearoperator_block.
assert isinstance(Sinv, InverseLinearOperator)

Itest = np.matmul(A1, Sinvarr)
assert np.allclose(Itest, np.eye(np.size(Itest[0])), atol=10**(-5))
Why would Itest be the identity? Isn't A1 different from S?
Default toarray and tosparse methods have been added to LinearOperator.
Parallel tests for these functions have also been written.
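For background, a generic default toarray/tosparse can in principle be assembled column by column, by applying the operator to the unit vectors of its domain. The sketch below only illustrates that idea; it assumes a matrix-free operator exposing a NumPy-based dot method and is not necessarily how this PR implements the methods.

```python
# Illustrative sketch only, not the PR's implementation: assemble dense/sparse
# matrices from an operator that only provides a dot(vector) method.
import numpy as np
from scipy.sparse import lil_matrix

def operator_toarray(op, m, n, dtype=float):
    """Assemble a dense m-by-n matrix from an operator `op` with a
    NumPy-based dot(vector) method (hypothetical interface)."""
    A = np.zeros((m, n), dtype=dtype)
    e = np.zeros(n, dtype=dtype)
    for j in range(n):
        e[j] = 1.0
        A[:, j] = op.dot(e)        # j-th column = op applied to the j-th unit vector
        e[j] = 0.0
    return A

def operator_tosparse(op, m, n, dtype=float, tol=0.0):
    """Same idea, but keep only entries with magnitude above `tol`."""
    A = lil_matrix((m, n), dtype=dtype)
    e = np.zeros(n, dtype=dtype)
    for j in range(n):
        e[j] = 1.0
        col = np.asarray(op.dot(e))
        for i in np.nonzero(np.abs(col) > tol)[0]:
            A[i, j] = col[i]
        e[j] = 0.0
    return A.tocsr()
```

With such defaults, any subclass that implements dot gets dense and sparse exports automatically, at the cost of one operator application per column.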
A special case of the toarray function is toarray_1d, implemented in PR #538.