WIP: lk: Address memory aliasing issue#265
vishals4gh wants to merge 1 commit into littlekernel:master from
Conversation
By default, most platforms map all of DRAM at boot time, creating scenarios where the same physical address, initially mapped with cacheable attributes, may be allowed to get mapped with non-cacheable attributes via a different VM range. This is discouraged on architectures like ARM and makes it tedious to ensure a coherent view of DRAM with other bus masters. This change ensures the following for the qemu-arm platform:
1) Adds support for arenas that are not mapped into kernel space after platform setup.
2) All malloc calls use pages from arenas that are already mapped into kernel space.
3) vmm_alloc* APIs use arenas that are not already mapped to any virtual address range.
4) vmm_free_region, for memory allocated via vmm_alloc* APIs with cacheable mappings, cleans the caches so the pages can be reused by the next vmm_alloc* call, which may map them with different attributes.
5) Memory for an unmapped arena is initially allowed to be mapped, and is then unmapped later during platform initialization.
This avoids remapping the same physical memory to different virtual address ranges with different memory attributes, and effectively ensures that at any given time, memory from a vmm arena is owned by a single entity with particular memory attributes.
Caveats:
1) The paddr_to_kvaddr API will not work for physical addresses allocated using vmm_alloc* APIs.
ToDo:
1) Address the shortfalls of the current implementation.
2) Update other platforms to allow unmapped RAM arenas if this implementation is OK to pursue.
Contributor
Author
Rather than a merge request, this is more of a review request to seek feedback about whether:
Thanks,
Signed-off-by: vannapurve <vannapurve@google.com>