Implement EL1 MMU linearmap #407
Conversation

build_prs

Job is started, see https://github.com/vivoblueos/kernel/actions/runs/25142977751.

✅ All jobs completed successfully, see https://github.com/vivoblueos/kernel/actions/runs/25142977751.

build_prs

Job is started, see https://github.com/vivoblueos/kernel/actions/runs/25144067446.

build_prs

Job is started, see https://github.com/vivoblueos/kernel/actions/runs/25144203899.

❌ Job failed. Failed jobs: check_format (failure), build_and_check_boards (failure), see https://github.com/vivoblueos/kernel/actions/runs/25144067446.

✅ All jobs completed successfully, see https://github.com/vivoblueos/kernel/actions/runs/25144203899.
Please draw the current kernel's memory layout. |

[AI-generated review summary] I found one boot/SMP concern that should be addressed before merging.
Please either keep a synchronization point for both page tables before secondary cores program TTBR0/TTBR1, or document the boot-order guarantee that makes it impossible for secondary cores to enter this code until CPU0 has completed both initializations. One related question: the boot path now comments out
You're right. I removed the synchronization code because I found that qemu_virt64_aarch64 only boots CPU0 initially; the other cores won't come up until CPU0 executes secondary_cpu_setup. That's why I took out those sync routines: I thought they were unnecessary.
You're right. I commented them out during initial development because I assumed the EL2 MMU was disabled, and that EL2 would encounter errors when accessing symbol addresses with a linear offset. However, I found that nearly all the code in virt_init() is compiled into PC-relative addressing instructions by rustc, so this code won't cause errors during the initialization phase. But if an exception ever causes the CPU to enter EL2 while the EL2 MMU is still disabled, these offset exception-entry addresses will trap the CPU in a nested EL2 exception infinite loop. That said, during testing I've confirmed that no normal operations trigger EL2 exceptions under the current setup, so I've decided to uncomment the code.

build_prs

Job is started, see https://github.com/vivoblueos/kernel/actions/runs/25156221089.

❌ Job failed. Failed jobs: build_and_check_boards (failure), see https://github.com/vivoblueos/kernel/actions/runs/25156221089.

build_prs

Job is started, see https://github.com/vivoblueos/kernel/actions/runs/25156635635.

❌ Job failed. Failed jobs: build_and_check_boards (failure), see https://github.com/vivoblueos/kernel/actions/runs/25156635635.

build_prs

Job is started, see https://github.com/vivoblueos/kernel/actions/runs/25157043440.

✅ All jobs completed successfully, see https://github.com/vivoblueos/kernel/actions/runs/25157043440.
```rust
static mut TABLE_MANAGER: PageTableManager = PageTableManager::new();

#[used]
#[link_section = ".data"]
```
Why is it necessary to explicitly specify placement in the .data section? Won't it be automatically placed there?
Yes, LINEARMAP_MANAGER needs to be explicitly placed in the .data section.
Since the initial contents of PageTableManager::new() are all zero, Rust/LLVM tends to place it in the .bss section instead of .data.
I verified this with `nm out/shell_test/bin/shell | grep LINEARMAP_MANAGER`: by default, it is placed into the .bss section.
Once LINEARMAP_MANAGER is in .bss, then after the linear mapping is set up, execution reaches kernel/kernel/src/boot.rs via the call chain init -> init_runtime -> init_bss(), which zeroes out the entire .bss section.
This resets the EL1 page tables and eventually causes kernel boot failure.
Therefore, LINEARMAP_MANAGER must be explicitly placed in the .data section.
```rust
#[cfg(target_board = "qemu_virt64_aarch64")]
pub(crate) const KERNEL_VIRT_START: u64 = u64::MAX << KERNEL_VA_BITS;
#[cfg(not(target_board = "qemu_virt64_aarch64"))]
pub(crate) const KERNEL_VIRT_START: u64 = 0;
```
Using kconfig for this would be a better choice. It's best not to hardcode assumptions about the target_board in kernel code. If it's needed temporarily, please add a FIXME note indicating that it will be changed later.
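For illustration, the reviewer's suggestion might look something like this as a Kconfig entry; apart from KERNEL_VIRT_START itself, the symbol names and defaults here are guesses, not the actual board configuration:

```kconfig
# Hypothetical sketch: let each board pick its kernel VA base via kconfig
# instead of hardcoding target_board checks in kernel source.
config KERNEL_VIRT_START
	hex "Kernel virtual address base"
	default 0xFFFFFF8000000000 if BOARD_QEMU_VIRT64_AARCH64
	default 0x0
```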
Auto code review: No issues found. Checked for bugs and CLAUDE.md compliance. 🤖 Generated with Claude Code
```rust
    asm::isb_sy();
}

pub fn el1_add_linearmap() {
```
There was a problem hiding this comment.
Done. I renamed:
enable_el1_mmu -> init_el1_enable_mmu
el1_add_linearmap -> init_el1_boot_linearmap
Force-pushed bd01482 to be9323d.
I added a memory layout diagram:

```rust
// ============================================================================
// Linear Mapping Layout (AArch64 EL1)
// ============================================================================
//
// KERNEL_VA_BITS = 39 → 512 GB virtual address space
// LINEAR_OFFSET  = 0xFFFF_FF80_0000_0000 (u64::MAX << 39)
//
// Virtual Address Space (TTBR1, 39-bit)
// ┌─────────────────────────────────────────────────────────┐ 0xFFFF_FFFF_FFFF_FFFF
// │                                                         │
// │                      (unmapped)                         │
// │                                                         │
// │ ┌─────────────────────────────────────────────────────┐ │ 0xFFFF_FF81_0000_0000 (LINEAR_OFFSET + 4 GB)
// │ │ Linear-mapped DRAM (4 × 1 GB L1 blocks)             │ │
// │ │   PA 0x0000_0000_0000 → VA 0xFFFF_FF80_0000_0000    │ │
// │ │   PA 0x0000_4000_0000 → VA 0xFFFF_FF80_4000_0000    │ │
// │ │   PA 0x0000_8000_0000 → VA 0xFFFF_FF80_8000_0000    │ │
// │ │   PA 0x0000_C000_0000 → VA 0xFFFF_FF80_C000_0000    │ │
// │ └─────────────────────────────────────────────────────┘ │ 0xFFFF_FF80_0000_0000 ← KERNEL_VIRT_START
// └─────────────────────────────────────────────────────────┘ 0xFFFF_FF80_0000_0000 (LINEAR_OFFSET)
//
// Translation:
//   VA = PA + LINEAR_OFFSET  (kernel_phys_to_virt)
//   PA = VA - LINEAR_OFFSET  (kernel_virt_to_phys)
//
// ============================================================================
```

build_prs

Job is started, see https://github.com/vivoblueos/kernel/actions/runs/25721648189.
1. disable virtualization
2. rename enable_mmu -> enable_el1_mmu
3. mmu.rs: add el1_add_linearmap()
4. invoke el1_add_linearmap when booting blueos on aarch64 platform
5. after eret, jump to 'jump_to_high_va'
6. add 'ldr x1, ={stack_end}'
7. change jump_to_high_va: br x16 jump to aarch64::init
8. add KERNEL_OFFSET in qemu_virt64_aarch64/link.x
9. add kernel_virt_to_phys / kernel_phys_to_virt
10. change virtio: virt_to_phys/phys_to_virt

Renames: enable_el1_mmu -> init_el1_enable_mmu, el1_add_linearmap -> init_el1_boot_linearmap

❌ Job failed. Failed jobs: build_and_check_boards (failure), see https://github.com/vivoblueos/kernel/actions/runs/25721648189.
Force-pushed b697912 to ef75e98.

build_prs

Job is started, see https://github.com/vivoblueos/kernel/actions/runs/25722028627.

✅ All jobs completed successfully, see https://github.com/vivoblueos/kernel/actions/runs/25722028627.

Summary

This PR implements an EL1 MMU linear mapping for the AArch64 kernel, using `TTBR1_EL1` to map physical memory into the kernel's high virtual address space. It also updates the AArch64 boot path so the kernel switches from the early identity-mapped execution environment into the high virtual address space after `eret`. The default kernel virtual base for `qemu_virt64_aarch64` is now `u64::MAX << KERNEL_VA_BITS`.

Main Changes

- Add `CONFIG_KERNEL_VIRT_START` for AArch64.
- `mmu.rs`: add `kernel_phys_to_virt()`, `kernel_virt_to_phys()`, `init_el1_enable_mmu()`, and `init_el1_boot_linearmap()`.
- Program `TTBR1_EL1` for the kernel linear map; configure `T1SZ`/`T0SZ`.
- Jump to `jump_to_high_va()` after `eret`.
- Update the `qemu_virt64_aarch64` linker script to link the kernel at the high virtual address while loading it at the physical address.

Memory Layout

The kernel now uses a high virtual linear map for AArch64 EL1.

Files Changed

- `arch/arm/arm64/Kconfig`: add `CONFIG_KERNEL_VIRT_START`.
- `kernel/src/arch/aarch64/mmu.rs`: `TTBR1_EL1` walks for the kernel linear map.
- `kernel/src/arch/aarch64/mod.rs`: `jump_to_high_va()`; `TTBR0_EL1` clearing for `qemu_virt64_aarch64`.
- `kernel/src/boards/qemu_virt64_aarch64/config.rs`
- `kernel/src/boards/qemu_virt64_aarch64/init.rs`: change `DRAM_BASE` to a kernel virtual address.
- `kernel/src/boards/qemu_virt64_aarch64/link.x`: use `AT(...)` to preserve the physical load address.
- `kernel/src/devices/virtio.rs`

Notes

Some AArch64 boot behavior is currently specific to `qemu_virt64_aarch64`, especially the `TTBR0_EL1` clearing logic. FIXME comments were added to mark this as board-specific code that should be generalized when additional AArch64 platforms are adapted.