loader/src/aarch64/cpus.c (13 additions, 0 deletions)
@@ -8,6 +8,7 @@
#include <stdint.h>

#include "smc.h"
#include "../arch.h"
#include "../cpus.h"
#include "../cutil.h"
#include "../loader.h"
@@ -110,6 +111,14 @@ void arm_secondary_cpu_entry(int logical_cpu, uint64_t mpidr_el1)

    plat_save_hw_id(logical_cpu, mpidr_el1);

    int r = arch_mmu_enable(logical_cpu);
    if (r != 0) {
        LDR_PRINT("ERROR", logical_cpu, "failed to enable MMU: ");
        puthex32(r);
        puts("\n");
        goto fail;
    }

    start_kernel(logical_cpu);

fail:
@@ -175,6 +184,10 @@ int plat_start_cpu(int logical_cpu)
    /* zero out what was here before */
    sp[1] = 0;

    /* clean up the cache line containing sp before use by secondary core */
    asm volatile("dc cvac, %0" :: "r"(sp) : "memory");
Contributor:

I guess this is an argument that we should make the loader use outer shareable instead of inner shareable; then I think this should be fine.

Reviewer:

I don't think that would solve any coherency problems when the caches are disabled on the secondary cores.

Contributor Author (bruelc), Apr 10, 2026:

> I don't think that would solve any coherency problems when the caches are disabled on the secondary cores.

Caches are enabled early in arm_secondary_cpu_entry(). Before that, the only data that must be coherent is the parameter read from the stack by arm_secondary_cpu_entry_asm() (without this I read 0 as the logical_cpu parameter, which fails). Instructions should be fine.

We could conservatively clean the entire cache here, but that seems like overkill. Am I missing anything, or do you see any other possible source of incoherence before caches are enabled in this sequence?

(edit: arm_secondary_cpu_entry_asm() reads the stack, not the PSCI call)
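
A minimal sketch of the core 0 side of this ordering, assuming illustrative names (secondary_sp, target_mpidr, entry_pa and psci_cpu_on() are placeholders, not the loader's real identifiers):

#include <stdint.h>

/* Hypothetical PSCI CPU_ON wrapper: (target MPIDR, entry point, context id). */
extern int psci_cpu_on(uint64_t target_mpidr, uint64_t entry_pa, uint64_t context_id);

static void start_secondary_sketch(uint64_t *secondary_sp, int logical_cpu,
                                   uint64_t target_mpidr, uint64_t entry_pa)
{
    /* Core 0 (caches on): store the argument the secondary core will read
     * from its stack before it has enabled its own caches. */
    secondary_sp[0] = (uint64_t)logical_cpu;

    /* Clean the line to the Point of Coherency so the secondary core's
     * uncached read observes the value, and order it before the SMC. */
    asm volatile("dc cvac, %0" :: "r"(secondary_sp) : "memory");
    asm volatile("dsb sy" ::: "memory");

    /* The secondary core starts at entry_pa with MMU and caches off and
     * picks up logical_cpu from the stack, as arm_secondary_cpu_entry_asm()
     * does in the loader. */
    psci_cpu_on(target_mpidr, entry_pa, (uint64_t)secondary_sp);
}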

Contributor (midnightveil), Apr 10, 2026:

Apologies, I didn't notice this was for the stack, which probably needs a cache clean here yes.

What I had thought of was that print_lock is a variable shared across cores that won't be in the same shareability domain. I probably need to recheck the manual here; it might be that a "dsb sy" (full system barrier) would be OK when the memory is mapped ISH as opposed to OSH, but I also don't know if that's what the compiler atomics expect and whether the instructions they emit work for that case.

Admittedly this isn't an issue with this PR at all...

Contributor Author (bruelc):

No problem. It's best to discuss the details now :). I think the print_lock exclusive accesses are in the same shareability domain, because they are accessed after enable_mmu() on all cores, with the same MMU/cache attributes. Also, we enable both ISH and OSH shareability in the translation tables.
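
For context, a hedged sketch of the kind of locking being discussed, assuming print_lock is taken with the GCC/Clang __atomic builtins (the loader's actual locking code may differ); the exclusive/atomic sequences the compiler emits only behave as expected if every core maps the lock with matching shareability and cacheability attributes:

static int print_lock = 0;

static void print_lock_acquire(void)
{
    /* Spin until the previous value was 0, i.e. until we took the lock. */
    while (__atomic_exchange_n(&print_lock, 1, __ATOMIC_ACQUIRE) != 0) {
        /* busy-wait */
    }
}

static void print_lock_release(void)
{
    __atomic_store_n(&print_lock, 0, __ATOMIC_RELEASE);
}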

Reviewer:

> I'm lost. Is the scenario you describe with my dc cvac w/a, or with the early enable_mmu fix?

I meant this PR as it is at this moment, without early MMU enable or other changes.

My question is why that works, as it triggers the corner case I'm concerned about. (Cached data shadowing uncached writes.)

> using my w/a the sequence is:
> [...]
> that's probably why I see it OK at my end;

I don't know what you mean by "w/a", but the pop I mean is the return from the function call, not retrieving the arguments passed from core 0; that is always fine.

If you mean enabling caches without enabling the MMU, that should work for L2 as that's PIPT. Not entirely sure how that interacts with a VIPT L1 cache though, that might be implementation defined.

> now if there is an issue about the cache not being invalidated after the clean, I can use "dc civac" instead of "dc cvac"

No, that would only hide the issue, as speculative reads at the wrong moment have the same effect. Better to always trigger the scenario so it breaks immediately when something changes, instead of once in a blue moon.
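
For reference, the two maintenance operations being contrasted; both operate by virtual address to the Point of Coherency (the helper names are illustrative):

static inline void clean_line(const void *addr)
{
    asm volatile("dc cvac, %0" :: "r"(addr) : "memory");    /* clean only */
}

static inline void clean_and_invalidate_line(const void *addr)
{
    asm volatile("dc civac, %0" :: "r"(addr) : "memory");   /* clean and invalidate */
}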

Reviewer:

> My question is why that works, as it triggers the corner case I'm concerned about. (Cached data shadowing uncached writes.)

It should only work if, when returning from the last call that enables the MMU, no (important) registers were pushed to the stack and only LR is used to return. If LR got pushed when entering the MMU enable function, then on leaving, if we pop that value from the cache, LR should contain an invalid (stale cached) value. That's okay as long as we don't do another return, but it seems somewhat precarious. E.g. it may only work when compiling with optimisations enabled.

Same for other registers, but with so many registers on AArch64 that's unlikely, so it's mostly the return stack that's of concern.

That's why I said above that writing to the stack isn't safe with the MMU disabled. I don't mean the arguments passed from core 0 to the secondary cores, I mean stack writes done by the secondary cores themselves.

Contributor Author (bruelc), Apr 14, 2026:

OK, sorry, you were talking about the ABI pop; I was still thinking of the 'logical_cpu' pop... I see your point now.

It might be a stupid question, but how can we have a cache conflict between the main core's stack and another core's? They are not at the same address (not even in the same cache line), and if they did collide because of cache associativity, one of them would be evicted.

Contributor Author (bruelc), Apr 14, 2026:

...or the clean and invalidate of the cache clears out L2 too?

I think this is the case: doing a full cache clean enforces what ARM calls the PoC, and this is called from el2_mmu_enable -> el2_mmu_disable -> flush_dcache.

So I now see the sequence as:

Core 0: Write data to core 1's stack, bringing it into L1/L2.
Core 0: Clean, but not invalidate, the cache line (this PR) and start core 1.
Core 1: Call the function that cleans and invalidates the cache and enables the MMU. -> PoC (*)
Core 1: Pop registers.

(*) Any data cleaned to the PoC is guaranteed to be visible.

Reviewer:

> It might be a stupid question, but how can we have a cache conflict between the main core's stack and another core's? They are not at the same address (not even in the same cache line).

It's not a conflict between the main core's stack and the other cores' stacks. It's a conflict in value between the cached view and the uncached view of the other cores' stacks.

> Looks like just doing a full clean cache before jumping to secondary core is the way to enforce what ARM calls PoC

Yes, and that's fine. That's part of what I meant with "retrieving the arguments passed from core 0, that's always fine", as that's easily made coherent with the cache clean.

The incoherence is introduced by the other core writing any data to uncached memory, while the same cache line is in any other cache. Those writes bypass the cache altogether. The writes that are most concerning are pushes to stack.

If this happens, after enabling the cache, the core will suddenly see the cached value.

Now, I think the reason it works in practice is because flush_dcache does a clean and invalidate by set/way for all cache levels in el1/2_mmu_disable, which is the deepest function. At that point all values stored on stack are made coherent. Very importantly, it doesn't do any other memory writes (directly or via push) between the invalidate and the cache and MMU enable.

If it did just a clean, we should hit this issue and it probably would crash when returning from one of the functions called.
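
For reference, a hedged sketch of a clean+invalidate by set/way over all data/unified cache levels up to the Level of Coherency, roughly what a flush_dcache-style routine does (system-register field layout per the Arm ARM; this is illustrative, not the loader's actual code):

#include <stdint.h>

static void flush_dcache_sketch(void)
{
    uint64_t clidr;

    /* Make sure earlier stores have completed before the maintenance loop. */
    asm volatile("dsb sy" ::: "memory");

    asm volatile("mrs %0, clidr_el1" : "=r"(clidr));
    unsigned loc = (clidr >> 24) & 0x7;              /* Level of Coherency */

    for (unsigned level = 0; level < loc; level++) {
        unsigned ctype = (clidr >> (3 * level)) & 0x7;
        if (ctype < 2)                               /* no D/unified cache at this level */
            continue;

        /* Select this level's data/unified cache and read its geometry. */
        asm volatile("msr csselr_el1, %0\n\tisb" :: "r"((uint64_t)level << 1));
        uint64_t ccsidr;
        asm volatile("mrs %0, ccsidr_el1" : "=r"(ccsidr));

        unsigned line_shift = (ccsidr & 0x7) + 4;          /* log2(line size in bytes) */
        unsigned ways = ((ccsidr >> 3) & 0x3ff) + 1;
        unsigned sets = ((ccsidr >> 13) & 0x7fff) + 1;
        unsigned way_shift = (ways > 1) ? __builtin_clz(ways - 1) : 31;

        for (unsigned set = 0; set < sets; set++) {
            for (unsigned way = 0; way < ways; way++) {
                uint64_t op = ((uint64_t)way << way_shift)
                            | ((uint64_t)set << line_shift)
                            | ((uint64_t)level << 1);
                asm volatile("dc cisw, %0" :: "r"(op) : "memory");
            }
        }
    }

    asm volatile("dsb sy\n\tisb" ::: "memory");
}

With a flush like this at the deepest call, the later pops observe coherent data, which matches the explanation above for why the current sequence works in practice.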

asm volatile("dsb sy" ::: "memory");

/* Arguments as per 5.1.4 CPU_ON of the PSCI spec.

§5.6 CPU_ON and §6.4 describes that:
loader/src/loader.c (7 additions, 9 deletions)
@@ -100,15 +100,6 @@ static int print_lock = 0;

void start_kernel(int logical_cpu)
{
    LDR_PRINT("INFO", logical_cpu, "enabling MMU\n");
    int r = arch_mmu_enable(logical_cpu);
    if (r != 0) {
        LDR_PRINT("ERROR", logical_cpu, "failed to enable MMU: ");
        puthex32(r);
        puts("\n");
        for (;;) {}
    }

    LDR_PRINT("INFO", logical_cpu, "jumping to kernel\n");

#ifdef CONFIG_PRINTING
@@ -175,6 +166,13 @@ int main(void)
    puthex32(plat_get_active_cpus());
    puts("\n");

    r = arch_mmu_enable(0);
    if (r != 0) {
        LDR_PRINT("ERROR", 0, "failed to enable MMU: ");
        puthex32(r);
        fail();
    }

    for (int cpu = 1; cpu < plat_get_active_cpus(); cpu++) {
        r = plat_start_cpu(cpu);
        if (r != 0) {