Enable user to take ownership of MPI #722
Conversation
…unction name to comm_getMpiComm
…ing for subcomm compiled
…for SubComm, because of course it enforces CXX
…declare that they take ownership of MPI
… updated setComm for init only workflow
…ything but it makes MPI library implementers less nervous
…ith user owned MPI
On the plus side, a distributed QFT benchmark circuit runs perfectly correctly:

void initCustomQuESTEnv(int useDistrib, int useGpuAccel, int useMultithread);
void initCustomMpiQuESTEnv(int useDistrib, int userOwnsMpi, int useGpuAccel, int useMultithread);
Do we not want a new initialiser so it's really clear where there is a different setup? Also this probably breaks compatibility for older software.
That is a new initialiser! This is effectively a C interface, so no overloading allowed.
Apologies, I had misread this; I thought it was replacing initCustomQuESTEnv, hence the concern. Once again a reading comprehension issue, must do better.
The difference is more subtle than I might have liked, but the alternative would be an initialiser that was way too long!
initCustomMpiCommQuESTEnv?
Looks good, not run yet. My only comment is above, and it raises a wider point: we need to think about how we manage change in the programming interface, as we probably want to stabilise as much as possible and only add to existing functionality.
Agreed. If we're being proper about it, breaking changes to the API require a new major version release, and I don't think we have the time or the inclination currently.
Remaining testing required:
iarejula-bsc left a comment
Overall, looks good to me, really like the idea! I was wondering if this design would allow running quregs in parallel, each with their own communicator (different envs for each one, maybe?), or if that would be impossible with the current implementation. I think that would be a highly interesting feature. As my team is trying to apply malleability to QuEST, that would be a huge win for us.
A few small comments below, but note this is an untested review.
void comm_init() {
void comm_init(int userOwnsMpi) {
Should userOwnsMpi be a bool instead of int? It's only ever used as a boolean. Is there a repo convention for this, or is it to maintain C compatibility?
Yes, but repo convention is to use int for historical (C) reasons
Oh bool is totally fine internally, and should be used. You'll see it all through the codebase, like here. A previous point of contention was whether we can expose bool in a C-visible header, as we did here, so that's a license to use it in all user-visible stuff too.
Note initCustomQuESTEnv accepts int flags because they are not bools; they can be -1 to specify "choose this automatically".
Ah, thanks Tyson. I will update to bool in that case.
MPI_Init(NULL, NULL);
MPI_Comm_dup(MPI_COMM_WORLD, &mpiCommQuest);
What if we simplify this by passing the communicator directly as a parameter instead of userOwnsMpi? If the user passes a communicator, QuEST uses it; if null/not provided, we initialize MPI and default to MPI_COMM_WORLD. This removes the ambiguity of who owns MPI and makes the API more explicit:
void comm_init(MPI_Comm userComm = MPI_COMM_NULL) {
    if (userComm == MPI_COMM_NULL) {
        MPI_Init(NULL, NULL);
        MPI_Comm_dup(MPI_COMM_WORLD, &mpiCommQuest);
    } else {
        MPI_Comm_dup(userComm, &mpiCommQuest);
    }
}
I think this is what I had in mind originally, but if I recall correctly the issue is that it brings MPI_Comm into the public headers, which we need to avoid as it causes headaches for users who are linking a non-MPI application.
I will double-check though!
// set mpiCommQuest to user provided communicator
comm_setMpiComm(userQuestComm);

// initialise QuEST around that communicator
initCustomMpiQuESTEnv(USE_DISTRIB, USER_OWNS_MPI, useGpuAccel, useMultithread);
Does this work correctly? initCustomMpiQuESTEnv calls comm_init, where we overwrite mpiCommQuest with MPI_COMM_WORLD, regardless of whether the user has ownership:
void comm_init(int userOwnsMpi) {
    ...
    // QuEST must initialise MPI if the user does not own it
    if (!userOwnsMpi)
        MPI_Init(NULL, NULL);
    MPI_Comm_dup(MPI_COMM_WORLD, &mpiCommQuest);
iarejula-bsc left a comment
In subcommunicator.cpp:17, userQuestComm appears to be duplicated into mpiCommQuest before the normal environment validation runs. If validation later fails under a throwing or custom error handler, the duplicated communicator may be left in mpiCommQuest with no cleanup, which is an MPI resource leak.
Haven't had time to trace this fully so it may be a non-issue depending on execution order, but probably worth a quick check.
// safely callable before MPI initialisation, but NOT after comm_end()
int isInit;
MPI_Initialized(&isInit);
There are now two distinct initialized states that the code conflates into one:

- MPI runtime initialized: MPI_Init() has been called
- QuEST communicator initialized: mpiCommQuest != MPI_COMM_NULL

comm_isInit() only checks the first, so comm_isInit() returning true is not a safe guard against calling MPI with a null communicator.
Precise crash path:

1. MPI_Init()

   State after:
   - MPI runtime: initialized
   - mpiCommQuest: MPI_COMM_NULL
   - QuEST env: not initialized

2. initCustomMpiQuESTEnv(0, 1, 0, 0)
   └─ validateAndInitCustomQuESTEnv(
          useDistrib=0, userOwnsMpi=1,
          useGpuAccel=0, useMultithread=0
      )
      ├─ validate_envNeverInit(...)
      ├─ validate_newEnvDeploymentMode(...)
      ├─ autodep_chooseQuESTEnvDeployment(...)
      ├─ if (useDistrib)
      │     └─ comm_init(userOwnsMpi)
      │
      │  ** skipped: useDistrib == 0 **
      │
      └─ validate_newEnvDistributedBetweenPower2Nodes(...)
         └─ comm_getNumNodes()
            ├─ comm_isInit()
            │     └─ MPI_Initialized(&isInit)
            │        returns true   <-- runtime is up, guard passes
            │
            └─ MPI_Comm_size(mpiCommQuest, &numNodes)   CRASH

   State at crash:
   - MPI runtime: initialized (ok)
   - mpiCommQuest: MPI_COMM_NULL (!)
   - QuEST env: not initialized (!)

CRASH: MPI_Comm_size called with MPI_COMM_NULL -> abort
I suggest that comm_isInit() should distinguish between "MPI runtime is initialized" and "QuEST communicator is initialized", or that all accessors should check mpiCommQuest != MPI_COMM_NULL before using it.
// set mpiCommQuest to user provided communicator
comm_setMpiComm(userQuestComm);

// initialise QuEST around that communicator
initCustomMpiQuESTEnv(useDistrib, userOwnsMpi, useGpuAccel, useMultithread);
I think there is a problem with the order of operations, or you must ensure MPI is initialized: initCustomMpiCommQuESTEnv() duplicates the user communicator before verifying that the MPI runtime exists at all. In the case where MPI has not been initialized, the guard will be skipped:
void comm_setMpiComm(MPI_Comm newComm) {
    if (mpiCommQuest != MPI_COMM_NULL) // false
        error_commDoubleSetMpiComm();
    MPI_Comm_dup(newComm, &mpiCommQuest); // <-- ... before any MPI_Init() check
}
We think it should allow that, yes -- also useful for our CQ project. @JPRichings will explicitly add it to his testing to be sure. Thank you for catching some of the initialisation bugs -- I will have a more detailed look and try to fix them later this week! Much appreciated.
Nice! I was wondering whether using concurrent quregs at the same time could be an issue outside of MPI. I have two main concerns:
Note to add single GPU multi-Qureg testing with multiple rank-1 communicators to confirm global GPU functionality implications here
I'm happy to review this too; and eager to, since realising my comm software architecture was limiting! It would be good to first see an example of the imagined user flow. Does anyone have an example flow handy? I already (very weakly) lean to breaking API of
I think this is achievable internally, though with some substantial refactoring - it does not seem admitted by the current communicator-singleton design of the existing codebase, and of this PR. I could imagine a "stateful" user flow (to mirror the existing statefulness of …):

// existing API
initQuESTEnv(); // calls MPI_Init

// new API enabled only with compile-time flag, like COMPILE_SUBCOMM
MPI_Comm questComm = getDefaultQuESTCommunicator();

// user-fiddling with MPI directly (let's say myComm has smaller size than WORLD)
MPI_Comm myComm;
MPI_Comm_split(questComm, ...);

// new API enabled only with compile-time flag, like COMPILE_SUBCOMM
setQuESTCommunicator(myComm);

// existing API
Qureg smallQureg = createQureg(...); // adopts myComm, distributed between e.g. 4 ranks
// smallQureg would NOT have buffers allocated on WORLD nodes outside myComm

setDefaultQuESTCommunicator();
Qureg bigQureg = createQureg(...); // adopts internal comm, distributed between all ranks, e.g. 16

Here, we've left all existing API unchanged, but introduced new functions only exposed in a new header like Oliver's. We can achieve the above even without attaching MPI types to user-facing structs (which is illegal). One (semi-gross?) way is to attach only a communicator id which indexes a dynamic list (or dictionary?) in …

@otbrown ignoring any considerations of internal complexity, does such a design make sense? My rationale is that if we have to sully the file/library separation to enable custom MPI, we should get as much "bang for our buck" and try to enable all things an MPI superuser may wish to do. Would the stateful design above achieve that? The statefulness is just an alternative to adding an MPI_Comm arg to every distributed-object creator, of course.

EDIT: noting a minor user pitfall: …
ARCHER2 GPU environment build issue: my CMake-foo is not strong enough to figure out how this has failed on ARCHER2 and not in your own testing.
I have my own QFT example working on ARCHER2: 4-node run result. Code for future reference / check I've not done a stupid thing:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>
#include "quest.h"
#include <mpi.h>
double calcPhaseShift(const int);
void qftQubit(Qureg, const int, const int);
void qft(Qureg, const int);
void writeState(const int * const, const size_t);
void getMonoTime(struct timespec *);
double getElapsedSeconds(const struct timespec * const,
const struct timespec * const);
int main (int argc, char *argv[])
{
int num_qubits;
MPI_Init(&argc, &argv);
if (argc < 2) {
num_qubits = 4;
} else if (argc > 2) {
printf("Error: Too many arguments! Usage: ./qft $NUMBER_OF_QUBITS\n");
return -1;
} else {
// argc == 2
num_qubits = atoi(argv[1]);
if (num_qubits < 1) {
printf("Error: num_qubits < 1, you requested %d qubits\n", num_qubits);
return -1;
}
}
struct timespec run_start, run_stop;
struct timespec qft_start, qft_stop;
double run_time, qft_time;
getMonoTime(&run_start);
// Borrowing from https://rookiehpc.org/mpi/docs/mpi_comm_create/index.html
// initialise MPI
//MPI_Init(&argc, &argv);
// Extract info about comm world
int size;
MPI_Comm_size(MPI_COMM_WORLD, &size);
printf("size comm world: %d \n", size);
if(size < 3)
{
printf("Please run this application with at least 3 processes.\n");
MPI_Abort(MPI_COMM_WORLD, EXIT_FAILURE);
}
MPI_Group world_group;
MPI_Comm_group(MPI_COMM_WORLD, &world_group);
// Keep only the processes 0 and 1 in the new group.
int ranks[2] = {0, 1};
MPI_Group new_group;
MPI_Group_incl(world_group, 2, ranks, &new_group);
// Create the new communicator from that group of processes.
MPI_Comm new_communicator;
MPI_Comm_create(MPI_COMM_WORLD, new_group, &new_communicator);
// Test new communicator
int my_rank;
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
int value;
if(new_communicator == MPI_COMM_NULL)
{
// I am not part of the new communicator, I can't participate to that broadcast.
printf("Process %d did not take part to the new communicator broadcast.\n", my_rank);
}
else
{
// I am part of the new communicator, I can participate to that broadcast.
MPI_Bcast(&value, 1, MPI_INT, 0, new_communicator);
printf("Process %d took part to the new communicator broadcast.\n", my_rank);
}
if(new_communicator == MPI_COMM_NULL)
{
printf("Process %d did not take part in running QueST.\n", my_rank);
}
else
{
// initialise QuEST with sub communicator
initCustomMpiCommQuESTEnv(new_communicator, 0, 1);
// report on QuEST env
reportQuESTEnv();
// create quantum register
Qureg qureg = createQureg(num_qubits);
// report on qreg
reportQureg(qureg);
if (!getQuESTEnv().rank)
printf("Simulating %d-Qubit QFT\n\n", num_qubits);
// initialise input register to |0..0>
initZeroState(qureg);
// report model
if (!getQuESTEnv().rank) {
reportQuregParams(qureg);
printf("\n");
reportQuESTEnv();
printf("\n");
}
getMonoTime(&qft_start);
// apply QFT to input register
qft(qureg, num_qubits);
getMonoTime(&qft_stop);
if (!getQuESTEnv().rank)
printf("Total number of gates: %d\n", (num_qubits * (num_qubits+1))/2 );
// results test
qreal prob_0 = calcProbOfBasisState(qureg, 0);
if (!getQuESTEnv().rank) {
printf("Measured probability amplitude of |0..0> state: %g\n", prob_0);
printf("Calculated probability amplitude of |0..0>, C0 = 1 / 2^%d: %g\n",
num_qubits, 1.0 / pow(2,num_qubits));
printf("Measuring final state: (all probabilities should be 0.5)\n");
}
int * state = (int *) malloc(num_qubits * sizeof(int));
qreal * probs = (qreal *) malloc(num_qubits * sizeof(qreal));
for (int n = 0; n < num_qubits; ++n) {
state[n] = applyQubitMeasurementAndGetProb(qureg, n, probs+n);
}
if (!getQuESTEnv().rank) {
for (int n = 0; n < num_qubits; ++n)
printf("Qubit %d measured in state %d with probability %g\n",
n, state[n], probs[n]);
printf("\n");
printf("Final state:\n");
writeState(state, num_qubits);
}
// free resources and stop timer
free(state);
free(probs);
destroyQureg(qureg);
getMonoTime(&run_stop);
qft_time = getElapsedSeconds(&qft_start, &qft_stop);
run_time = getElapsedSeconds(&run_start, &run_stop);
if (!getQuESTEnv().rank)
printf("QFT run time: %gs\nTotal run time: %gs\n", qft_time, run_time);
// finalize QuESTEnv late because we need MPI rank
finalizeQuESTEnv();
}
return 0;
}
double calcPhaseShift(const int M) {
return ( M_PI / pow(2, (M-1)) );
}
void qftQubit(Qureg qureg, const int num_qubits, const int QUBIT_ID) {
int control_id = 0;
double angle = 0.0;
applyHadamard(qureg, QUBIT_ID);
int m = 2;
for (int control = QUBIT_ID+1; control < num_qubits; ++control) {
angle = calcPhaseShift(m++);
applyTwoQubitPhaseShift(qureg, control, QUBIT_ID, angle);
}
return;
}
void qft(Qureg qureg, const int num_qubits) {
for (int qid = 0; qid < num_qubits; ++qid)
qftQubit(qureg, num_qubits, qid);
return;
}
void writeState(const int * const STATE, const size_t num_qubits) {
printf("|");
for (size_t n = 0; n < num_qubits; ++n) printf("%d", STATE[n]);
printf(">\n");
return;
}
void getMonoTime(struct timespec * time) {
clock_gettime(CLOCK_MONOTONIC, time);
return;
}
double getElapsedSeconds(const struct timespec * const start,
const struct timespec * const stop) {
const unsigned long BILLION = 1000000000UL;
const unsigned long long TOTAL_NS =
BILLION * (stop->tv_sec - start->tv_sec)
+ (stop->tv_nsec - start->tv_nsec);
return (double) TOTAL_NS / BILLION;
}
As noted by the ticked items in #722 (comment), the following tests needed to confirm this pull request is working have been done on ARCHER2 CPU multinode:
Outstanding tests and issues:
@JPRichings super useful, tyvm! In that example, you call … explicitly before initialising QuEST. It's my understanding that those args never actually affect the MPI env; MPI just conveniently removes MPI-specific args for us. So is there any utility in letting the user call …
I have called …

I agree, for a pure QuEST application with multiple Quregs, each with their own subComm, controlling this in QuEST with MPI initialised in …

I think we ultimately need both for different use cases, but I am a bit wary of the amount of time we have for additional changes and debugging before our planned release. My concern is that we rush this and wish we had thought about it more carefully.
What about when … (As I mentioned above, this depends upon …) So it remains valid to call …

Certainly we shouldn't rush through any drafted designs - we can always expose experimental stuff in non-main branches 🙏
I think I agree with your reasoning, and I hadn't twigged that the …

If I have understood correctly, in your proposal QuEST always knows about two communicators, current and default, and this will lead to the developer above keeping track of all the comms created for each Qureg themselves, and we might end up needing to provide helpers for this. I feel like having a default, and then setting the MPI comm for a given Qureg as the Qureg is created, would be easier for the user. I think setting the current comm will get lost by the user, and mistakes will ensue when it was set 9 files over and forgotten about. (I don't like how …)

I think what this could buy us is the ability to clone a Qureg, or check, based on the default comm, that Quregs are not overlapping, and I think this helps us with communication between Quregs later down the line, as we know which MPI communicators are involved from the Quregs themselves.

Not sure what I am suggesting makes sense; it is personal preference, mainly informed by …

Note: thinking about this a little more, we aren't likely to end up with different running environments using different compute targets for each Qureg, so we should separate the QuEST init (which deals with how we are going to simulate) from on what bit of that available resource we plan to use for a given Qureg. I do think that Oliver's current code gets us to stage one, and we should maybe focus the next full release on bottoming out this implementation.
Yep for sure. Regardless of any of the API changes, we should move internally to a custom communicator.
In the design I propose, each …

The nuance of my proposal is all in the API. In lieu of creating a new "custom initialiser", or new creators for each distributed object, the use of … will bind …

Indeed there can and should exist helper functions like …
I think we're imagining the same internal design!
Hmm, was "aren't" a critical typo here 😅? In any case, each …
I don't think we should release any API we intend to imminently break or rework entirely. But I suspect we can refine the interface of Oliver's current single-communicator design to be released in v4.3, in a way that doesn't require an API break when we extend it to Qureg-specific communicators. One way is to use … after …

If/when we later extend to …

Note that what we cannot do is create a …
I agree we should not release then immediately rework features; I was hoping we could find an initial subset that gets us a minimal implementation that we can clean up for this release, and then keep exploring this. I was concerned the level of change was increasing.

OK on your point around using …

I think my remaining confusion is: how do you propose picking the communicator set by …
Fixed: #722 (comment) ARCHER2 GPU compilation environment is always fun.
@TysonRayJones Apologies, I was only weakly following along over the weekend on my phone! I did have a chat with James today about your suggestions, and the summary of my thoughts is:
Every week I say I will get to this soon, and then I get completely derailed 😅 I truly hope to have implemented the additional validation proposed by the end of this week. I've got a couple of lengthy train journeys this week which should help with that. I must insist however that we push the more significant work proposed above to a separate issue for post-v4.3 contemplation!
…o avoid unexpected test failure due to range checking when compiled in Debug configuration
Co-authored-by: iarejula-bsc <inigo.arejula@bsc.es>
Closes #712.
- … MPI_COMM_WORLD everywhere. Instead it uses mpiCommQuest (which will very often just be a duplicate of MPI_COMM_WORLD). This aligns with best practice for library developers, as it helps avoid clashes with user MPI calls.
- … mpiCommQuest stuff was broken, essentially any QuEST program would also be broken.
- … MPI_COMM_WORLD. This interface is guarded behind compile-time flags as it requires bringing the MPI header into the interface, which means all consumers of libQuEST need to include MPI as well. This interface is not yet documented or tested. I think this is tolerable in the short term as it should only appeal to advanced users.

Speaking of tests! Test results on my laptop:
and min_example output:
There is a caveat here: distributed tests on Cirrus keep failing, but this appears to be true of devel as well as this branch, so I'll do some more digging and raise it as a separate issue once I have a better idea of what is going on.