
Fix/small forecasting pipeline fixes #2011

Open
BelhsanHmida wants to merge 22 commits into main from
fix/small-forecasting-pipeline-fixes

Conversation


BelhsanHmida (Contributor) commented Mar 9, 2026

Description

This PR includes several small fixes and improvements to the forecasting pipeline jobs:

  • Fix the TypeError: Job.__init__() missing 1 required argument: 'connection' raised by the wrap-up job.
  • Improve the returned job information by including the number of jobs created together with the wrap-up job ID.
  • Update tests to reflect the updated job return structure.
  • Add a db session commit before the forecasting jobs are created, to prevent a "data_source not in database" error.
  • Include n_cycles in the forecast-count assertion in test_forecasting_pipeline.py, to correctly test how many forecasts are generated when there are multiple cycles.

These changes ensure that forecasting jobs run correctly and that job metadata returned by the pipeline is clearer and easier to interpret.


Sign-off

  • I agree to contribute to the project under Apache 2 License.
  • To the best of my knowledge, the proposed patch is not based on code under GPL or other license that is incompatible with FlexMeasures

Signed-off-by: Mohamed Belhsan Hmida <mohamedbelhsanhmida@gmail.com>

read-the-docs-community bot commented Mar 9, 2026

Documentation build overview

📚 flexmeasures | 🛠️ Build #31753891 | 📁 Comparing 60a85b3 against latest (c6bb60f)


🔍 Preview build

Files changed (2 in total): 📝 2 modified | ➕ 0 added | ➖ 0 deleted

File              Status
changelog.html    📝 modified
api/v3_0.html     📝 modified

@nhoening nhoening added this to the 0.31.2 milestone Mar 9, 2026
Flix6x added 7 commits March 9, 2026 13:22
Signed-off-by: F.N. Claessen <claessen@seita.nl>
Signed-off-by: F.N. Claessen <claessen@seita.nl>
Signed-off-by: F.N. Claessen <claessen@seita.nl>
Signed-off-by: F.N. Claessen <claessen@seita.nl>
…dated

Signed-off-by: F.N. Claessen <claessen@seita.nl>
…ters

Signed-off-by: F.N. Claessen <claessen@seita.nl>
…ycle job

Signed-off-by: F.N. Claessen <claessen@seita.nl>

Flix6x commented Mar 9, 2026

Thanks. I also pushed some changes to the test myself now, which now covers the case where more than 1 cycle is asked, thereby resulting in a wrap-up job being returned. We did not cover that yet. There was also a mistake in an if-statement in the test that resulted in us checking each item in an empty list of cycle_job_ids. See f26c41b.

The new test case unexpectedly failed on creating 3 forecasts instead of the expected 2 (one per cycle). Could you please investigate why? From inspecting the forecasts, I believe the last one doesn't make sense. Also the assert comment needs to be updated. None of the test cases lead to 4 forecasts per horizon, I think.

        assert (
            len(forecasts) == m_viewpoints * n_hourly_horizons * n_events_per_horizon
        ), f"we expect 4 forecasts per horizon for each viewpoint within the prediction window, and {m_viewpoints} viewpoints with each {n_hourly_horizons} hourly horizons"

Signed-off-by: Mohamed Belhsan Hmida <mohamedbelhsanhmida@gmail.com>
Signed-off-by: Mohamed Belhsan Hmida <mohamedbelhsanhmida@gmail.com>
… pipeline

Signed-off-by: Mohamed Belhsan Hmida <mohamedbelhsanhmida@gmail.com>
…ecasts

Signed-off-by: Mohamed Belhsan Hmida <mohamedbelhsanhmida@gmail.com>
Signed-off-by: Mohamed Belhsan Hmida <mohamedbelhsanhmida@gmail.com>
…rity

Signed-off-by: Mohamed Belhsan Hmida <mohamedbelhsanhmida@gmail.com>
BelhsanHmida (Contributor, Author) commented:
> Thanks. I also pushed some changes to the test myself now, which now covers the case where more than 1 cycle is asked, thereby resulting in a wrap-up job being returned. We did not cover that yet. There was also a mistake in an if-statement in the test that resulted in us checking each item in an empty list of cycle_job_ids. See f26c41b.
>
> The new test case unexpectedly failed on creating 3 forecasts instead of the expected 2 (one per cycle). Could you please investigate why? From inspecting the forecasts, I believe the last one doesn't make sense. Also the assert comment needs to be updated. None of the test cases lead to 4 forecasts per horizon, I think.

I investigated this.

The extra forecast comes from viewpoints within each cycle. In this case:

  • start = 2025-01-08 00:00+02:00
  • end = 2025-01-09 00:00+02:00
  • retrain-frequency = 12h
  • forecast-frequency = 12h
  • max-forecast-horizon = 1h

So the pipeline runs 2 cycles, but each cycle also generates 2 viewpoints, because the predict window is still 24h and the predict pipeline uses predict_period_in_hours / forecast_frequency = 24 / 12 = 2 viewpoints per cycle.

That gives these forecasted events:

  • cycle 1, viewpoint 1: event at 2025-01-08 00:00+02:00
  • cycle 1, viewpoint 2: event at 2025-01-08 12:00+02:00
  • cycle 2, viewpoint 1: event at 2025-01-08 12:00+02:00
  • cycle 2, viewpoint 2: event at 2025-01-09 00:00+02:00

So the “last” forecast does make sense: it is the second viewpoint of the second cycle. The overlap is at 2025-01-08 12:00+02:00, which is forecast once from the first cycle and then again from the second cycle with a newer belief time.
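The schedule described above can be sketched in a few lines of plain Python. This mirrors the arithmetic explained in this thread, not the actual pipeline code:

```python
from datetime import datetime, timedelta, timezone

# Cycles start retrain_frequency apart; each cycle generates
# predict_period / forecast_frequency viewpoints, spaced forecast_frequency apart.
tz = timezone(timedelta(hours=2))
start = datetime(2025, 1, 8, 0, 0, tzinfo=tz)
end = datetime(2025, 1, 9, 0, 0, tzinfo=tz)
retrain_frequency = timedelta(hours=12)
forecast_frequency = timedelta(hours=12)
predict_period = timedelta(hours=24)

n_cycles = int((end - start) / retrain_frequency)                # 2 cycles
viewpoints_per_cycle = int(predict_period / forecast_frequency)  # 2 viewpoints

events = []
for c in range(n_cycles):
    cycle_start = start + c * retrain_frequency
    for v in range(viewpoints_per_cycle):
        events.append(cycle_start + v * forecast_frequency)

# events now holds the 4 forecasted event starts, with
# 2025-01-08 12:00+02:00 appearing twice (once per cycle)
# and the last event landing exactly at end.
```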

This is also why:

  • with most_recent_beliefs_only=True, you see 3 rows, because the two beliefs for 2025-01-08 12:00+02:00 collapse to the most recent one
  • with most_recent_beliefs_only=False, you see all 4 saved forecasts
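As an illustration of that collapsing behaviour (a toy sketch, not the timely-beliefs implementation; the values are made up):

```python
# (event_start, belief_label, value) tuples for the 4 saved forecasts,
# listed in belief-time order, so later entries are newer beliefs.
beliefs = [
    ("2025-01-08 00:00", "cycle 1", 1.0),
    ("2025-01-08 12:00", "cycle 1", 2.0),
    ("2025-01-08 12:00", "cycle 2", 2.1),  # newer belief about the same event
    ("2025-01-09 00:00", "cycle 2", 3.0),
]

# Keep only the most recent belief per event, mimicking
# most_recent_beliefs_only=True conceptually.
most_recent = {}
for event_start, belief_label, value in beliefs:
    most_recent[event_start] = (belief_label, value)

# 4 saved rows, but only 3 events once older beliefs are collapsed.
```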

To more correctly test which forecasts are being saved to the sensor, I set most_recent_beliefs_only=False in sensor.search_beliefs and introduced the number of cycles into the forecast-count assertion.
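A sketch of what the updated assertion could look like, using the variable names from the snippet quoted earlier plus a hypothetical n_cycles; the exact form in the PR may differ:

```python
# Hypothetical test parameters for the scenario discussed in this thread.
n_cycles = 2               # pipeline cycles in the [start, end) window
m_viewpoints = 2           # viewpoints per cycle
n_hourly_horizons = 1      # max-forecast-horizon = 1h
n_events_per_horizon = 1

# Stand-in for the rows returned by sensor.search_beliefs(most_recent_beliefs_only=False).
forecasts = ["f1", "f2", "f3", "f4"]

assert (
    len(forecasts)
    == n_cycles * m_viewpoints * n_hourly_horizons * n_events_per_horizon
), (
    f"we expect one forecast per horizon per viewpoint per cycle: "
    f"{n_cycles} cycles with {m_viewpoints} viewpoints each"
)
```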


Flix6x commented Mar 11, 2026

Thank you for the clear explanation. That almost clears it up for me completely. Two lingering doubts:

Why does the second cycle have a viewpoint exactly at the end? That feels odd to me. The only case where that would actually make sense to me is when forecasting an instantaneous sensor (for which the last event in the predict period is exactly at the end).

And, if the end is meant to be included, why does the first cycle not include the end? That looks inconsistent, doesn't it?


BelhsanHmida commented Mar 11, 2026

> Thank you for the clear explanation. That almost clears it up for me completely. Two lingering doubts:
>
> Why does the second cycle have a viewpoint exactly at the end? That feels odd to me. The only case where that would actually make sense to me is when forecasting an instantaneous sensor (for which the last event in the predict period is exactly at the end).
>
> And, if the end is meant to be included, why does the first cycle not include the end? That looks inconsistent, doesn't it?

Yes, true: the last viewpoint we should see is end - sensor_resolution. I'm looking into this.
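A half-open [start, end) sketch of viewpoint generation would naturally keep the last forecastable event at or before end - sensor_resolution (illustrative only, not the pipeline code; assumes a non-instantaneous sensor with a fixed resolution):

```python
from datetime import datetime, timedelta, timezone

tz = timezone(timedelta(hours=2))
start = datetime(2025, 1, 8, 0, 0, tzinfo=tz)
end = datetime(2025, 1, 9, 0, 0, tzinfo=tz)
forecast_frequency = timedelta(hours=12)
sensor_resolution = timedelta(hours=1)

# Half-open convention: generate viewpoints strictly before end,
# so no forecasted event starts at (or after) end itself.
viewpoints = []
t = start
while t < end:
    viewpoints.append(t)
    t += forecast_frequency

# The last viewpoint stays within the window, at or before
# the last forecastable event start, end - sensor_resolution.
```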
