Joined March 27, 2025 · Karma 7

Posted by Abdullah

Hi @stubbi and @charlynesmith, we are still facing an error while submitting on the portal.

Posted by Abdullah

I believe the same error pops up on the phone. I have not taken a screenshot on the phone, but it has happened there too.

Posted by Abdullah (edited)

Hi @stubbi, @paf, and @charlynesmith, we ran into issues while submitting our proposal. We downloaded the competitor agreement and the questionnaire onto our device a week ago, but they were deleted during some ongoing formatting work. We created our technical proposal solution in Overleaf, using the reference template provided on the platform. Now, while submitting our files, we tried downloading the competitor agreement and the questionnaire again, but the link redirects us to an XML error. We spent many hours switching between devices and checking for internet issues, but the same error appears each time we try to download the files to fill in our details.
We have attached a screenshot of the error and the URL for your reference. Since this is a technical issue that we cannot fix ourselves, we would like to request a one-day extension for the submission, to 19 April 2025, once the issue is resolved. Our technical proposal solution is already ready; we just have to download and fill in the competitor agreement and the questionnaire. We would also like to thank the Aqora team and the EPRI team for creating such an innovative challenge. We were able to come up with a proposal solution that addresses the problem very well in terms of scalability and innovation.
Error.png

Posted by Abdullah

Hi @stubbi and @yhaddad, good news: we were finally able to upload the submission. We trained on a small subset of the training data and used the entire dataset for testing, which gave us a good score of 85. We then tested the final upload to check whether the training would crash, and successfully got a score of 81. We believe that all the time we spent diagnosing the error came down to testing with the entire dataset: taking all the test samples at once caused a memory overload, which was the reason for the crash. We tried many methods to fix this until one finally worked. Keeping the deadline in mind, and for fair evaluation, we did not make any changes to our QML or classical model. For reference, we already submitted our model notebook to @stubbi via email before the deadline, in case cross-checking of the model is needed. We only adjusted the data used for training and applied methods to reduce our model's testing time, which shows that our model is robust and scalable. Thank you for your patience, advice, and the opportunity to participate in this challenge. Awaiting the results. Thank you. Regards
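The subsampling workaround described above can be sketched as follows. This is a minimal illustration only: the array names and sizes are assumptions, not taken from the actual challenge notebook.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-ins for the challenge training arrays
# (names and sizes are illustrative assumptions).
X_train = rng.normal(size=(100_000, 4))
y_train = rng.normal(size=100_000)

# Train on a small random subset to avoid the memory overload,
# while the full dataset stays available for evaluation.
n_subset = 20_000
idx = rng.choice(len(X_train), size=n_subset, replace=False)
X_small, y_small = X_train[idx], y_train[idx]

print(X_small.shape)  # (20000, 4)
```

With `replace=False`, each training example is drawn at most once, so the subset is a true random sample of the full training set.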

Posted by Abdullah

Hi @stubbi, we have tried everything possible on our end. The only solution now would be to run it on an HPC device or a GPU. Is it possible for us to send you the notebook so you can diagnose the error and upload it on our behalf?

Posted by Abdullah (edited)

Hi @yhaddad and @stubbi, we are still facing the dimensionality error. We did what @yhaddad suggested and also took a smaller data sample for training, but it still throws an error while uploading. The only solution for us is to get some compute or HPC power to run the training on all the data samples, because our device keeps crashing during testing. Alternatively, @stubbi, we can send the file to you if you are able to upload it. Our model works well with a 20k data sample, giving us a score of 82.
Thank you.
ERROR.jpeg

Posted by Abdullah (edited)

Hi @yhaddad and @stubbi, thank you both for your valuable input.
TrialHEP.png
Initially, we trained our model using a subset of the data for both training and testing, and this yielded a promising R² score of around 81. As we gradually increased the dataset size for both training and testing—while keeping the model architecture unchanged—we continued to observe good results. However, we noticed a significant increase in training time, reaching up to 2 hours per epoch. Interestingly, the MSE score decreased significantly during this phase, indicating better learning.
We also experimented with training on the full dataset (around 1,000,000 samples) while using a smaller validation subset. This setup resulted in an R² score of 74, which still demonstrates good performance.
From these observations, we conclude that our model is scalable for larger datasets, even with a relatively small number of qubits. It also shows potential to become more robust when slightly deepened by adding a few more layers. So far, we have intentionally kept the model architecture unchanged to ensure fair evaluation before the challenge deadline.
That said, it was not specified in the problem statement that we need to train on the entire dataset for the final metrics on the challenge backend, which we believe was the reason for the dimensionality error. With additional compute resources, we could have reduced the training time and likely resolved the final submission issue.
Since the challenge backend doesn’t allow us to explicitly limit the evaluation to a data subset, we are adopting the approach suggested by @yhaddad:
We will train the model on a small data sample and evaluate it on the entire dataset. We are also trying the second method of splitting the train dataset into training and validation partitions. We will update you with the results shortly.
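The two approaches above can be sketched in a few lines. This is a hedged illustration under assumed names (`X`, `y`) and sizes, not the actual challenge code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for the challenge training set
# (names and sizes are assumptions).
X = rng.normal(size=(100_000, 8))
y = rng.normal(size=100_000)

# Approach 1: train on a small random sample, evaluate on the full dataset.
train_idx = rng.choice(len(X), size=20_000, replace=False)
X_small, y_small = X[train_idx], y[train_idx]
# ... fit the model on (X_small, y_small), then score it on all of (X, y).

# Approach 2: split the training set into train and validation partitions.
perm = rng.permutation(len(X))
split = int(0.8 * len(X))
X_tr, y_tr = X[perm[:split]], y[perm[:split]]
X_val, y_val = X[perm[split:]], y[perm[split:]]
```

Approach 1 keeps training cheap while still reporting a metric over the whole dataset; approach 2 gives an honest held-out estimate without ever touching the full set during training.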

Posted by Abdullah

Hi @stubbi, no, as far as I am aware there is no need for pre-processing. We are currently experimenting with the data to see how well our model generalizes to the dataset.

Posted by Abdullah (edited)

Hi @julian Jannes, we are currently testing our model with the entire dataset on a GPU in Colab, since GPU is supported there. For now, after two epochs the MSE loss is decreasing significantly with the same model, which shows the model is learning well, and we are expecting a good score at the end of this training run. As I mentioned, the training time is very long even on a GPU: about 2 hours per epoch on a Tesla P100. I will inform you of the final R² score tomorrow, once training is over. For submitting on the leaderboard, we might have to use all the data, which could be time-consuming, or the metrics on the backend might have to be changed to pass the dimensionality error.

Posted by Abdullah

Hi @stubbi, almost all the errors are resolved. We are able to test and get a score, and even the uploading was working well. But later on, an error pops up reporting a dimensionality error, because we took a small subset of data samples during training while the metric/evaluation function on the backend takes the whole dataset into account. If we take the whole dataset in our code, training takes a very long time.
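A plausible mechanism for the dimensionality error described above: the backend metric compares the submitted predictions against labels for the whole dataset, so a prediction vector produced from only a subset has the wrong shape. A toy sketch (the sizes and the metric are illustrative assumptions, not the actual backend code):

```python
import numpy as np

# Toy stand-ins for the backend's labels and a subset-sized prediction vector
# (sizes are illustrative assumptions).
y_true_full = np.zeros(1_000_000)  # backend scores against the whole dataset
y_pred_subset = np.zeros(20_000)   # predictions covering only a 20k subset

def mse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # A shape check like this is a typical source of a "dimensionality" error:
    # the metric rejects prediction vectors that don't match the label vector.
    if y_true.shape != y_pred.shape:
        raise ValueError(f"dimension mismatch: {y_true.shape} vs {y_pred.shape}")
    return float(np.mean((y_true - y_pred) ** 2))

try:
    mse(y_true_full, y_pred_subset)
except ValueError as err:
    print(err)  # dimension mismatch: (1000000,) vs (20000,)
```

If this is the cause, the fix is to always emit one prediction per sample in the full evaluation set, even when the model was trained on a subset.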