The second method might work for daily resolution, but what about higher resolutions like 1 minute?
I ran into this issue myself, and my idea is to have an AWS Lambda function that prepares the historical data and dumps it into an S3 bucket; my algorithm would then download the file of historical data. What do you think?
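A minimal sketch of the algorithm side of that idea, assuming the Lambda writes a plain CSV of timestamped closes to the bucket (the URL and column names here are hypothetical, and the string below stands in for the downloaded file body):

```python
import csv
import io

# Hypothetical location of the Lambda-generated file; in a live algorithm you
# would fetch this over HTTPS with your preferred client before parsing.
HIST_URL = "https://my-bucket.s3.amazonaws.com/history.csv"  # assumption

def parse_history(csv_text: str) -> dict:
    """Parse CSV rows of (timestamp, close) into a {timestamp: close} dict."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row["timestamp"]: float(row["close"]) for row in reader}

# Simulated file body standing in for the S3 download:
sample = "timestamp,close\n2024-01-02T09:31:00,187.15\n2024-01-02T09:32:00,187.22\n"
history = parse_history(sample)
```

Keeping the file a flat CSV keeps the Lambda simple and makes the parse step cheap even at minute resolution.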
Thank you for your comments! You could use this method for minute or even tick resolutions, unless you require a massive amount of data for warmup, in which case you could store it somewhere else and use a different loading method. So far, the largest object I've stored on QC had over 350k elements and only took up 4MB of space.
Great article btw, I prefer articles like this that focus on pain points instead of long tutorials!