No action at all (zero output) when evaluating a training result. #1
Comments
Hi applebull, did you just use the exact code, or did you have to modify a few things? Thanks.
I've found that the model isn't very stable. You can try evaluating different saved checkpoints (model_ep90, model_ep80, etc.); there's some luck involved. I think the "no action" case occurs when the model decides early on that selling is the optimal strategy and sticks with it, but it can't sell because nothing has been bought. I'll see if I can find another way to implement this constraint. You can also try changing the experience replay to random sampling by replacing the first four lines of expReplay with a single random.sample call.
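The exact replacement code got cut off in the comment above; going by the line quoted in the next reply, the change is presumably from "take the most recent batch_size transitions" to uniform random sampling. A standalone sketch (the memory contents here are dummy placeholders, not the repo's actual state tuples):

```python
import random
from collections import deque

# Hypothetical stand-in for the agent's replay memory; in the repo
# this would be self.memory inside the Agent class.
memory = deque(maxlen=1000)
for t in range(100):
    memory.append((t, "state", "action", "reward"))  # dummy transitions

batch_size = 32

# The original (assumed) code walks the last batch_size entries in order;
# this single line replaces that with uniform random sampling, which
# decorrelates the minibatch and often stabilizes Q-learning:
mini_batch = random.sample(memory, batch_size)
```

Sampling uniformly breaks the temporal correlation between consecutive transitions, which is the usual motivation for experience replay in DQN-style agents.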
Hi Edward, I have the same problem. I tried the change above ("mini_batch = random.sample(self.memory, batch_size)") but still got zero output. Any ideas? Thanks in advance.
Same problem here, no result: python evaluate.py ^GSPC_2011 model_ep1000
I actually found out about this repo via Siraj's video, which was the original I forked from; that said, I did issue a pull request to his. As stated above, you just need to issue a single buy, which I do on entry only, and then let the model predict the rest of the way. I may look at buying when a local minimum is reached within some window, but currently I just use a bool for the first iteration. I've only trained up to 200 epochs, so I still have more testing to do, but it seems to be working decently. I'll issue a pull request after more testing is done, but if you want to see the changes, my repo is here: https://github.com/xtr33me/Reinforcement_Learning_for_Stock_Prediction I also had to modify the sigmoid to handle larger inputs and avoid math.exp overflow errors. Unsure if this will help anyone, but it got me moving forward again.
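The overflow-safe sigmoid isn't spelled out in the comment; a common fix along those lines (a sketch, not necessarily the repo's exact code) is to branch on the sign of the input so math.exp is never called with a large positive argument:

```python
import math

def stable_sigmoid(x):
    # math.exp(-x) overflows for large negative x (e.g. x = -1000
    # means exp(1000), which raises OverflowError). Branching on the
    # sign keeps the exponent non-positive, so exp stays in (0, 1].
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)  # x < 0 here, so z is in (0, 1)
    return z / (1.0 + z)
```

Both branches compute the same mathematical function; only the arrangement differs, so the fix doesn't change the model's behavior on inputs that already worked.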
It is an interesting project, and I tried to run it on my computer based on your readme. This is what I did:
mkdir models
python train.py ^GSPC 20 100
python evaluate.py ^GSPC_2011 model_ep100
And I got the following output in the evaluation:
The agent did not do anything with the test data set...
I know 100 training episodes is not enough to produce a meaningful result, but I would expect insufficient training to yield a bad, money-losing strategy rather than no action at all.
My OS is macOS High Sierra. Do you think it is a problem with my Python environment, or just too little training? Have you had this problem before?
Thanks!