INTRODUCTION: Reinforcement learning makes an action that produced a satisfying effect in a particular situation more likely to occur again in that situation. This process is essential to adaptive behavior; however, actual choices often appear to diverge from what simple reinforcement learning would predict.

AIM(S): Here we investigate how the time intervals between actions affect the choices made.

METHOD(S): Groups of C57BL/6J mice were housed in IntelliCages with ad libitum access to water and chow and could access bottles containing a reward: a saccharin solution (0.1% w/v), alcohol (4% w/v), or a mixture of the two. The probability of receiving a reward in two of the cage corners changed to 0.9 or 0.3 every 48 h over a period of ~33 days.

RESULTS: In most animals, the odds of repeating the choice of a corner increased if that choice had previously been rewarded. Interestingly, the time elapsed since the previous choice also increased the probability of repeating that choice, irrespective of the previous outcome. Behavioral data were fitted with a series of reinforcement learning models based on Q-learning. Introducing an interval-dependent adjustment yielded a better description of the observed behavior, and the size of the time effect differed with the type of reward offered.

CONCLUSIONS: We find that, at longer time intervals, repeating the previous choice becomes more probable, irrespective of the previous outcome. Thus, at least in this specific case, time may make a past mistake more likely to be repeated.
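The kind of model described above can be sketched as a standard Q-learning rule combined with an interval-dependent "stay" bonus in the choice function. The sketch below is a minimal illustration under assumed parameter names and an assumed functional form (a logarithmic time bonus); the abstract does not specify the authors' exact model, so none of this should be read as their implementation.

```python
import math

# Hypothetical parameters; values and names are assumptions for illustration.
ALPHA = 0.1   # learning rate
BETA = 3.0    # softmax inverse temperature
TAU = 0.5     # weight of the time-dependent stay bonus (assumed form)

def update_q(q, action, reward):
    """Standard Q-learning update applied to the chosen action."""
    q = dict(q)
    q[action] += ALPHA * (reward - q[action])
    return q

def choice_probs(q, last_action, dt):
    """Softmax over Q-values, plus a bonus for repeating the last
    choice that grows with the elapsed interval dt (arbitrary units).
    This makes a repeat more likely at long intervals, irrespective
    of whether the last choice was rewarded."""
    bonus = {a: (TAU * math.log1p(dt) if a == last_action else 0.0) for a in q}
    logits = {a: BETA * q[a] + bonus[a] for a in q}
    m = max(logits.values())
    exps = {a: math.exp(v - m) for a, v in logits.items()}
    z = sum(exps.values())
    return {a: e / z for a, e in exps.items()}

# Usage: after one rewarded visit to a corner, the probability of
# returning there rises with the time elapsed since that visit.
q = {"corner_A": 0.0, "corner_B": 0.0}
q = update_q(q, "corner_A", 1.0)                      # rewarded visit
p_short = choice_probs(q, "corner_A", dt=0.1)["corner_A"]
p_long = choice_probs(q, "corner_A", dt=10.0)["corner_A"]
assert p_long > p_short   # longer interval -> repeat more probable
```

In this form the interval-dependent term acts on the choice stage rather than on the learned values, which is one simple way to capture a time effect that is independent of the previous outcome.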