In both supervised and reinforcement learning settings, some learning problems are hard because of their high computational or sample complexity. Under standard models, researchers have shown that these costs render certain classes of problems provably unlearnable. We propose new learning models that incorporate iterative data augmentation and human intervention to overcome these barriers. In the supervised setting, we extend the standard learning model with an iterative, round-based learning structure and introduce a benevolent, knowledgeable teaching agent. With these adjustments, we show theoretically that certain unlearnable classes become efficiently teachable. We improve on prior results for previously studied classes and establish new results for an additional class in our model. We also provide evidence that non-expert human users can play the role of the teaching agent effectively. Building on the intuitions from the supervised setting, we develop a reinforcement-learning model that likewise exploits an iterative, round-based learning structure. In a simple grid world, we train agents to learn arbitrarily complex behaviors induced by temporally defined goals, and we show that reinforcement provided by non-expert humans suffices to elicit the desired behavior from the learning agent.