It is a common misconception and a huge source of disappointment with ML -- without proper validation of the whole model-building procedure (method selection + parameter tuning + feature selection + fitting), no amount of data or magic tricks can assure you that there is no overfitting. Even a single hold-out test is risky, because it gives you no idea of the variance of the expected accuracy.
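
One way to validate the whole procedure at once is nested cross-validation: wrap feature selection, parameter tuning, and fitting in a single estimator, and evaluate that estimator with an outer cross-validation loop. A minimal sketch with scikit-learn (the dataset, selector, classifier, and parameter grid here are illustrative choices, not anything prescribed above):

```python
# Nested CV sketch: the outer loop validates the *entire* procedure
# (feature selection + parameter tuning + fitting), so no step ever
# peeks at the data it is scored on.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Illustrative synthetic data.
X, y = make_classification(n_samples=300, n_features=40,
                           n_informative=5, random_state=0)

# The whole model-building procedure lives in one estimator.
pipe = Pipeline([("select", SelectKBest(f_classif)),
                 ("clf", SVC())])
inner = GridSearchCV(pipe,
                     param_grid={"select__k": [5, 10, 20],
                                 "clf__C": [0.1, 1, 10]},
                     cv=3)  # inner loop: tuning + selection

# Outer loop: one score per fold, so you see the spread a single
# hold-out test would hide.
scores = cross_val_score(inner, X, y, cv=5)
print(scores.mean(), scores.std())
```

The spread of the five outer-fold scores is exactly the accuracy-variance information a single hold-out split cannot give you.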