This paper compares machine learning methods for fake news detection, covering traditional algorithms (Naive Bayes, Logistic Regression, SVM, Decision Tree, K-Nearest Neighbors, Linear Regression), deep learning models (CNN, LSTM, RNN, GRU, BERT, RoBERTa), and ensemble methods (Random Forest, XGBoost, AdaBoost, CatBoost, Bagging, Boosting, Stacking, Voting Classifier). Performance is compared using accuracy, precision, recall, and F1-score across different datasets, taking language, modality, and label granularity into account. The results show that no single model performs best in all settings; hybrid and ensemble techniques frequently strike the best balance among robustness, scalability, and accuracy. Persistent challenges include limited interpretability, narrow language coverage, high computational cost, and dataset imbalance. Ethical risks such as bias, false positives, and potential misuse are also discussed. Explainable AI, cross-domain adaptation, and real-time multilingual detection systems are identified as the primary directions for future research.
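
The following is a minimal, illustrative sketch of the kind of comparison described above, not the paper's actual pipeline: it evaluates a Naive Bayes baseline, Logistic Regression, and a soft-voting ensemble on a tiny hypothetical corpus, reporting accuracy, precision, recall, and F1-score. The example texts, labels, and train/test split settings are placeholder assumptions for illustration only.

```python
# Hedged sketch: compare two classic models and a voting ensemble on a toy,
# entirely hypothetical "fake news" corpus using the four metrics from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import VotingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Placeholder corpus: 0 = real, 1 = fake. Repeated so a stratified split is possible.
texts = [
    "scientists confirm vaccine passes all clinical trials",
    "celebrity secretly replaced by clone, insiders claim",
    "central bank announces interest rate decision today",
    "miracle fruit cures every known disease overnight",
    "local council approves new public transport budget",
    "aliens endorse presidential candidate in leaked video",
    "university publishes peer-reviewed climate study",
    "drinking bleach boosts immunity, anonymous post says",
] * 10
labels = [0, 1, 0, 1, 0, 1, 0, 1] * 10

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, stratify=labels, random_state=42
)

# TF-IDF features feed each classifier; the ensemble soft-votes over predicted probabilities.
models = {
    "Naive Bayes": make_pipeline(TfidfVectorizer(), MultinomialNB()),
    "Logistic Regression": make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000)),
    "Voting Ensemble": make_pipeline(
        TfidfVectorizer(),
        VotingClassifier(
            estimators=[("nb", MultinomialNB()),
                        ("lr", LogisticRegression(max_iter=1000))],
            voting="soft",
        ),
    ),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(f"{name:>20}  acc={accuracy_score(y_test, pred):.2f}  "
          f"prec={precision_score(y_test, pred):.2f}  "
          f"rec={recall_score(y_test, pred):.2f}  "
          f"f1={f1_score(y_test, pred):.2f}")
```

On a real benchmark, the same loop would be run over the full set of traditional, deep learning, and ensemble models with proper cross-validation; the scores printed here are meaningless beyond demonstrating the metric computation.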