I see easyWDW and Touring Plans as two sides of the same coin. Just as there are both theoretical and empirical physicists, macro- and micro-economics, and quantitative and qualitative methods, there are multiple ways to approach crowd prediction.
Touring Plans uses a very quantitative approach. They rely heavily on things that can be measured or stated precisely and for which they have past data, which means that at times they're consciously excluding data they can't quantify. Building statistical models is very difficult, and unlike election pollsters, they don't have the resources of an organization such as Gallup, and they're trying to predict things six months out. Statistical models have the potential to be very reliable for things like elections or weather forecasting, but no one expects them to work that far in advance. Plus, TP is missing (I assume) some extremely useful data, such as the rate of bookings and ticket sales, data that Disney has access to. They might infer it from things like ADR availability, but that's tricky and difficult. Finally, they're really predicting just one aspect of crowds: wait times for rides. That's important, but it excludes things like meet-and-greet waits, QS restaurant lines, and the overall feel of crowds for parades and such. I think they err in calling their predictions a crowd calendar rather than a ride wait calendar. But they're clearly not making things up. To whatever extent they're wrong (and I don't have the data to judge any of these crowd sites), the most that could be said is that their approach isn't successful or their models are wrong. Accusing them of making things up, in the absence of any evidence that they're lying about their methods, is uncalled for.
easyWDW and other sites use a very qualitative approach, on more of a macro level than a micro one. They look at the factors that may influence decisions, correlate them (informally, but still correctly) against reported results, and reason out sound conclusions. They can more easily incorporate events, such as ROL being announced and then deferred, into their predictions. Historically, they've relied on the premise that only a small percentage of visitors actually follow their recommendations (so does TP), but in this age of social media, I don't know that the premise still holds. One could claim that what they're doing is educated guessing (which would make the "making things up" charge more literally accurate), but that would be an injustice if meant as a criticism. Sometimes educated guesses are not only correct, but they're also the most reasonable way to make predictions.
If any of these sites, or an independent party, were to do a sound comparative analysis of the reliability of their predictions, that would be good. But it's not possible as long as the only result data we have are reports on social media or the personal judgments of their own employees. In the absence of such an analysis, it's best that they avoid throwing stones at each other, as their houses are all made of glass.