Maybe this should go in a thread of its own, and my apologies if it has been covered elsewhere, but once again we have a situation where a user is asking about boiler sizing, and the guidance being offered is to do a heat loss computation. Above and beyond the numerous reasons why sizing a coal boiler too close to the calculated heat loss can produce unsatisfactory performance, doesn't the focus on heat loss overlook the fact that the actual load the boiler faces is determined by the installed radiation it powers (plus DHW load, if applicable)?

If years or decades of experience have shown that "x" amount of radiation is needed to produce satisfactory heat, what would be the point of supplying less heat than that? Why introduce the complexity, hidden assumptions, and potential situation-specific inaccuracy of a heat loss program when you have simple, reliable empirical data in the form of the amount of installed radiation and the user's degree of satisfaction with its adequacy? The boiler can't deal directly with the building's heat loss - only with the installed radiation.

So why aren't heat loss computations saved for questions about the adequacy of the installed radiation itself, and why can't boiler sizing in existing buildings be driven by a straightforward computation from the square feet (sf) of installed radiation, plus any DHW load? A rough sketch of what I have in mind follows.
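To make the arithmetic concrete, here is a minimal sketch of the kind of computation I mean. The output ratings (roughly 240 BTU/hr per sf EDR for steam, and on the order of 150 BTU/hr per sf for hot water at about 170 F average water temperature) are commonly quoted industry figures, not anything taken from a particular boiler manual; the function name, the pickup allowance, and the example numbers are all just my own illustration.

# Rough radiation-based sizing sketch - all figures illustrative
EDR_OUTPUT = {"steam": 240, "hot_water": 150}  # BTU/hr per sf of installed radiation (commonly quoted ratings)

def required_boiler_output(sf_radiation, system="steam", dhw_load_btu=0, pickup_factor=1.15):
    """Estimate the boiler output needed to satisfy the installed radiation.

    sf_radiation  -- total square feet of installed radiation (EDR)
    system        -- "steam" or "hot_water"
    dhw_load_btu  -- any domestic hot water load, in BTU/hr
    pickup_factor -- illustrative allowance for piping and pickup losses
    """
    radiation_load = sf_radiation * EDR_OUTPUT[system]
    return radiation_load * pickup_factor + dhw_load_btu

# Example: 600 sf of steam radiation plus a 20,000 BTU/hr DHW allowance
print(required_boiler_output(600, "steam", dhw_load_btu=20_000))  # about 185,600 BTU/hr

That is the whole computation: installed radiation times its rated output, a little extra for piping and pickup, plus DHW. No assumptions about insulation, infiltration, or design temperatures required.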