This paper focuses on standard error estimation in fixed-effects (FE) models when there is serial correlation in the error process. Applied researchers have often ignored the problem, probably because major statistical packages do not estimate robust standard errors in FE models. Not surprisingly, this can lead to severe bias in the standard error estimates, both in hypothetical and real-life situations. The paper gives a systematic overview of the different standard error estimators and the assumptions under which they are consistent (in the usual large-N, small-T asymptotics). One possible reason the robust estimators are not used more often is a fear of their poor finite-sample properties. The most important results of the paper, based on an extensive Monte Carlo study, show that those fears are in general unwarranted. I also present evidence that it is the absolute size of the cross-sectional sample that primarily affects the finite-sample behavior, not its size relative to the time-series dimension. This indicates good small-sample behavior even when N ≈ T. I introduce a simple direct test, analogous to that of White (1980), for the restrictive assumptions behind the estimators. Its finite-sample properties are good, except for low power in very small samples.