Although automatic text analysis tools are available, little research has examined their application to reading assessments. When text complexity features such as the ratio of academic vocabulary and the frequency of transitions are computed automatically and used in test development, the text selection and revision procedure can become faster and more transparent, complementing test developers' expertise. To obtain empirical evidence for the utility of automatic text complexity features, this study explored the role of automatically derived text complexity features in an intensive English program (IEP) reading assessment. Based on previous literature and the testing context, 11 text complexity features representing lexical, syntactic, and semantic variables were selected; their values were computed automatically with three text analysis tools (Lexile, the Compleat Lexical Tutor, and Coh-Metrix), and the extent to which they accounted for IEP reading item difficulty was examined. Results showed that seven complexity features correlated significantly with reading item difficulty. Stepwise multiple regression analyses showed that a set of four lexical and semantic text complexity features (i.e., word length, total word count, Latent Semantic Analysis (LSA), and connectives) explained about 45% of the variance in reading item difficulty. The findings are discussed with regard to limitations and implications for both reading assessment and instruction.
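
For readers who want to reproduce this kind of analysis with their own item data, the sketch below is a minimal illustration only, not the authors' actual procedure: it assumes a hypothetical data frame `items` containing per-item difficulty estimates and automatically derived feature columns, and runs a simple forward stepwise selection on adjusted R-squared using statsmodels.

```python
# Minimal sketch of a forward stepwise regression of reading item difficulty
# on automatically derived text complexity features. The data frame `items`
# and its column names are hypothetical placeholders, not the study's data.
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(df: pd.DataFrame, target: str, candidates: list[str]) -> list[str]:
    """Greedily add the predictor that most improves adjusted R-squared."""
    selected: list[str] = []
    best_adj_r2 = -float("inf")
    remaining = list(candidates)
    while remaining:
        scores = []
        for feat in remaining:
            X = sm.add_constant(df[selected + [feat]])
            model = sm.OLS(df[target], X).fit()
            scores.append((model.rsquared_adj, feat))
        adj_r2, best_feat = max(scores)
        if adj_r2 <= best_adj_r2:  # stop when no candidate improves the fit
            break
        best_adj_r2 = adj_r2
        selected.append(best_feat)
        remaining.remove(best_feat)
    return selected

# Hypothetical usage with four lexical/semantic features:
# items = pd.read_csv("iep_reading_items.csv")
# features = ["word_length", "total_word_count", "lsa_overlap", "connectives"]
# chosen = forward_stepwise(items, "item_difficulty", features)
# final = sm.OLS(items["item_difficulty"],
#                sm.add_constant(items[chosen])).fit()
# print(final.summary())  # R-squared gives the variance explained by the selected set
```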