
Intra- and Inter-Rater Agreement and Reliability of Forward Head Posture Assessment: A Comparison of Visual and Photographic Methods

  • Journal of the Korean Society of Physical Medicine
  • Abbr : J Korean Soc Phys Med
  • 2025, 20(4), pp.49~57
  • Publisher : The Korean Society of Physical Medicine
  • Research Area : Medicine and Pharmacy > Physical Therapy > Other physical therapy
  • Received : July 31, 2025
  • Accepted : September 13, 2025
  • Published : November 30, 2025

Il-young Moon 1, Jongseok Hwang 2

1 Wonju Severance Christian Hospital
2 Nambu University

Accredited

ABSTRACT

PURPOSE: This study aimed to evaluate the intra- and inter-rater reliability of visual and photographic assessments of forward head posture (FHP) and to estimate measurement error from biological and technical sources.

METHODS: Ten experienced physical therapists independently rated the FHP severity of 100 asymptomatic adults using both real-time visual observation and standardized photographic images. Reliability was analyzed using Cohen’s kappa coefficients and frequency distributions of absolute point differences. Biological and technical error components were estimated under the assumption of independent variance sources.

RESULTS: Both methods demonstrated substantial intra-rater reliability (κ = .68 for visual, κ = .71 for photographic), with exact agreement exceeding 70%. Inter-rater reliability was lower, particularly for visual observation (κ = .39), whereas photographic assessments showed moderate agreement (κ = .51). Technical factors accounted for approximately 90% of intra-rater variability, whereas biological error contributed only 3%, indicating minimal short-term postural fluctuation in asymptomatic adults.

CONCLUSION: Indirect assessments of FHP were consistent within raters but more variable across raters, especially in live visual settings. Variability was largely attributable to technical factors. These findings underscore the need for standardized scoring rubrics, rater calibration, and complementary objective tools (e.g., CVA measurement) to enhance reliability, particularly in multi-rater clinical contexts.
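
The abstract does not include the analysis code; as a minimal sketch of the agreement statistics described above, the Python snippet below computes an unweighted Cohen's kappa and the exact-agreement percentage for two raters' ordinal FHP scores. The 0-3 severity scale and the example ratings are assumptions made for illustration and are not taken from the study data.

```python
# Minimal sketch (not the authors' code): unweighted Cohen's kappa and
# exact agreement for two raters scoring FHP severity on an assumed
# 0-3 ordinal scale.
import numpy as np

def cohens_kappa(r1, r2):
    """Unweighted Cohen's kappa for two equal-length rating vectors."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    categories = np.union1d(r1, r2)
    # Observed agreement: proportion of identical ratings.
    p_o = np.mean(r1 == r2)
    # Chance agreement from each rater's marginal category frequencies.
    p_e = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings for 10 subjects (illustrative only).
rater_a = [0, 1, 2, 1, 3, 2, 1, 0, 2, 1]
rater_b = [0, 1, 1, 1, 3, 2, 2, 0, 2, 1]

kappa = cohens_kappa(rater_a, rater_b)
exact = np.mean(np.array(rater_a) == np.array(rater_b)) * 100
print(f"kappa = {kappa:.2f}, exact agreement = {exact:.0f}%")
```

The error partition reported in the RESULTS would, under the stated independence assumption, treat total rating variance as the sum of a biological component (true short-term postural change) and a technical component (rater and measurement error); the specific estimation procedure is described in the full paper, not reproduced here.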
