This study proposed a method for automatically extracting key frames for video storyboard surrogates based on users' cognitive responses, EEG signals, and discriminant analysis. With twenty participants, we examined which ERP component corresponds to each of five assumed image recognition and processing steps (stimulus attention, stimulus perception, memory retrieval, stimulus/memory comparison, and relevance judgement). We found that each step is associated with a characteristic ERP component: N100, P200, N400, P3b, and P600, respectively. We also found that the peak amplitude at the left parietal electrode (P7) and the latency at the right prefrontal electrode (FP2) are important variables for distinguishing among relevant, partially relevant, and non-relevant frames. Using these variables, we conducted a discriminant analysis to classify frames as relevant or non-relevant.
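To make the final classification step concrete, the following is a minimal sketch of a two-feature linear discriminant analysis of the kind described above. The feature names, value ranges, and labels are synthetic, illustrative assumptions standing in for the study's actual EEG measurements, and scikit-learn's LinearDiscriminantAnalysis is used as a generic stand-in for the paper's discriminant procedure.

```python
# Sketch only: synthetic stand-in for the study's (P7 amplitude, FP2 latency) data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 60  # hypothetical number of labelled frames per class

# Hypothetical features per frame: peak amplitude at P7 (microvolts)
# and latency at FP2 (milliseconds). Means/spreads are assumptions.
relevant = np.column_stack([
    rng.normal(6.0, 1.5, n),     # assumed larger P7 peak for relevant frames
    rng.normal(420.0, 40.0, n),  # assumed shorter FP2 latency
])
non_relevant = np.column_stack([
    rng.normal(3.0, 1.5, n),
    rng.normal(520.0, 40.0, n),
])

X = np.vstack([relevant, non_relevant])
y = np.array([1] * n + [0] * n)  # 1 = relevant, 0 = non-relevant

# Cross-validated accuracy of the linear discriminant on the synthetic data.
lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)
print(f"5-fold CV accuracy: {scores.mean():.2f}")

# Fit on all frames, then classify a new frame from its two ERP features.
lda.fit(X, y)
print(lda.predict([[5.5, 430.0]]))  # -> [1] (relevant) under this synthetic model
```

Because only two variables enter the model, the fitted discriminant reduces to a single linear boundary in the (P7 amplitude, FP2 latency) plane, which is why these two features sufficing for classification is a substantive finding rather than a modelling convenience.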