High-performance audio and video codecs and the ever-increasing bandwidth of data communication networks have enabled multi-platform delivery of high-definition media content over transmission channels such as terrestrial broadcast, broadband IP networks, and mobile telecommunications systems. Media consumption has become ubiquitous and more personalized than ever before, and audio is often rendered through headphones. Attention must therefore be paid to sound quality, and a high-definition virtual auditory space (VAS) is sought as the ideal solution. Stereophonic and surround sound remain commonplace, while spatial audio is still in its infancy despite the demands of 2.5D/3D high-definition video.
Binaural headphone rendering offers the ultimate in spatial audio and source localization; however, individualized head-related transfer functions (HRTFs), head-movement tracking, and suitably synthesized signals are needed to realize its full potential. Most audio recordings to date are intended for, and hence best suited to, loudspeaker rendering. Well-designed cross-feed filters can serve as an interim solution to improve the headphone listening experience. This paper proposes a universal cross-feed filter. HRTFs at specific source locations were first considered to derive candidate cross-feed filters. User-preferred HRTF parameters were then identified using a variety of representative soundtracks. The measured transfer functions were simplified to derive a user-preferred universal cross-feed filter for all listeners. Validation tests confirmed an improved listening experience.
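As background on the general technique, a basic cross-feed filter mixes each output channel with an attenuated, delayed, low-pass-filtered copy of the opposite channel, loosely mimicking the head shadowing and interaural time delay that a listener experiences with loudspeakers. The sketch below is a generic textbook formulation, not the paper's measured or user-preferred filter; all parameter values (gain, delay, cutoff) are illustrative assumptions.

```python
import numpy as np

def crossfeed(left, right, fs=44100, gain_db=-6.0, delay_ms=0.3, cutoff_hz=700.0):
    """Generic cross-feed sketch (not the paper's filter): each output channel
    is the direct signal plus an attenuated, delayed, low-pass-filtered copy
    of the opposite channel. Parameter defaults are illustrative only."""
    g = 10.0 ** (gain_db / 20.0)           # linear cross-feed gain
    d = int(round(delay_ms * 1e-3 * fs))   # interaural delay in samples

    # One-pole low-pass approximating head-shadow high-frequency roll-off.
    alpha = np.exp(-2.0 * np.pi * cutoff_hz / fs)

    def lowpass(x):
        y = np.empty_like(x)
        acc = 0.0
        for i, s in enumerate(x):
            acc = (1.0 - alpha) * s + alpha * acc
            y[i] = acc
        return y

    def delayed(x):
        # Shift the signal by d samples, zero-padding the start.
        return np.concatenate([np.zeros(d), x[:len(x) - d]])

    out_l = left + g * delayed(lowpass(right))
    out_r = right + g * delayed(lowpass(left))
    return out_l, out_r
```

The paper's contribution can be read as replacing the hand-tuned gain, delay, and filter shape above with simplified transfer functions derived from measured HRTFs and listener preference tests.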