With the advent of AI engines such as Grok and ChatGPT, large language models (LLMs) are now trained on vast swaths of human knowledge, including knowledge about audio. I've been using Grok lately to help with software development and content marketing. When I query Grok about audio, the answers are, for the most part, technically sound. However, while AI understands the mechanisms involved in the analog domain, in the digital domain it still does not grasp what is going on and mainly offers general suggestions.
For example, in the analog domain (speakers, headphones, amplifiers, etc.), there are well-understood mechanisms of distortion, frequency response, and linearity, plus the various interactions between magnetic fields and the signal. With speakers, there is also the importance of room acoustics, speaker positioning, and coherence of sound pressure. Grok gets all of this right.
However, ask Grok about the subtleties of digital audio and it offers no good explanation for why changes to the digital signal chain can affect sound the way audiophiles know they can. Grok knows that pops and snaps are a matter of digital signal integrity. It knows that digital transmission is engineered to tolerate errors, so any flaw in transmission is the result of a gross defect in the cable, connection, or bitstream. Issues of transparency, though, lie in the audible spectrum, while digital signals operate at frequencies well above it. Grok insists that any perturbation of the digital signal should have no effect on what we hear.
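To make Grok's "bit-perfect" argument concrete, here is a minimal Python sketch. The idea: if you hash the raw PCM sample data delivered over two different transports and the digests match, the receiving DAC saw identical bits. The two capture buffers below are stand-ins I invented for illustration, not real captures.

```python
import hashlib

def pcm_fingerprint(pcm_bytes: bytes) -> str:
    """Return a SHA-256 digest of raw PCM sample data."""
    return hashlib.sha256(pcm_bytes).hexdigest()

# Stand-ins for the same 16-bit PCM payload captured via two different cables.
capture_a = bytes(range(256)) * 4  # hypothetical capture over cable A
capture_b = bytes(range(256)) * 4  # hypothetical capture over cable B

# If the transport is bit-perfect, the fingerprints are identical,
# no matter which cable carried the stream.
print(pcm_fingerprint(capture_a) == pcm_fingerprint(capture_b))
```

This is exactly the kind of check Grok's reasoning rests on: identical bits in, identical bits out, so (on its view) nothing upstream of the DAC should matter audibly.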
Grok can explain why a bad USB cable causes dropouts, but it can offer no good reason why, for example, a silver USB cable sounds different from a copper one, or why a DAC sounds better on isolation footers. It lists the mechanisms that are within the realm of possibility but strongly suggests that listener bias should be considered. Notably, it does suggest that RF noise (electromagnetic interference, or EMI) might be involved, but offers no discussion of the exact mechanism.
So, it's 2025 and we still have no industry-wide consensus on the 'why' and the 'how'. Science can't accept that our ear/brain system is implausibly sensitive. Science can't explain how we discern an audible benefit where there should be none. Yet here we all are. My hope is that, soon, a next generation of AI will help unlock this mystery.