deepfake
Short and Synthetically Distort: Investor Reactions to Deepfake Financial News
Recent advances in artificial intelligence have led to new forms of misinformation, including highly realistic “deepfake” synthetic media. We conduct three experiments to investigate how and why retail investors react to deepfake financial news. Results from the first two experiments provide evidence that investors use a “realism heuristic,” responding more intensely to audio and video deepfakes as their perceptual realism increases. In the third experiment, we introduce an intervention to prompt analytical thinking, varying whether participants make analytical judgments about credibility or intuitive investment judgments. When making intuitive investment judgments, investors are strongly influenced by both more and less realistic deepfakes. When making analytical credibility judgments, investors are able to discern the non-credibility of less realistic deepfakes but struggle with more realistic deepfakes. Thus, while analytical thinking can reduce the impact of less realistic deepfakes, highly realistic deepfakes are able to overcome this analytical scrutiny. Our results suggest that deepfake financial news poses novel threats to investors.
Deepfake emotional expressions trigger the uncanny valley brain response, even when they are not recognised as fake
Facial expressions are inherently dynamic, and our visual system is sensitive to subtle changes in their temporal sequence. However, researchers often use dynamic morphs of photographs—simplified, linear representations of motion—to study the neural correlates of dynamic face perception. To explore the brain's sensitivity to natural facial motion, we constructed a novel dynamic face database using generative neural networks, trained on a verified set of video-recorded emotional expressions. The resulting deepfakes, consciously indistinguishable from videos, enabled us to separate biological motion from photorealistic form. Results showed that conventional dynamic morphs elicit distinct responses in the brain compared to videos and photos, suggesting they violate expectations (N400) and have reduced social salience (late positive potential). This suggests that dynamic morphs misrepresent facial dynamism, resulting in misleading insights about the neural and behavioural correlates of face perception. Deepfakes and videos elicited largely similar neural responses, suggesting they could be used as a proxy for real faces in vision research, where video recordings cannot be experimentally manipulated. Yet despite being consciously undetectable as fake, deepfakes elicited an expectation violation response in the brain. This points to a neural sensitivity to naturalistic facial motion that operates beyond conscious awareness. Despite some differences in neural responses, the realism and manipulability of deepfakes make them a valuable asset for research where videos are unfeasible. Using these stimuli, we propose a novel marker for the conscious perception of naturalistic facial motion, frontal delta activity, which was elevated for videos and deepfakes, but not for photos or dynamic morphs.