**Secure Messaging and AI Don’t Mix**
The widespread adoption of end-to-end encrypted messaging apps has revolutionized the way we communicate, providing a critical layer of security and confidentiality for our personal conversations. However, the growing trend of integrating artificial intelligence (AI) into these services poses a significant risk to this confidentiality.
Secure messengers like WhatsApp, Signal, and DeltaChat, as well as built-in apps such as Apple's iMessage and Google's Messages, are designed to keep messages confidential, so that only the sender and recipient can read them. This is essential for private communication and a fundamental aspect of a society that values civil liberties and freedom.
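To make that guarantee concrete, here is a minimal sketch of the end-to-end idea using the PyNaCl library (the keys and message are illustrative, and real messengers layer full protocols such as the Signal protocol on top of this primitive): messages are encrypted on the sender's device with keys that only the two parties hold, so any server in between only ever relays ciphertext.

```python
# Minimal sketch of end-to-end encryption with PyNaCl (illustrative only).
# Only the two devices hold private keys; a relaying server sees ciphertext.
from nacl.public import PrivateKey, Box

# Each party generates a key pair on their own device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
sender_box = Box(alice_private, bob_private.public_key)
ciphertext = sender_box.encrypt(b"meet at 8?")

# The messaging server can store and forward `ciphertext`,
# but cannot read it: it never holds either private key.

# Bob decrypts with his private key and Alice's public key.
receiver_box = Box(bob_private, alice_private.public_key)
assert receiver_box.decrypt(ciphertext) == b"meet at 8?"
```

Any AI feature that needs the plaintext has to sit on one of the two endpoints, or break this model.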
However, Meta has announced plans to introduce AI processing for WhatsApp messages, which raises serious security and privacy concerns. To integrate AI, WhatsApp will need to send secure messages off the device to Meta's servers so that large language models (LLMs) can process them. This means that Meta would have access to the contents of our supposedly secure messages, potentially compromising their confidentiality.
Even if running a local AI model on the device were an acceptable solution, it comes with its own costs and risks. Local models are becoming smaller and more capable, but they would still require higher-end hardware or make the app considerably bulkier. More importantly, if the operating system itself embeds a network-connected AI service at a low level, no messenger running on top of it can remain confidential.
Meta's proposed "Private Processing" solution is intended to mitigate these risks by running AI workloads inside a Trusted Execution Environment (TEE) that is supposed to guarantee the confidentiality and integrity of user data. However, several concerns arise:
- Data Confidentiality: Confidentiality guarantees for data processed on AI servers are unreliable. Such hardware-isolation promises rarely hold up in the real world against well-resourced attackers with physical access to the hardware.
- Attestation and Code Integrity: Remote attestation and code-integrity guarantees, which are essential for trusting a networked AI service with confidential data, are equally hard to rely on.
In particular, Meta's attestation scheme ultimately rests on AMD and NVIDIA hardware signing keys. If either vendor can be coerced or tricked into leaking a signing key, or if there is a bug in their hardware implementation, the attestation promises no longer hold.
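To see why this matters, here is a rough sketch of what a client-side attestation check ultimately boils down to (the report layout, measurement size, and key handling are hypothetical, not Meta's, AMD's, or NVIDIA's actual formats): the client trusts the server's code only because a report about it is signed by a key believed to belong to the hardware vendor. A leaked vendor key, or a hardware flaw that yields valid signatures for attacker-controlled reports, makes the check pass for malicious code just as readily.

```python
# Hypothetical sketch of a remote-attestation check (not the real protocol).
# The entire chain of trust reduces to one question: was this report signed
# by a key that really belongs to the CPU/GPU vendor, and to nobody else?
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_attestation(report: bytes, signature: bytes,
                       vendor_public_key: Ed25519PublicKey,
                       expected_measurement: bytes) -> bool:
    """Accept the server only if the signed report carries the code
    measurement we expect. Assumes the last 32 bytes of `report` are the
    measurement field (a made-up layout, for illustration)."""
    try:
        # Step 1: verify the vendor's signature over the report.
        # If the vendor's signing key leaks, an attacker can forge this step.
        vendor_public_key.verify(signature, report)
    except InvalidSignature:
        return False
    # Step 2: check that the attested measurement matches code we audited.
    # Without published source, "expected" is whatever binary Meta ships.
    return report[-32:] == expected_measurement
```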
Evaluating code integrity also requires substantial independent auditing to ensure that the code serves user needs rather than Meta's interests. However, Meta has not committed to publishing the source code for its "Private Processing" machines, offering only binary images and partial source access.
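Independent auditing would, at a minimum, mean a reproducible-build check: rebuild the published source, measure the resulting image, and compare it with what the servers attest to. The sketch below is a simplification under that assumption (real TEE measurements cover memory pages and launch configuration, not just a file hash), and it only works if the complete source code is available.

```python
# Hypothetical sketch of independent verification via reproducible builds.
# Without full source access, there is nothing to rebuild and compare.
import hashlib

def measurement_of(image_path: str) -> bytes:
    """Hash a server image as a stand-in for a TEE measurement
    (simplified: real measurements are taken at launch by the hardware)."""
    digest = hashlib.sha384()
    with open(image_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.digest()

def audit(rebuilt_image_path: str, attested_measurement: bytes) -> bool:
    # Meaningful only if the image was rebuilt from published source code.
    return measurement_of(rebuilt_image_path) == attested_measurement
```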
Given these concerns, it is clear that the integration of AI into secure messengers like WhatsApp poses significant risks to confidentiality and trustworthiness. While Meta may be attempting to address these issues through its "Private Processing" solution, the promises made are not sufficient to ensure the security and integrity of user data.
As users rely on messaging services for some of their most private conversations, it is essential that we prioritize transparency, accountability, and security in the design and implementation of AI-powered messaging apps. The risks associated with integrating AI into secure messengers far outweigh any supposed benefits, and users should be aware of these concerns before relying on such services.