As AI PCs Take Over, Business Leaders Must Bolster Their Cybersecurity Strategies, Experts Say
Worldwide shipments of artificial intelligence personal computers, known as AI PCs, are expected to total 114 million units this year, representing 43% of all PC shipments in 2025. By next year, they're anticipated to be the only type of PC available to large businesses, according to the research firm Gartner. The growing popularity of these devices — which have built-in AI hardware and software that speed up data processing — presents a new challenge for companies and their IT departments: guarding their sensitive data troves against cyber threats.
AI PCs integrate a specialized processor called a neural processing unit, or NPU, which allows laptops and desktop computers to run AI workloads directly on the device. These machines can handle AI tasks more efficiently than traditional computers that send data to a cloud-based or mainframe server, because processing happens physically closer to where the data is generated. Moving workloads from the cloud to a PC can, in many cases, make data processing faster. Businesses can also save money by paying less for cloud storage, and energy costs are lower because AI PCs require less data-center use.
However, this increased efficiency comes at a price: more proprietary data is stored directly on these devices, making it essential for companies to consider and deploy additional layers of security on their AI PCs. The issue of data privacy on AI PCs isn't necessarily a problem, but it is a new question, Vanessa Lyon, the managing director and senior partner at the consultancy BCG, told Business Insider. "Because it's a new capability and set up differently, it's a new kind of vulnerability," Lyon said.
For example, fraudsters could hack AI PCs and perform AI model inversion attacks, in which they use the output of a computer's large language model to infer the original data used to train the LLM. This could be problematic for companies like wealth management firms that may train AI algorithms to help with financial planning. If hackers are able to reconstruct the data that trained the LLM, they could use it to uncover who those clients are and where their money is held.
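This kind of leakage is easiest to see with a model that memorizes its training data. The sketch below is a hypothetical toy, not a real attack: a tiny character-level Markov "model" is trained on a single made-up client record, and an attacker who can only query the model recovers the record verbatim from a short prompt. The record, the prompt, and the model itself are all invented for illustration.

```python
# Toy sketch of training-data leakage through model outputs.
# Real model-inversion attacks against LLMs are far more sophisticated;
# this only shows why memorized training data can be extractable.
from collections import defaultdict

def train(text, order=8):
    """Map every `order`-character window of the text to the character that follows it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, prompt, length=60, order=8):
    """Extend the prompt one character at a time; stop when the context is unknown."""
    out = prompt  # prompt must be at least `order` characters long
    for _ in range(length):
        nexts = model.get(out[-order:])
        if not nexts:
            break
        out += nexts[0]  # deterministic: always take the first seen continuation
    return out

# Hypothetical training record containing sensitive client data.
record = "client: alice smith, account: 1234-5678, bank: acme"
model = train(record)

# An attacker probes the model with a generic prompt and leaks the record.
leaked = generate(model, "client: ")
```

Because the toy model perfectly memorizes its single training record, `leaked` reproduces it in full, account number included. Larger models memorize less perfectly, but the same failure mode underlies real training-data extraction.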
There's also data poisoning, in which cyberattackers insert false data into a model's training set. This can result in what's known as a hallucination: a response from an AI system that contains false or misleading information. Cybercriminals could feed poisoned data into a chatbot or generative AI application in a bid to manipulate the outputs of those tools.
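Data poisoning can be illustrated with a deliberately simple stand-in for a real model. The sketch below uses a toy nearest-neighbor classifier (an assumed example, not how production AI systems are built): injecting a handful of mislabeled points near the benign cluster is enough to flip the classifier's answer for a benign query.

```python
# Toy sketch of a data-poisoning attack on a k-nearest-neighbor classifier.
# Real attacks target large training pipelines; the principle is the same:
# corrupted training labels corrupt the model's outputs.
from collections import Counter

def knn_predict(training, x, k=3):
    """Majority-vote label of the k training points nearest to x."""
    nearest = sorted(training, key=lambda point: abs(point[0] - x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Clean training data: two well-separated clusters.
clean = [(0.0, "benign"), (0.5, "benign"), (1.0, "benign"),
         (9.0, "malicious"), (9.5, "malicious"), (10.0, "malicious")]

# Attacker injects mislabeled points inside the benign cluster.
poison = [(0.2, "malicious"), (0.4, "malicious"), (0.6, "malicious")]

before = knn_predict(clean, 0.3)           # "benign"
after = knn_predict(clean + poison, 0.3)   # flips to "malicious"
```

A query at 0.3 is classified correctly on the clean data, but after only three poisoned points its nearest neighbors are majority-mislabeled, so the prediction flips.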
"I'm finding myself in more conversations about security because of AI," said Kevin Terwilliger, vice president of Dell's client solutions group. AI PC security starts with the device-buying process. Company leaders should consider various layers of security when purchasing this powerful piece of technology, experts say. For instance, enterprises need to be certain they trust the vendors they rely on for data storage, Kris Lovejoy, the global practice leader of security and resiliency at IT services provider Kyndryl, told Business Insider.
Lovejoy recommends always buying devices directly from PC manufacturers, wholesale distributors, and other reputable vendors to avoid malware or other illicit capabilities being built into the machine. PC maker Dell, for example, uses a secured component verification process to ensure that AI PC components are tamper-free when they're made and then shipped to customers. To do this, Dell issues its customers certificates, which allow them to verify that the computer hardware in each AI PC is exactly what Dell sold them.
Employee training and safeguards can help

Mark Lee, the CEO and founder of the remote access and support software provider Splashtop, said enterprises need to balance the efficiency gains of giving employees greater access to the data stored on AI PCs with the risk of exposing troves of sensitive company data running on these devices. "It's finding that right balance to protect yourself and without impacting user productivity," Lee said.
Employee training can help safeguard companies against these kinds of attacks, Lovejoy added. For personal devices that are also used for work, IT departments can create "virtual environments." These setups can prevent malware, which can arrive in untrusted apps that workers download, from interacting with company-endorsed software that's been vetted and built to keep data secure.
"A lot of the fundamental security concerns that you have, we've seen before," said Lovejoy. "It's just we're taking those principles and applying them to a different variation of technology."