The world of artificial intelligence (AI) has entered a new era, with everyone searching for ways to turn AI agents to their advantage. Companies want AI agents to boost sales, while governments aim to leverage them to do what they've always done. Some consumer advocates believe that individuals need AI agents of their own to defend themselves against this constant barrage of business and government activity.
At the recent Imagination in Action event, Alex "Sandy" Pentland took the stage to discuss this concept. "They're going to try and hack me, do bad things to me," he said, referring to the omnipresent AI agents controlled by businesses, governments, and big interest groups. "My answer is I need an AI agent to defend me," he continued. "I need something who's on my side who can help me navigate returning things or avoiding scams, or all that whole sort of thing."
The idea behind Pentland's concept is that your personal AI agent detects the malicious activity directed at you and intervenes on your behalf. It's akin to having a public defender in court - when a legal action is brought against you, you need someone advocating for you.
Although some deride these attorneys as "public pretenders" because of underpayment and short staffing, the hope is that AI agents will prove more effective across the board. The idea also mirrors consumer protection efforts, such as the polling and tools Consumer Reports has produced over the past 80 years. As Pentland put it, "This is why we have seat belts in cars." He wants a personal AI agent that can do the same: find good products and services while protecting him from scams.
A similar concept already exists in the cybersecurity agents built by Twine to protect individuals from cyberattacks, but Pentland's broader idea remains in its infancy. Even so, his presentation showed how quickly major companies have come to the table to discuss personal AI defense agents. "We had C-level representation, the head of AI products for every single major AI producer, show up on one week's notice," he explained.
Liability, Pentland suggested, is largely what brought them to the table. If these AI agents are going to interact with users at scale, they must not cheat or show bias. "They have a lot of liability, legal liability, as well as reputational liability. They have to be fair in helping you do things, otherwise they're going to end up in class action courts," he said.
Pentland also discussed the concept of digital populism, where a large number of people with similar needs can come together to create competitive AI agents. "You're just you," he stated. "But if there were a million yous or 10 million yous all trying to get a good deal, avoid scams, fill out that legal form, you could actually have AIs that are pretty well-suited for your needs."
In response to audience questions, Pentland offered advice for those just starting their careers. He raised the fundamental questions these defense agents will have to answer, such as "How do I know what's good for me, and what I want?" He also discussed the importance of building a network effect by linking agents together.
Pentland also pointed to a game-theoretic concern: small changes can easily destabilize a system, much as a single braking car can set off a traffic jam. That dynamic, he argued, has to be factored into the design of digital defense agent networks.
These agents may not be perfect right away, but they could provide a necessary defense against an emerging army of hackers wielding potent technologies. The concept also feeds into the ongoing debate over open source versus closed source models, and over when tools should be published for all the world to use - it's imperative to keep a lid on bad actors who could jeopardize these systems.