When a coupon suddenly appears on your phone as you approach a store, you might find it convenient and even helpful. But the same AI systems that know where you are and try to influence your purchases can be used to infer what you fear, what you trust and which stories you are likely to believe. AI-fueled marketing algorithms are becoming increasingly good at influencing human behavior.
That raises concern about what various governments might do with these tools to influence citizens’ views about warfare. A clear-eyed look at how administrations are exploiting these systems may help people and their nations navigate an uncertain future.
I am a security researcher who studies ways to explore and characterize the risk technology poses to individuals and society. The rise of AI-mediated influence has raised questions about the erosion of people’s capacity to exercise free will and, by extension, society’s ability to distinguish a just war from an unjust war.
The integration of AI with location-based services is pushing the marketing frontier. Location-based services use geographic data from indoor sensors, cellphone towers and satellites to promote goods and services that are tailored to your location, a capability called geofencing.
When marketing firms combine massive amounts of data about individuals’ behaviors – including information that people voluntarily or unknowingly share through mobile device applications – the firms can group, or segment, potential customers based on what they like, what they do and what they say.
Once an AI-powered marketing system knows where a user is and can make an informed guess about that person’s likes and dislikes, it can design targeted coupons and advertisements to influence the behavior of each person in a group, and possibly the group as a whole. This combination of AI with geofencing and segmentation makes hyperpersonalized marketing content possible at an unprecedented scale.
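The mechanics behind this are simpler than they might sound. The following is a minimal sketch, not any real advertising platform's code: it checks whether a user's coordinates fall inside a store's geofence and, if so, serves the offer matched to that user's behavioral segment. The store location, segment names and offers are all hypothetical.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # Earth's mean radius ~6371 km

def pick_offer(user_location, store_location, user_segment,
               offers_by_segment, radius_km=0.5):
    """Geofencing + segmentation in one step: if the user is inside the
    store's geofence, return the offer targeted at their segment."""
    if haversine_km(*user_location, *store_location) <= radius_km:
        return offers_by_segment.get(user_segment)
    return None  # outside the fence: no ad is triggered

# Hypothetical example: a store in midtown Manhattan and two segments
# inferred from past behavior.
store = (40.7549, -73.9840)
offers = {
    "bargain_hunter": "20% off today only",
    "brand_loyal": "Members earn double points",
}

print(pick_offer((40.7551, -73.9842), store, "bargain_hunter", offers))
# a few dozen meters away -> "20% off today only"
print(pick_offer((40.7128, -74.0060), store, "bargain_hunter", offers))
# several kilometers away -> None
```

Real systems layer machine-learned segment predictions and bid auctions on top of this, but the core loop is the same: location trigger, segment lookup, tailored message.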
What might this advance have to do with warfare? The use of psychology to win battles or obviate the need for war is as old as armed conflict itself. Sun Tzu, the Chinese military general and philosopher who died in 496 B.C., wrote: “Therefore the skillful leader subdues the enemy’s troops without any fighting; he captures their cities without laying siege to them; he overthrows their kingdom without lengthy operations in the field.”
From Sun Tzu’s era until today, skilled practitioners of military strategy have sought to reduce the risk in fighting through reflexive control: getting opponents to willingly perform actions that are best for the strategist’s empire or nation.
Today’s strategists increasingly rely on paid social media advertisements, influencers, AI-generated content and even fake social media accounts to sway popular opinion toward their goals. This power, and the controversy surrounding it, has been implicated in recent national elections, domestic unrest and negotiations to end the conflict in Ukraine.
Unlike propaganda during the Cold War between the U.S. and the Soviet Union, modern influencers don’t rely on a single message broadcast to the masses. Strategists test and deploy thousands of narrative variations simultaneously, monitor how different groups respond and refine their approach in near-real time. The purveyors don’t need to convince everyone. They just need to nudge enough people at the right moment to change election outcomes, pressure domestic policies or even trigger ethnic violence.
As online influence becomes more automated and personalized, it is harder to determine where persuasion ends and coercion begins. If groups of people, or even a nation’s citizenry, can be guided toward certain beliefs or behaviors without overt force, democratic societies face a new problem: how to distinguish traditional attempts at influence from manipulation – especially during conflict.
Recent studies show that Americans trust local news sources more than national ones, although trust in both local and national news media has declined across all age groups in the U.S. Ironically, this trust deficit is being exploited by unscrupulous media in various ways, such as AI-generated “pink slime” news – online news stories that only appear to be from authentic local news outlets. The stories are often technically accurate but presented with veiled political bias.
AI-driven propaganda directly challenges how people typically evaluate claims that their nation has been wronged – that it is the “good guy” standing up for what is right. Just war theory assumes that citizens can reasonably consent to war. Legitimate political authority requires an informed public that can decide violence is both necessary and proportional to the offense. However, when influence operations sway people’s views without them being aware of it, these systems threaten to undermine the moral preconditions that make war just.
The question citizens have to answer is how they will allow their information environments to evolve. Do they assume that deception is ubiquitous and therefore governments must control information and even preempt the truth by weaponizing AI-driven narratives? Or should the public accept the risk of AI-generated influence as a regrettable but necessary part of openness, pluralism and the belief that truth emerges through transparent debate and not under tight controls?
The same systems that decide which coupon reaches your phone are starting to shape which narratives reach you, your community and a nation’s entire population during a crisis. Recognizing this connection is the first step toward deciding how much influence people are willing to accept from such algorithms and the propagandists who control them.