In the rapidly evolving landscape of cryptocurrency and artificial intelligence, a new breed of scams is emerging that leverages AI technology to deceive and defraud. The Elliptic Report 2024, titled “AI-enabled Crime in the Cryptoasset Ecosystem,” sheds light on these threats, documenting how AI-powered scams are becoming more sophisticated and harder to detect.
The rise of AI as a buzzword has led to a proliferation of fraudulent investment platforms, often promising AI-driven trading or arbitrage capabilities. The U.S. Commodity Futures Trading Commission (CFTC) has issued warnings about these so-called “AI trading bot” scams, which frequently use terms like “quantum,” “Web3,” and “DeFi” to attract unsuspecting victims.
Identifying scams in this landscape can be challenging. The traditional advice, “if it seems too good to be true, it probably is,” has become less reliable given the extraordinary returns posted by some legitimate meme coins. Excessive technical jargon without clear explanations remains a common red flag, but its absence proves little: successful projects like the popular meme coin dogwifhat avoid technical jargon altogether.
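To see why jargon-counting alone is a weak signal, consider a minimal sketch of a naive buzzword screen. The keyword list (drawn from the terms the CFTC flags, such as “quantum,” “Web3,” and “DeFi”) and the threshold are illustrative assumptions, not a vetted detection method, and as the dogwifhat example shows, a low score says nothing about legitimacy.

```python
import re

# Terms the CFTC warns are common bait in fraudulent "AI trading bot"
# pitches, plus generic hype words. The list is a hypothetical example
# for illustration, not an exhaustive or vetted one.
BUZZWORDS = {
    "ai", "quantum", "web3", "defi", "arbitrage",
    "guaranteed", "risk-free",
}

def buzzword_density(pitch: str) -> float:
    """Fraction of words in the pitch that are hype buzzwords."""
    words = re.findall(r"[a-z0-9-]+", pitch.lower())
    if not words:
        return 0.0
    return sum(w in BUZZWORDS for w in words) / len(words)

pitch = ("Our quantum AI arbitrage bot delivers guaranteed, "
         "risk-free Web3 DeFi returns every single day")

density = buzzword_density(pitch)
print(f"Buzzword density: {density:.0%}")
if density > 0.20:  # arbitrary threshold, assumed for this sketch
    print("Red flag: jargon-heavy pitch with no clear explanation")
```

A screen like this might surface the most brazen pitches, but scammers adapt their vocabulary quickly, so it can never substitute for genuine due diligence.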
Deepfake technology has further complicated the situation, allowing scammers to create realistic videos of celebrities and political leaders endorsing fraudulent crypto projects. This tactic was recently used in Singapore, where a deepfake video of former Prime Minister Lee Hsien Loong promoted a bogus investment scheme.
AI’s role in facilitating scams doesn’t stop at creating fake endorsements. Large-scale scams, such as “pig butchering,” use AI-generated communication scripts and fake profile images to streamline operations and evade detection. Even high-level executives at major companies have fallen victim to deepfake scams during online meetings, resulting in significant financial losses.
The blurred line between legitimate AI use and deceptive practices raises critical ethical questions. While AI can accelerate the development of legitimate projects, it also opens doors for deception. Using AI to generate a website or content is acceptable, but fabricating team members with AI-generated avatars crosses an ethical boundary, misleading potential investors and casting doubt on the project’s authenticity.
The crypto community needs to establish clear guidelines to navigate this evolving landscape; transparency and ethical practices in AI use are paramount to maintaining trust. Consider the example of Sophia, the humanoid AI robot. Although everyone knows Sophia is not human, she interacts with people as if she were, highlighting our willingness to engage with AI systems as social beings.
Sophia’s advanced capabilities have earned her recognition traditionally reserved for humans, including Saudi Arabian citizenship and the title of the United Nations Development Programme’s first Innovation Champion. In the future, project founders might use AI for coding or content creation and give that AI a face and a name on their website, listing it as a team member. Even when transparent about the AI’s role, this practice raises ethical concerns about presenting an AI system as if it were a human employee.
The introduction of AI into professional settings necessitates clear ethical standards and guidelines. Since ChatGPT’s release in November 2022, AI’s potential for both innovation and deception has been evident. While AI offers remarkable capabilities, it also poses significant risks, including job displacement and economic disruption. The primary concern, however, is AI’s potential to deceive even the most discerning individuals, luring them into detrimental financial and personal decisions.
As AI becomes more integrated into our lives, it is crucial for the crypto community and society to establish ethical standards for AI representation in professional settings. The awe-inspiring potential of AI must be balanced with measures to protect against its misuse, ensuring transparency and authenticity in all AI applications.
