Executive Summary
The OPCOPRO “Truman Show” operation is a fully synthetic, AI-powered investment scam that combines legitimate-looking Android and iOS apps, distributed through the official mobile app stores, with AI-generated communities to steal money and identity data from victims.
Instead of relying on malicious code, the attackers use social engineering: phishing SMS messages, ads, and Telegram lures pull victims into tightly controlled WhatsApp and Telegram groups, where AI-generated “experts” and synthetic peers simulate an institutional-grade trading community for weeks before any money or personal details are requested.
The mobile apps themselves contain no trading logic and act only as WebView shells connected to attacker‑controlled infrastructure, allowing all balances, trades and agreements to be fabricated server‑side while appearing fully compliant and regulated.
This campaign demonstrates how modern fraudsters can industrialize social engineering with large language models, turning what used to be manual “pig‑butchering” scams into scalable systems that cross languages, regions and platforms.
For defenders, this case shows that mobile risk now extends beyond malware.
- Malicious by design, not by code: The apps can look “clean” but the fraud happens in the backend + social layer using LLMs.
- Trust manufacturing at scale using AI: Coordinated groups, staged conversations, and synthetic AI-powered “peer” validation create a controlled “show” for the victim.
- App Store/Google Play – legitimacy abuse: Official distribution is used as a trust signal to lower suspicion.
- Identity + money theft: KYC-style document collection enables identity theft alongside direct deposits.
- Rapid redeployment: Templates and LLM-based modular infrastructure enable fast rebranding and regional reuse.
Background
In October 2025, a coordinated and sophisticated investment scam began targeting users through SMS and messaging platforms. Victims were invited into WhatsApp groups run by AI-assisted automation that appeared to host active trading communities led by knowledgeable “experts”. In reality, the groups exhibited strong indicators of coordinated inauthentic behavior: the admins used AI-generated identities, and many “participants” appeared centrally controlled or automated to simulate engagement, ask staged questions, and report fabricated profits — elements used to craft a controlled “show” for the victim.
In this campaign, Check Point’s mobile researchers identified a large, reusable fraud operation that combined mobile applications (both iOS and Android, in the App Store and Google Play), attacker-controlled backend infrastructure, and AI-assisted social engineering. In most other cases, our detection work can determine risk by analyzing the app itself — its code, permissions, certificate, behavior, and metadata. In this case, the mobile apps were technically lightweight and did not exhibit traditional “malware” behaviors during analysis – everything happens on the web server. The risk became clear only after correlating the broader operation: the messaging groups that drove victims, the external infrastructure that rendered the “trading” experience, and the legitimacy narrative built around the brand.
We assess that attackers used automation — including large language models (LLMs) and generative tools — to produce convincing personas, multilingual conversations, and continuous group activity at scale. This does not mean the app itself was “LLM-powered”. Instead, the LLM-assisted layer amplified the trust-building phase: scripted expert commentary, synthetic peer reinforcement, and responsive one-to-one persuasion. Once a victim was conditioned to trust the community, the app acted as a simple gateway to attacker-controlled content and workflows, enabling identity data collection (KYC-style documents) and deposit theft without requiring malicious code on the device.
This operation blends together several modern fraud patterns: AI-assisted social engineering, synthetic communities, abuse of official app stores to reduce suspicion, and patient psychological manipulation. What makes this case especially dangerous is how repeatable it is — the social layer, content, and infrastructure can be rapidly redeployed and scaled across languages, regions, and platforms.
The following sections provide a step-by-step analysis of the operation and the Android/iOS apps associated with it, presenting verified artifacts and supporting evidence that strongly indicate coordinated investment fraud.
Indicators and supporting artifacts (domains, backend infrastructure, press-release amplification links, app hashes, and wallets) are provided in the Appendix for verification and enforcement.

Entry: Impersonation as the Entry Vector
The campaign begins with unsolicited outreach designed to impersonate legitimate financial institutions. In many cases, victims received an SMS message claiming to be from a prominent financial firm, promoting a “skyrocketing stock” opportunity with promised returns exceeding 70%. The message typically included a link inviting the recipient to join a WhatsApp group for additional details.
We contacted the firm to verify whether this outreach was affiliated with the company and confirmed it was not a legitimate message from them. Based on their response, the message and the associated “investment group” were not authorized by the institution and represented an impersonation attempt.
- Flagged as spam by the device – The native messaging app already classifies the SMS as potential spam, based on known malicious patterns and user reports.
- Brand and identity mismatch – The sender is labeled irregularly, and there is no confirmation of an official business sender ID.
- Out-of-the-blue contact – The user did not initiate any recent process that would justify such a message (e.g., no new account, product signup, or pending action).
- Unrealistic investment-style offer – The SMS suggests unusually high returns with minimal effort and limited context, which is a common pattern in financial scam campaigns.
- Lack of personalization and account context – The text is generic, does not mention the recipient’s name, and contains no specific account details or references.
- No trusted domain or official contact path – The SMS does not direct the user to a clearly recognizable official domain or to an authenticated app.
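The red flags above lend themselves to simple heuristic scoring. A minimal sketch of such a scorer follows; the keyword patterns, signals, and scoring are hypothetical illustrations, not a production detector:

```python
import re

# Hypothetical lure patterns; a real detector would combine trained models
# with threat-intel feeds rather than static regular expressions.
LURE_PATTERNS = [
    r"\b\d{2,3}\s*%\s*(return|profit|gain)",   # unrealistically high returns
    r"(whatsapp|t\.me|wa\.me)",                # push into private messaging
    r"(exclusive|limited|skyrocket)",          # urgency / FOMO language
]

def score_sms_lure(text: str, sender_is_registered: bool, mentions_recipient: bool) -> int:
    """Count red flags from the list above for a single SMS."""
    score = sum(bool(re.search(p, text, re.IGNORECASE)) for p in LURE_PATTERNS)
    if not sender_is_registered:   # no verified business sender ID
        score += 1
    if not mentions_recipient:     # generic, non-personalized text
        score += 1
    return score

msg = "Skyrocketing stock alert! 70% returns. Join our WhatsApp group now."
print(score_sms_lure(msg, sender_is_registered=False, mentions_recipient=False))  # → 5
```

A message hitting several of these signals at once would warrant blocking or at least a warning, which is consistent with the native spam classification the victims’ devices already applied.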
This was not the only distribution channel; other victims reported similar approaches through Google Ads, Telegram, and additional social platforms. Regardless of the platform, the objective remained consistent: move the victim from a public or semi-public channel into a private messaging environment, controlled and run by the attackers.
By directing victims into a WhatsApp group or similar private channel, the attackers move them into an environment where trust can be built gradually and skepticism can be managed. Once inside, the operation shifts away from direct persuasion and into a sustained confidence-building phase, using social proof, fabricated expertise and repeated validation.
Here’s where the show truly begins.
Evidence Snapshot (Entry)
- Impersonation outreach redirected victims into private messaging groups.
- Brand legitimacy claims later reappeared in platform materials and press-style content (see Appendix: Fabricated Media and Narrative Amplification).
Developing Trust: Inside the Fake Investment Community
Using automation and fully AI-generated content, the attackers construct a convincing investment reality around the victim — complete with fake experts, peers, market analysis, success stories, corporate backing, and regulatory references. Each element reinforces the others, gradually reducing skepticism and increasing emotional commitment.
Although victims were contacted and engaged across multiple platforms, our analysis focuses on the WhatsApp groups, where we had the most complete visibility into the trust-building phase of the campaign.
Once victims join the WhatsApp group, the campaign moves beyond phishing and into a sustained trust-building phase. What initially appears to be a simple investment discussion group is, in practice, a fully controlled, AI-generated environment, designed to simulate legitimacy, expertise and social validation.
The interaction in the WhatsApp group was conducted fluently in the victim’s native language (Hebrew). This native-language immersion lowers psychological defenses, increases perceived authenticity, and reinforces the illusion of a local, culturally familiar investment community.
At this stage, there is no technical exploitation. The attack is entirely psychological.
The “Experts” Running the Room
From the victim’s point of view, the group is led by experienced professionals who provide guidance and direction.
Two personas dominate the conversation:
Their profile images are AI-generated, and there is no evidence of real individuals with these names or roles on LinkedIn, in professional databases, or in public records.
They answer questions, publish analyses and maintain a confident tone, closely matching what victims expect from legitimate financial professionals.
To reinforce credibility, the group regularly publishes financial content that appears professional and institutional:
- Market outlooks and macro commentary
- Daily trade summaries
- Reports attributed to well-known banks
- “Institutional” trading insights
For victims, this creates the sense of learning from professionals rather than being sold to. The content is polished, structured, and authoritative.
A Fake Crowd That Never Disagrees
The WhatsApp group typically includes around 90 participants, creating the impression of a large and active trading community. This immediately establishes social proof: many people are present, engaged and seemingly benefiting from the program.
In reality, the group members are not real users:
- Phone numbers are unreachable and linked to VoIP services
- Caller ID databases show mismatched or unrelated identities
- Profile images have no online footprint
- Messages share similar structure, timing and tone
The group exhibits constant enthusiasm and agreement, with no visible doubt, disagreement or critical discussion.
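Uniform structure, timing and tone across supposed “members” can be surfaced with basic text-similarity checks. The following sketch uses token-set Jaccard similarity; the sample messages and the interpretation threshold are illustrative assumptions, not data from the actual group:

```python
def jaccard(a: str, b: str) -> float:
    """Token-set overlap between two messages (0 = disjoint, 1 = identical)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def avg_pairwise_similarity(messages: list[str]) -> float:
    """Mean Jaccard similarity over all message pairs; high values suggest templated output."""
    pairs = [(i, j) for i in range(len(messages)) for j in range(i + 1, len(messages))]
    if not pairs:
        return 0.0
    return sum(jaccard(messages[i], messages[j]) for i, j in pairs) / len(pairs)

# Illustrative "peer" messages echoing the templated enthusiasm described above.
templated = [
    "thanks coach i made great profit today",
    "thanks coach i made great profit again",
    "thanks coach great profit today too",
]
print(round(avg_pairwise_similarity(templated), 2))  # → 0.61
```

Organic group chat tends to produce much lower pairwise similarity; persistently high averages across dozens of accounts are one signal of centrally generated content.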
Beyond the public group, some members initiate private conversations with victims. These chats appear friendly and supportive: answering questions, presenting examples of their alleged profits and encouraging participation.
This creates the impression of independent confirmation from peers. In reality, this is automated astroturfing: bots simulating one-on-one validation to reinforce trust and reduce hesitation.
Daily “Wins” That Don’t Match Reality
One of the strongest trust signals is the daily presentation of successful trades. Each day, the group shares tables and summaries showing profits, followed by positive reactions from group members.

Closer inspection reveals clear red flags:
- Trades are always reported after the trading day ends
- No real-time trade alerts exist
- Some prices contradict historical market data
- Profit narratives follow identical templates
For example, the group reported a VTRS trade at a price the stock never reached on the claimed date. These are not trading results; they are fabricated performance indicators designed to trigger confidence and FOMO.
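Claims like this can be checked mechanically against market data: a fill price outside the session’s low/high range could not have executed. A minimal sketch (the prices below are hypothetical illustrations, not the actual VTRS figures):

```python
def plausible_fill(claimed_price: float, day_low: float, day_high: float) -> bool:
    """A reported fill outside the session's traded range could not have executed."""
    return day_low <= claimed_price <= day_high

# Hypothetical numbers for illustration; a real check would pull OHLC data
# for the claimed ticker and date from a market-data provider.
print(plausible_fill(claimed_price=14.90, day_low=11.02, day_high=11.47))  # → False
print(plausible_fill(claimed_price=11.20, day_low=11.02, day_high=11.47))  # → True
```

Even this trivial test exposes the fabricated “wins”, because the scammers generate profit narratives without validating them against real market conditions.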
Borrowed Legitimacy – Fake Partnerships
Throughout the interaction, the group repeatedly references:
- Collaboration with prominent financial firms
- Partnership with Oppenheimer Holdings
- An “institutional trading program”
- An AI-based trading model promising extreme returns
None of these claims are supported by public, corporate or regulatory evidence. Well-known financial brands are used solely to borrow credibility and lower suspicion.
Regulatory Appearance: A Company That “Looks Legitimate”
As part of the trust-building process, the “experts” introduce what appears to be a formally registered company operating under U.S. regulation.
Victims are presented with a company named OPCOPRO, described as a U.S. registered financial entity with international operations. To support this narrative, the group references:
- Company registration references
- Mentions of SEC filings
- Money Services Business (MSB) registration
- Frequent use of compliance terms
At a glance, these elements suggest regulatory oversight and corporate legitimacy. In practice, however, the records are self-reported, unverified, and do not authorize investment or trading activity. This distinction is never disclosed to victims.
Here, regulatory complexity itself becomes a tool of deception: by presenting fragments of real but meaningless compliance artifacts, the scammers create the appearance of legitimacy without any of its substance.
Manufactured Public Presence
To complete the illusion, victims are exposed to what looks like independent public validation:
- Multiple OPCOPRO websites with identical branding
- Press releases on syndication platforms
- Articles presented as financial news
These sources repeat the same claims, names and partnerships found in the WhatsApp group. The repetition across channels creates perceived legitimacy, even though all content originates from the same operation. This is coordinated narrative amplification, not real media coverage.
Crucially, this “public validation” still falls squarely in the realm of unverified third-party content. Most victims are unfamiliar with how easily these channels can be manufactured — often through simple paid placement — without meaningful editorial review or accountability for accuracy. For example, press releases distributed via syndication platforms can be published broadly, with limited responsibility for verifying the underlying claims, allowing scammers to project legitimacy at scale while avoiding real external content verification.
Evidence Snapshot (Public Presence)
- Multiple OPCOPRO-branded domains presented the same branding and claims (see Appendix: company websites).
- Press release-style publications repeated near-identical narratives across multiple sites (see Appendix: Fabricated Media and Narrative Amplification).
Summary: A Controlled Investment Reality
By the end of this stage, the victim is no longer evaluating individual claims. They are immersed in a fully constructed reality.
They see:
- Confident experts leading the discussion
- A large community that never questions success
- Continuous proof of profitable trades
- Technical explanations that sound institutional
- A company that appears registered, compliant, and publicly recognized
What began as a WhatsApp group becomes a Truman Show–like setting, where everything is part of a controlled script aimed at manipulating the victim’s emotions and trust. This engineered trust sets the conditions for the final stage of the scam: application installation and financial exploitation.
The Final Stage: From Trust to Exploitation
After weeks of sustained interaction, education and reinforcement, the scam transitions into its final and most critical phase.
Presenting the Trading Platform
Victims are told they are being granted access to an exclusive, institutional-grade AI trading platform. According to the group administrators, the system:
- Is used by large financial institutions
- Relies on quantitative and algorithmic trading strategies
- Can generate unusually high returns of 370%–700% within a few months
- Is available only to selected members of the group
At this stage, the offer no longer feels speculative. It is presented as the natural next step after weeks of preparation.
Why “It’s In The App Store” Doesn’t Mean Safe
To finalize the scam, victims are directed to install the O-PCOPRO app through official channels, which lowers their guard. However, official status does not equal safety:
- Google Play: Recently removed the application independently, a move that validates the threat but highlights the delay in store-side vetting (https://play.google.com/store/apps/details?id=com.yme.opcopro)
- Apple App Store: The application remains live and accessible to users today (https://apps.apple.com/us/app/o-pcopro/id6755084177).
Harmony Mobile’s advantage is proactive protection. App stores often respond after the fact, while our AI engines can help surface the broader malicious ecosystem and disrupt both the app and its supporting infrastructure. In this case, our detections came ahead of Google’s removal, and the app is still available on Apple’s App Store. For CISOs, the takeaway is that relying only on store vetting can create a protection gap.
App Store (iOS):

Google Play:

Distribution through official app stores plays a critical role in lowering suspicion. Many users equate store availability with legitimacy, a trust assumption the attackers deliberately exploit.
From the user’s perspective, the app appears professional: clean interface, financial terminology and references to institutional trading concepts. There are no immediate indicators that suggest malicious behavior.
KYC: The Identity Grab
Before trading is enabled, users are required to complete an identity verification process. This requires sensitive personal data, including:
- Full legal name
- Government-issued ID number
- A photo of the ID document
The process closely resembles standard KYC procedures used by legitimate financial platforms. In practice, it provides the attackers with high-value identity data collected at a moment of strong user trust.
This step also increases user commitment, making disengagement less likely.
Funding the Account
After verification, victims are instructed to deposit money or cryptocurrency. Supported methods include:
- Bank transfers – for fiat deposits
- Cryptocurrency payments – for crypto deposits
Once funds are transferred, the app displays balances, trade activity and apparent profits. These displays reinforce the perception of real trading activity and encourage continued engagement.
What Victims Lose
- Financial loss: deposits sent via bank transfer or crypto to attacker-controlled accounts, often pushed further by fake “profits.”
- Identity exposure: KYC-style collection of ID numbers and document photos, enabling identity theft or resale.
- Follow-on fraud: victims may be targeted again (e.g., “recovery” scams) using existing trust and details.
What the App Actually Does
From a technical perspective, the application itself is not a trading platform.
Our analysis shows that the app contains:
- No trading logic
- No market data processing
- No portfolio management functionality
Instead, it functions as a WebView wrapper that loads all content from a remote server controlled by the attackers. All charts, balances and trade results shown to the user can be generated or modified server-side at any time.
This design gives the operator full server-side control over what users see — including charts, balances, and “trade outcomes” — without implementing real trading functionality on-device.
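This thin-client pattern is itself detectable from static features of the app. A hedged sketch of such a heuristic follows; the feature names, thresholds, and sample values are illustrative assumptions, not the actual OPCOPRO measurements or a Harmony Mobile detection rule:

```python
from dataclasses import dataclass

@dataclass
class ApkFeatures:
    """Static features extracted from an app package (field names are illustrative)."""
    uses_webview: bool
    dex_method_count: int        # rough proxy for how much original code the app ships
    remote_entry_domain: str     # domain the shell loads at startup
    domain_age_days: int         # age of that domain's registration

def webview_shell_risk(f: ApkFeatures) -> bool:
    """Flag apps that are little more than a browser pointed at a young domain."""
    thin_client = f.uses_webview and f.dex_method_count < 5_000  # threshold is a guess
    young_infra = f.domain_age_days < 180
    return thin_client and young_infra

suspect = ApkFeatures(uses_webview=True, dex_method_count=1_200,
                      remote_entry_domain="example-trading.invalid", domain_age_days=45)
print(webview_shell_risk(suspect))  # → True
```

A WebView-first app is not malicious by itself; many legitimate apps are hybrids. The signal becomes meaningful only when correlated with the social funnel and the infrastructure described in the following sections.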
Evidence Snapshot (App & Infrastructure)
- The apps loaded platform content from an attacker-controlled backend domain (see Appendix: Backend Infrastructure).
- App hashes and recipient crypto wallets are provided for blocking and triage (see Appendix: IOCs).
How the App Fits into the Scam Infrastructure
The mobile app is the most important component of a larger, coordinated system. The same OPCOPRO branding, visuals, and messaging appear across:
- The mobile application
- Company websites
- Press materials


This consistency reinforces the perception of a single, legitimate organization.
A key element appears within the app itself: a “Cooperation Agreement” presented to users (attached in the Appendix: Cooperation Agreement). The document explicitly references OPCOPRO and repeats the claimed association with the financial institution.
While the agreement has no legal validity, it connects the technical platform with the legitimacy narrative and makes the impersonation appear formal and contractual.
Why This Stage Works
By the time victims reach this phase:
- Trust has already been established through repeated interaction.
- Authority has already been demonstrated.
- Perceived risk has been systematically reduced.
The application does not introduce the scam; it formalizes it.
The financial and identity theft occur not because of malicious code, but because victims are guided through a process that feels legitimate, familiar, and safe.
As such, despite containing no traditional malicious code, the application is inherently malicious by design, as it directly enables identity theft and financial exploitation.
AI as the Scam’s Force Multiplier
AI was a major force multiplier in this campaign — not because it replaced the scam infrastructure, but because it reduced the cost of running high-touch social engineering at scale. We assess that LLM-assisted automation enabled consistent personas, fluent multilingual messaging, and sustained engagement patterns that would be difficult to maintain with human operators alone.
AI-Driven Social Engineering at Scale
This campaign succeeds not because of technical sophistication, but because it carefully shapes how victims think, feel, and make decisions over time.
From the beginning, victims are exposed to figures presented as experienced professionals — confident, articulate and consistent. These “experts” explain market movements, answer questions and provide clear guidance, creating the impression that knowledge and control are coming from a trusted authority.
At the same time, victims are surrounded by what appears to be an active community. Other members ask the “right” questions, share apparent profits and express gratitude toward the group leaders. Seeing so many others participate, without expressing doubt, makes skepticism feel unnecessary and even irrational.
Importantly, this doesn’t stop at one language. The same AI-driven infrastructure allows the attackers to replicate the operation across regions, generating fluent, culturally adapted conversations in many different languages with minimal additional effort — a capability that would be impractical to sustain using human operators alone.
Rather than rushing to ask for money, the scammers maintain daily engagement for weeks. This slow friendly interaction builds emotional investment. Over time, leaving the group no longer feels like rejecting an offer, it feels like walking away from people who have already invested time and care.
Over time, victims stop making their own decisions. Trading decisions are framed as institutional, automated and professionally managed. Victims are told when to buy and sell, reducing the need — and the habit — of independent judgment.
Only after this environment is fully established do the attackers introduce extraordinary profit opportunities. At this point, the fear of missing out is amplified by staged success stories and daily “results”, making hesitation feel like a personal mistake rather than a rational decision.
By the time money is requested, the victim is no longer evaluating risk; they are acting inside a social reality that feels legitimate, validated, and safe.
Maintaining this level of realism manually would require a large, disciplined human workforce. AI removes that constraint entirely.
By automating conversations, persona behavior and emotional reinforcement, the attackers transformed what was once a labor-intensive scam into a continuously running system. The result is a persistent, highly responsive operation that adapts to victims in real time, without fatigue, inconsistency or human error.
This turns social engineering into a system: automated, scalable, persistent, and able to run in almost any language.
AI in Infrastructure and Asset Generation
The use of AI is also evident in the technical and visual layers of the campaign.
The mobile application itself is extremely lightweight and generic, built around a minimal WebView template with little original logic. Several permissions appear unused or loosely justified, suggesting the app was assembled using a reusable scaffold rather than developed as a genuine product. This copy-paste structure aligns with rapid, template-driven generation rather than bespoke engineering.
Similarly, the OPCOPRO websites exhibit classic AI-generated characteristics:
- Overlapping and repetitive content across multiple domains
- Polished but generic marketing language
- AI-generated profile images not only for the main “experts”, but also for secondary personas
- Extensive navigation menus (advertising institutional services and legal documentation) where links are non-functional and do not resolve to any pages
These assets appear designed to maximize perceived legitimacy while minimizing development effort — a pattern consistent with AI-assisted content creation pipelines.
From Scams to Systems
Taken together, this campaign demonstrates a structural shift in online fraud.
AI does not merely improve existing scam techniques — it industrializes them. Fraud becomes:
- Modular
- Repeatable
- Language-agnostic
- Rapidly redeployable across regions and platforms
This marks a transition from individual scam operators to AI-enabled fraud ecosystems, where trust, infrastructure and manipulation are all generated on demand.
The Limits of Today’s LLMs — and Why Those Limits Won’t Last
Despite the sophistication of this campaign, it still exhibits artifacts that reveal the involvement of large language models.
Detectable AI Artifacts
Across multiple layers of the operation, we observed inconsistencies characteristic of current-generation LLMs:
- Language slips and mixed scripts
Messages occasionally switch languages mid-sentence or include misplaced words from unrelated languages (e.g., Arabic or English fragments embedded in Hebrew text). These token-level anomalies are consistent with multilingual model confusion, not human typing behavior.
- Over-formal or unnatural phrasing
The WhatsApp conversations lack common human imperfections such as typos, slang, or interrupted thought patterns. Messages are consistently well-structured, emotionally calibrated and delivered instantly — a strong indicator of automated generation.
- Hallucinated or inaccurate content
Some market claims and trade details contradict real-world data, suggesting AI-generated financial narratives that were never validated against actual market conditions.
- Template-driven technical artifacts
The mobile application and websites show signs of rapid, scaffold-based construction: unused permissions, generic UI flows, duplicated content blocks and overlapping visual assets. These are consistent with AI-assisted or low-effort automated generation rather than deliberate engineering.
- Synthetic imagery tells
Profile images and backgrounds exhibit subtle irregularities commonly associated with AI-generated visuals, including unnatural lighting, facial symmetry issues and complete absence of any historical footprint online.
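The first artifact class, mixed scripts inside a single message, is straightforward to detect programmatically. A minimal sketch using Unicode character names (a rough heuristic, not a full script classifier; the sample message is an invented illustration of the slips described above):

```python
import unicodedata

def scripts_in(text: str) -> set[str]:
    """Rough script detection via Unicode character names (sketch, not a full classifier)."""
    found = set()
    for ch in text:
        if ch.isalpha():
            name = unicodedata.name(ch, "")
            for script in ("HEBREW", "ARABIC", "LATIN", "CYRILLIC"):
                if name.startswith(script):
                    found.add(script)
    return found

# A Hebrew sentence with a stray Arabic word and English fragment,
# the kind of token-level slip observed in the group messages.
mixed = "שלום חברים توقعات the market"
print(sorted(scripts_in(mixed)))  # → ['ARABIC', 'HEBREW', 'LATIN']
```

Multilingual humans mix languages too, so this is only a weak signal on its own; it gains weight when the mixing appears mid-sentence, repeatedly, across many supposedly unrelated accounts.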
These artifacts are not mistakes by the attackers, but limitations of the technology available today.
As these capabilities mature, the artifacts that currently expose AI-assisted scams will become far less visible — or vanish entirely.
At that point, the distinction between a legitimate digital business and a fabricated one may no longer be apparent at the surface level.
What This Means for Defenders
This changes the core problem defenders have to solve.
Traditional red flags — poor language, sloppy infrastructure, crude impersonation — are becoming unreliable indicators. Instead, defenders will increasingly need to:
- Correlate behavior across platforms.
- Analyze social dynamics, not just content.
- Examine inconsistencies between claimed legitimacy and verifiable reality.
- Detect and analyze generative AI traffic and its context.
This campaign is not an outlier. It is an early signal of how AI-enabled fraud will evolve, and a reminder that what is detectable today may be invisible tomorrow.
Detection & Protection
Even apps distributed via official stores (Google Play / App Store) can be malicious by design when they act as gateways into a broader fraud ecosystem. Defending against this class of threat requires correlating social behavior + app behavior + infrastructure, not only code inspection.
For Individuals
Red flags
- Unsolicited “investment” messages that quickly move you to WhatsApp/Telegram
- A “community” with scripted positivity, no dissent, and members DM’ing you to reassure you
- Extreme/guaranteed returns, “exclusive access,” or time pressure
- “Legitimacy” built via repeated press releases / lookalike news links
- Apps that behave like websites (WebView-heavy) and request KYC/ID uploads
Safer actions
- Verify any broker/company via official regulator sources (not chat links).
- Don’t send crypto to wallets shared in chats; treat it as irreversible.
- If you shared ID docs or paid: report to the platform + bank/exchange and monitor for identity misuse.
For Enterprises & Defenders
- Apply higher scrutiny to new finance/trading apps, especially WebView-first apps tied to newly registered domains.
- Correlate app backend domains with related brand sites/press content and domain clustering patterns.
- Treat “messaging-group → install app → deposit/KYC” funnels as high risk even if the app is in official stores.
- Create a fast internal reporting path for employees, and triage for identity exposure + financial loss scenarios.
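The enterprise guidance above amounts to correlating signals across the social, app, and infrastructure layers. A hedged sketch of such a triage policy follows; the signal names, weights, and thresholds are invented for illustration, not a shipped detection rule:

```python
# Signals drawn from the layers discussed above; weights are illustrative only.
SIGNALS = {
    "messaging_group_referral": 2,   # install driven from a WhatsApp/Telegram funnel
    "webview_first_app": 2,          # thin client rendering remote content
    "newly_registered_backend": 2,   # young domain serving the "platform"
    "kyc_upload_requested": 1,       # ID documents collected in-app
    "deposit_to_chat_wallet": 3,     # funds sent to a wallet shared in chat
}

def funnel_risk(observed: set[str]) -> str:
    """Map the combined signal score to a triage decision."""
    score = sum(SIGNALS[s] for s in observed)
    if score >= 6:
        return "block"
    if score >= 3:
        return "investigate"
    return "monitor"

print(funnel_risk({"messaging_group_referral", "webview_first_app",
                   "newly_registered_backend"}))  # → "block"
```

The point of the sketch is that no single signal is conclusive; the “messaging group → install app → deposit/KYC” funnel only becomes high-confidence when several layers agree, which is exactly why code-only inspection missed this campaign.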
From “Personal Scam” to Corporate Breach: The CISO’s Nightmare
While the Truman Show Scam appears to target the individual’s bank account, its true danger to the enterprise lies in the total compromise of the employee’s digital identity. For a CISO, an employee trapped in this synthetic reality isn’t just a fraud victim; they are a high-risk entry point into the corporate network.
The transition from a personal scam to an organizational breach follows a devastatingly effective path:
1. The KYC Trap – a Possible Security Breach
To “verify” their trading account, the employee uploads a high-resolution photo of their government ID and a “liveness” selfie.
- The Corporate Risk: Armed with these high-quality identity assets, the attacker no longer needs to “guess” security questions. They can call your IT Help Desk or the employee’s Mobile Carrier, posing as the victim with “proof” of identity to request a SIM swap or a password reset.
- Technical Impact: This effectively bypasses two-factor authentication (2FA). Once the attacker controls the phone number or the primary device, they can intercept SMS codes or push notifications to access corporate SaaS applications (email, Slack, Salesforce) and VPNs.
2. Financial Coercion and the “Accidental” Insider Threat
The “Truman Show” environment is designed to create a cycle of “wins” followed by sudden “liquidity crises” where the victim is told they owe money to the platform to “unlock” their profits.
- The Corporate Risk: An employee under extreme financial duress or being blackmailed with the sensitive personal data they provided (KYC) is a prime target for socially engineered insider threats.
- The Scenario: The “expert” (AI persona) may offer to “forgive the debt” if the employee performs a “simple task”—such as downloading a “diagnostic tool” (malware) on a work laptop or sharing a single internal document.
3. Mobile as the Weakest Link
The OPCOPRO app, while technically a “WebView wrapper,” still creates ongoing risk on the device because it maintains a trusted, persistent presence. Since it is installed through an official store, it may not be flagged by baseline mobile device management (MDM) checks that focus on jailbreak indicators or known-malware signatures.
- Technical Impact: If the employee uses the same device for work (BYOD) or reuses passwords between the scam app and corporate portals, the attackers can exploit the victim’s attention and habits rather than the device’s OS. The app becomes a reliable delivery mechanism for urgent, credible-looking notifications that drive the victim to take risky actions (clicking links, entering credentials, sharing MFA codes, or installing “helper” tools), creating a practical pathway from personal fraud into corporate compromise.
4. Use Case: The Help Desk Hijack
The Scenario: An executive at a Fortune 500 company falls for the OPCOPRO scam, and the attackers harvest his ID. They wait until 2:00 AM on a Sunday, call the company’s outsourced IT support, and claim to have lost their phone while traveling, providing the stolen ID scan as “proof.” The Help Desk, following protocol, resets the executive’s MFA and registers the attacker’s device. Within minutes, the attacker has moved from a “personal scam” to the corporate OneDrive, exfiltrating sensitive M&A documents.
How Harmony Mobile Can Be the Answer
Traditional mobile security usually looks only for “known bad” code. But the Truman Show scam uses “known good” infrastructure (official stores, legitimate-looking WebViews) to perform “known bad” social engineering.
Harmony Mobile protects your organization where others fall short – not only by detecting independent behavioral signals, but also by:
- Vetting the ecosystem, not just the app: Identifying the malicious intent of an app by correlating it with newly registered domains and known fraudulent infrastructure.
- Anti-phishing: Blocking the initial SMS and messaging lures before the employee even enters the “Truman Show” environment.
- Behavioral risk scoring: Detecting when a “benign” app starts behaving as a gateway for credential harvesting or identity theft.
For the CISO, the goal isn’t just protecting the employee’s wallet – it’s preventing a personal lapse in judgment from becoming a headline-making corporate breach.
Conclusion
The OPCOPRO operation is a fully synthetic, AI-powered financial scam built around a victim like a personalized Truman Show. Every part of the experience — the experts, the group members, the profits, the media coverage, the company, the apps — is fake.
The significance of this campaign is not only the harm caused to its victims, but what it signals about the future threat landscape.
As AI continues to lower the cost of producing convincing identities, content and software, scams will increasingly resemble legitimate digital businesses — complete with apps, websites, media coverage and regulatory-looking artifacts.
Detecting such operations will require defenders to look beyond individual components and analyze how social behavior, infrastructure and technical design converge.
This is not just a scam — it is a new model of cyber fraud, leveraging modern AI to automate trust-building, manipulation and operational execution.
Appendix
Identified Infrastructure and Digital Assets
This appendix lists the primary digital assets associated with the OPCOPRO operation, identified during the investigation. All items below were active or referenced during the campaign timeframe and are presented for transparency and verification purposes.
Company Websites (OPCOPRO brand domains)
The following domains were observed presenting OPCOPRO branding, claims of regulatory compliance and investment-related services:
- https[:]//opcopro[.]com
- https[:]//opcoprog[.]com
- https[:]//opcoprox[.]com
- https[:]//opcoproy[.]com
- https[:]//opcoprov[.]com
All domains were registered within a narrow time window (August–December 2025), used low-cost registrars, obscured registrant details via privacy services, and were hosted behind Cloudflare, masking the origin infrastructure.
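The clustering of registration dates is itself a useful triage heuristic. Below is a minimal sketch of that check; the per-domain dates are illustrative placeholders (the report establishes only the August–December 2025 window, not individual WHOIS records), and the thresholds are arbitrary tuning knobs:

```python
from datetime import date

# Illustrative placeholder dates, NOT actual WHOIS records for these domains
registrations = {
    "opcopro.com":  date(2025, 8, 15),
    "opcoprog.com": date(2025, 9, 2),
    "opcoprox.com": date(2025, 10, 20),
    "opcoproy.com": date(2025, 11, 5),
    "opcoprov.com": date(2025, 12, 1),
}

def registration_window_days(dates):
    """Span in days between the earliest and latest registration."""
    return (max(dates) - min(dates)).days

def looks_coordinated(domains, max_window_days=180, min_count=3):
    """Flag a domain set whose registrations cluster in a narrow window."""
    dates = list(domains.values())
    return len(dates) >= min_count and registration_window_days(dates) <= max_window_days

print(looks_coordinated(registrations))  # True
```

In practice the dates would come from a WHOIS or RDAP lookup, combined with the other signals noted above (low-cost registrars, privacy-shielded registrants, shared CDN fronting).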
Backend Infrastructure (in-app WebView content server)
Analysis of the mobile applications revealed a hard-coded backend domain used to render all in-app content via WebView:
- https[:]//jshlshaushdisk[.]com
All user-visible data (balances, charts, trade results) was delivered dynamically from this backend, enabling full server-side control of the displayed “trading activity”.
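Hard-coded backend domains of this kind can often be recovered by scanning the string table of a decoded app binary. A minimal sketch, run here against simulated binary bytes rather than the actual APK/IPA:

```python
import re

# Simulated bytes from a decompiled app binary (illustrative, not the real file)
blob = (b"\x00https://jshlshaushdisk.com/api/v1/home\x00"
        b"\x00wss://jshlshaushdisk.com/quotes\x00")

# URL schemes followed by a domain; case-insensitive, operating on raw bytes
DOMAIN_RE = re.compile(rb"(?:https?|wss?)://([a-z0-9.-]+\.[a-z]{2,})", re.IGNORECASE)

def hardcoded_domains(data: bytes) -> set:
    """Extract domains embedded in URL strings inside a binary blob."""
    return {m.group(1).decode().lower() for m in DOMAIN_RE.finditer(data)}

print(sorted(hardcoded_domains(blob)))  # ['jshlshaushdisk.com']
```

Any domain surfaced this way can then be pivoted on: registration age, certificate history, and overlap with the brand domains listed above.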
Fabricated Media and Narrative Amplification
To reinforce perceived legitimacy, the operation relied on coordinated publication of press-style content across multiple syndication platforms. These sources presented OPCOPRO as a legitimate financial entity and repeated claims of partnerships with well-known institutions.
Observed platforms included, but were not limited to:
- Digital Journal – https://www.digitaljournal.com/pr/news/insights-news-wire/joint-market-study-two-major-1693772806.html#:~:text=Meeting%20Highlights
- Digital Journal #2 – https://www.digitaljournal.com/pr/news/binary-news-network/opco-announces-institutional-trading-program-17424677.html
- Financial-news – https://www.financial-news.co.uk/joint-market-study-by-two-major-institutions-reveals-institutional-trading-program-and-rwa-innovations
- Submit My PR – https://newsroom.submitmypressrelease.com/2025/11/19/opco-and-gs-announce-joint-institutional-trading-program-and-rwa-innovation-initiative_1924062.html
- Open PR – https://www.openpr.com/news/4275872/opco-announces-an-institutional-trading-program-and-rwa-based#:~:text=Email%3A%20Send%20Email%20%5Bhttps%3A%2F%2Fdashboard.kingnewswire.com%2Frelease
These platforms function as press-release hosting services, not independent news outlets. Articles typically included disclaimers stating that content was not verified or editorially reviewed. Identical phrasing, recurring personas, and consistent narratives were observed across multiple sites, indicating centralized content generation rather than independent reporting.
No coverage was identified in reputable financial media, regulatory publications, or mainstream news outlets.
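The reuse of near-identical phrasing across outlets can be surfaced with a simple text-similarity check. A minimal sketch using word-trigram Jaccard similarity; the two excerpts are illustrative paraphrases, not verbatim quotes from the articles above:

```python
def shingles(text: str, n: int = 3) -> set:
    """Word n-grams, lowercased; a crude fingerprint of phrasing."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Overlap of two shingle sets; 1.0 means identical phrasing."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Illustrative excerpts modeled on the recurring press-release language
pr_a = ("OPCOPRO announces an institutional trading program and RWA innovation "
        "initiative in partnership with a major financial institution")
pr_b = ("OPCOPRO announces an institutional trading program and RWA innovation "
        "initiative together with a leading financial institution")

score = jaccard(shingles(pr_a), shingles(pr_b))
print(round(score, 2))
```

Scores well above what independent reporting on the same event would produce are a strong hint of centralized content generation, consistent with the observations above.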
Cooperation Agreement (User-Facing Legitimacy Prop)
A cooperation agreement file was presented to victims inside the mobile applications to reinforce claims of institutional partnerships. The document carries no verifiable legal standing or trading authorization and appears to be part of the narrative amplification used to suppress skepticism and to formalize identity-verification and deposit requests. The full file is included below for transparency, as it appeared to victims during the campaign.


IOCs
Mobile Applications
Android:
38eb3c10ff67f0a0a58a5ac1f606fb3357e04d79724fa6eed800702966310392
iOS:
5967c264a077c3baf1e62bab0d60e198a765f3153a672a19ebbb0ee1574380ba
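Defenders can match local files against these hashes with standard tooling; a minimal Python sketch (the sample bytes here are arbitrary, not the real app packages):

```python
import hashlib

# SHA-256 IOCs listed in this report
KNOWN_BAD = {
    "38eb3c10ff67f0a0a58a5ac1f606fb3357e04d79724fa6eed800702966310392",  # Android
    "5967c264a077c3baf1e62bab0d60e198a765f3153a672a19ebbb0ee1574380ba",  # iOS
}

def sha256_of(data: bytes) -> str:
    """Hex SHA-256 digest of raw file bytes."""
    return hashlib.sha256(data).hexdigest()

def is_known_ioc(data: bytes) -> bool:
    """True if the file's digest matches one of the IOC hashes."""
    return sha256_of(data) in KNOWN_BAD

# Arbitrary sample bytes, not the real APK/IPA
print(is_known_ioc(b"benign sample"))  # False
```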
Crypto Wallets
- USDC-ERC20: 0xeba87a21a638bc47c457c185f17a00e10869ff49
- USDT-TRC20: TL92U2K2AiuX7D4txSuoGL8SZ1PYzHjiMh
- USDT-ERC20: 0x259e4916177d5529877cce749fcffd3566d9a1c8
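When hunting for these addresses in logs or transaction feeds, simple format checks help separate Ethereum-style and TRON addresses before any chain lookup. A minimal sketch (format matching only; no checksum or on-chain validation):

```python
import re

# Ethereum-style address: 0x followed by 40 hex characters (USDC/USDT ERC-20)
ERC20_RE = re.compile(r"^0x[0-9a-fA-F]{40}$")
# TRON mainnet address: 'T' followed by 33 base58 characters (USDT TRC-20)
TRON_RE = re.compile(r"^T[1-9A-HJ-NP-Za-km-z]{33}$")

iocs = [
    "0xeba87a21a638bc47c457c185f17a00e10869ff49",
    "TL92U2K2AiuX7D4txSuoGL8SZ1PYzHjiMh",
    "0x259e4916177d5529877cce749fcffd3566d9a1c8",
]

def classify(addr: str) -> str:
    """Rough chain classification by address format alone."""
    if ERC20_RE.match(addr):
        return "ethereum"
    if TRON_RE.match(addr):
        return "tron"
    return "unknown"

print([classify(a) for a in iocs])  # ['ethereum', 'tron', 'ethereum']
```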



