Why Multi-Source Signal Aggregation Beats Single-Source Analysis
If you only watch Reddit, you will see pump-and-dump schemes that look like organic momentum. If you only watch SEC filings, you will miss the social catalysts that drive short-term price action. If you only watch volume, you will catch moves after they have already started but lack the context of why they are happening. Single-source analysis is inherently limited because each data source has its own biases, blind spots, and failure modes. Multi-source aggregation overcomes these limitations by requiring convergence across independent channels.
Independence matters
The value of multi-source detection comes from source independence. Reddit users posting about a ticker do not know what SEC insiders are filing. Congressional trade disclosures are published independently of options market activity. Volume spikes are driven by actual order flow, not social media posts. When multiple independent sources converge on the same ticker, the probability of a real catalyst rises dramatically — because coordinating a fake signal across all of these channels simultaneously is nearly impossible.
Source weights encode reliability
Not all sources contribute equally to a signal's aggregate score. SignalScope assigns weights based on historical predictive value and the difficulty of manipulation. SEC insider purchases carry the highest weight (3.0x) because they represent verified, real-money transactions from people with deep knowledge of the company. Options flow and congressional trades carry 2.5x weight. Volume spikes carry 2.0x. Social media sources — Reddit, X/Twitter, StockTwits — carry 1.0-1.2x weight. These weights mean that a single insider filing contributes as much as three Reddit posts, reflecting the relative signal quality.
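As an illustration of how those weights combine, here is a minimal sketch of weighted aggregation. The weight values come from the text above; the source names and the function itself are hypothetical, not SignalScope's actual code.

```python
# Hypothetical weight table; the multipliers match the ones stated above,
# but the source keys and scoring function are illustrative only.
SOURCE_WEIGHTS = {
    "sec_insider": 3.0,
    "options_flow": 2.5,
    "congress": 2.5,
    "volume_spike": 2.0,
    "stocktwits": 1.2,
    "twitter": 1.1,
    "reddit": 1.0,
}

def aggregate_score(mentions):
    """Sum the weight of each mention of a ticker within one scan."""
    return sum(SOURCE_WEIGHTS[source] for source in mentions)

# One verified insider purchase carries the same weight as three Reddit posts:
print(aggregate_score(["sec_insider"]))                 # 3.0
print(aggregate_score(["reddit", "reddit", "reddit"]))  # 3.0
```

The equal scores in the example are exactly the "one insider filing equals three Reddit posts" trade-off described above.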
The candidacy threshold
Raw mentions flood in from all seven sources on every scan. Most are noise. The aggregation step applies a candidacy threshold: a ticker must appear at least twice from a single source, appear in at least two different sources, or come from a high-value source (SEC Insider, Congress, Volume Spike, Options Flow) to qualify for AI scoring. This single step eliminates the vast majority of one-off mentions and social media noise, focusing AI evaluation on tickers with meaningful signal density.
Velocity and momentum
Beyond simple counts, aggregation tracks signal velocity — how quickly mentions are accumulating — and cross-scan momentum — whether a ticker's signal strength is growing or fading over time. A ticker that appeared in 2 sources yesterday and 5 sources today is trending differently from one that went from 5 sources to 2. Velocity feeds into AI scoring and stage assignments: high velocity with rising cross-scan appearances can push a ticker from Emerging to Building stage, indicating growing market interest.
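The 2-to-5 versus 5-to-2 contrast above can be captured with a simple cross-scan delta. This is a hedged sketch: the function name and the exact velocity formula are assumptions, and the real stage-assignment rules are not shown.

```python
def source_velocity(history):
    """history: count of distinct sources per scan, oldest first.
    A naive velocity: the change between the two most recent scans."""
    if len(history) < 2:
        return 0  # not enough scans to measure a trend
    return history[-1] - history[-2]

# Rising signal: 2 sources yesterday, 5 today -> positive velocity
print(source_velocity([2, 5]))   # 3
# Fading signal: 5 sources yesterday, 2 today -> negative velocity
print(source_velocity([5, 2]))   # -3
```

In a fuller system the sign and magnitude of this delta would be one of the inputs that nudges a ticker between stages such as Emerging and Building.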
Anti-manipulation by design
Multi-source aggregation is inherently resistant to manipulation. A bad actor can flood Reddit with ticker mentions, buy StockTwits followers, or run coordinated Twitter campaigns. But they cannot fake an SEC insider filing, create a real congressional trade disclosure, or generate actual volume on exchanges. By requiring corroboration from sources with different incentive structures and different manipulation costs, the aggregation step filters out the majority of pump-and-dump schemes before AI scoring even begins. The 13-flag P&D filter catches the rest.
The result: a prioritized watchlist
After aggregation, AI scoring, and P&D filtering, what remains is a prioritized watchlist of tickers with genuine multi-source backing. Each ticker comes with an AI confidence score reflecting evidence strength, an Opportunity score reflecting early-mover potential, a signal stage indicating conviction level, source breakdown showing exactly where the signals came from, and on-demand AI reports with trade setups. This is the output of the entire pipeline: not a flood of mentions, but a curated set of candidates worth investigating further.