Our Methodology
Non-Obvious Metrics That Reveal Obvious Distress
Numbers tell a clear story, but they don’t always tell the full story.
Everyone talks about the losses from FTX, Luna, Berachain, Movement Labs, and Axie Infinity. Few discuss the gains realised before collapse. Even fewer understand why those collapses were predictable months before the capital structure imploded.
Distress doesn’t always manifest as obvious financial struggle or quantifiable bankruptcy metrics. Sometimes it reveals itself in the number of high-profile partnership announcements that churn within a month. Sometimes it’s visible in how individual team members throw shade at their employer on social channels. Sometimes it’s just opening five competing apps in different tabs and immediately knowing which team actually understands their market.
The following is a high-level breakdown of the key pillars we look at when assessing distress. To see how the methodology gets applied in practice, please email shadow@arasakalabs.io to request access to our confidential resources.
The Five Pillars of Distress
1. Audience Fragility
Crypto startups consistently fall into the same trap: pumping numbers early to facilitate exchange listings, KOL partnerships, and token generation events. Projects routinely accumulate tens or hundreds of thousands of followers, airdrop farmers, and community members - but only a fraction translates into monetisation, customer lifetime value, or acquisition value.
The airdrop farming industry has professionalised to an extreme degree. Protocols now deploy machine learning systems to detect multi-wallet Sybil behavior through on-chain pattern analysis. The ecosystem distributed approximately $4 billion in airdrops in 2024 alone, but the “meta is overheated” - entire platforms like Kaito have only poured fuel on the fire, rewarding garbage metrics over genuine engagement.
Projects have responded by shifting from snapshot-based airdrops to point systems that reward duration and consistency of engagement. A user active for 20 months is rewarded more than someone who appeared three months before TGE. But this creates a new problem: farmers simply run longer campaigns, still generating inflated pre-snapshot activity that collapses post-distribution. Only about 20% of activity-based interactions (a rough estimate) translate into genuine long-term engagement.
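To make the mechanics concrete, below is a minimal sketch of how such a duration-weighted point system might score wallets. The scoring function and the sample data are illustrative assumptions, not any specific protocol’s implementation.

```python
from datetime import date

# Hypothetical sketch of a duration-weighted point system. Snapshot-based
# airdrops reward a single moment of activity; point systems like those
# described above reward tenure and consistency instead.

def engagement_points(active_months: list[date], tge: date) -> float:
    """Score a wallet by how long and how consistently it was active before TGE."""
    months = [m for m in active_months if m < tge]
    if not months:
        return 0.0
    first = min(months)
    tenure = (tge.year - first.year) * 12 + (tge.month - first.month)
    # Fraction of the tenure window in which the wallet was actually active.
    consistency = len({(m.year, m.month) for m in months}) / max(tenure, 1)
    return tenure * consistency

tge = date(2024, 9, 1)
veteran = [date(2023, m, 1) for m in range(1, 13)] + [date(2024, m, 1) for m in range(1, 9)]
farmer = [date(2024, m, 1) for m in range(6, 9)]

print(engagement_points(veteran, tge))  # 20.0 - active for 20 consistent months
print(engagement_points(farmer, tge))   # 3.0  - appeared three months before TGE
```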
Assessing Audience Strength
The critical distinction is between extractive participation and genuine engagement. Look for:
Unpaid technical engagement: Developer groups, students, or hobbyists committing significant hours to building with the product without financial incentive. This is one of the strongest signals of product-market fit in infrastructure.
Active feedback loops: Community members who provide detailed feedback, generate word-of-mouth value, and engage substantively with updates - not just “wen TGE” comments and generic hype.
Monetisation predictability: If a buyer acquired this company, would the audience translate into predictable revenue over a 3+ year horizon? Or would acquisition merely inherit a captive audience waiting for their airdrop exit?
Pivot resilience: If product direction changes, does the community possess enough engagement depth to be re-educated and re-activated? Or is the audience so narrowly focused on the original narrative that any strategic shift triggers immediate abandonment and confusion?
Put yourself in the buyer's shoes. When you acquire a company, you’re buying the ability to reach and monetise an audience. An audience that exists solely to extract airdrop value provides no acquisition premium. An audience that builds, provides feedback, and demonstrates loyalty through market cycles represents genuine strategic value.
Communities that survive the first year post-mainnet release without continuous token incentives have passed a natural selection filter that artificial engagement cannot replicate.
2. Deployment Gap
The deployment gap refers to the barrier between initial customer interest and meaningful production integration. Some B2B products encounter minimal friction: Stripe, Chainlink, and Alchemy provide business-critical infrastructure that integrates quickly and becomes difficult to remove. Other products face prohibitively large deployment gaps despite theoretical value propositions.
Mina (zk), Brevis (co-processor), and Clique (TEE) exemplify this problem. On paper, they unlock new design space for decentralised applications. In practice, the barrier to entry, integration costs, switching friction, and security considerations are often prohibitively high.
To understand deployment gap dynamics, let’s look at some key details from a case study in which our founder was previously involved. The subject was Chainsight, an oracle platform designed to convert any API endpoint into an on-chain feed for DeFi applications. The platform used threshold ECDSA with distributed nodes, targeting applications requiring custom data beyond standard price oracles.
The Case Study at a Glance
The engagement process followed a typical funnel approach. Initial conversations explored use cases and technical fit. Promising discussions advanced to steering documents - technical specifications that validated integration feasibility before committing to pilots. These steering documents bridged customer problems and product capabilities with enough detail to surface real constraints.
From steering documents, formal proposals outlined the implementation approach, timelines, and resource requirements. Proposals that survived stakeholder review proceeded to quotations with commercial terms. Finally, a subset was converted to actual customer relationships.
The Results
Over 18 months of engagement:
500+ projects engaged for use case exploration
150 steering documents produced to validate technical feasibility
50 formal proposals developed from steering documents
30 quotations generated from proposals
12 customers acquired pre-pivot
3 customers retained post-pivot
That final conversion represents a 75% customer churn rate when the product pivoted, and a 0.6% conversion rate from initial engagement to retained customer. This isn’t incompetence - it’s what deployment gap friction looks like when quantified.
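The arithmetic behind those figures is worth making explicit. A short sketch that reproduces the conversion numbers from the raw funnel counts:

```python
# Reproducing the case study's funnel conversion figures.
funnel = {
    "projects_engaged": 500,
    "steering_documents": 150,
    "formal_proposals": 50,
    "quotations": 30,
    "customers_pre_pivot": 12,
    "customers_post_pivot": 3,
}

stages = list(funnel.items())
for (prev_name, prev_count), (name, count) in zip(stages, stages[1:]):
    print(f"{prev_name} -> {name}: {count / prev_count:.0%}")

churn = 1 - funnel["customers_post_pivot"] / funnel["customers_pre_pivot"]
end_to_end = funnel["customers_post_pivot"] / funnel["projects_engaged"]
print(f"post-pivot churn: {churn:.0%}")            # 75%
print(f"end-to-end conversion: {end_to_end:.1%}")  # 0.6%
```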
Critical Bottlenecks Identified
This engagement revealed three fundamental problems:
Customer Acquisition Timing: Oracle integrations required alignment with audit schedules and release windows, creating 3-6 month implementation delays. By the time legal, security, and governance approvals were completed, the original business case had often become outdated.
Market Adoption Patterns: Current DeFi applications achieve core functionality using commoditised price feeds. The custom data oracle use case remains theoretically attractive but practically unnecessary for hitting the KPIs that investors and communities actually measure.
Incentive Misalignment: Projects could satisfy investor and community metrics (TVL, volume, partnership announcements) without actually implementing novel infrastructure. The gap between partnership press release and production integration became indefinite.
The Conceptual Validation Trap
The most dangerous pattern: a positive conceptual reception without a commitment to implementation. Prospects enthusiastically discuss use cases, provide detailed feedback on technical specifications, and express strong interest in pilots. Then engineering resources never materialise for actual integration.
Despite demonstrated interest from hundreds of projects, zero protocols built proof-of-concepts independently. The pattern repeated: conceptual validation followed by indefinite deployment timelines once engineering teams evaluated the actual integration work required.
Assessment Framework
When evaluating deployment gap risk, consider:
Switching Cost Reality: How much actual engineering effort does integration require? Be specific about smart contract modifications, security audits, testing cycles, and deployment coordination.
Stakeholder Complexity: How many parties need to approve this change? Count not just executives but also security teams, existing audit firms, DAO governance voters, and community stakeholders who might view changes as risky.
Time Horizon Compression: How long can you wait for customers to actually integrate vs. how long before you run out of runway? The deployment gap can exceed the company's survival timeline.
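The time-horizon question lends itself to a back-of-the-envelope check. A minimal sketch, where every input is a hypothetical placeholder and integration cycles are assumed to run sequentially:

```python
# Can the company outlive its own deployment gap? All figures are
# hypothetical placeholders; cycles are assumed to run back to back.

def survives_deployment_gap(
    runway_months: float,
    integration_months: float,  # the engineering work itself
    approval_months: float,     # audits, governance, legal sign-off
    integrations_needed: int,   # customers required to reach sustainability
) -> bool:
    months_to_revenue = (integration_months + approval_months) * integrations_needed
    return runway_months > months_to_revenue

# An 18-month runway against the 3-6 month cycles described above:
print(survives_deployment_gap(18, 3, 3, 2))  # True  - survives, barely
print(survives_deployment_gap(18, 3, 3, 4))  # False - the gap outlasts the runway
```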
Evidence gathering over 18 months generates intelligence that weeks of traditional due diligence cannot replicate. Multiple deployment attempts reveal whether the market genuinely needs your solution or just conceptually appreciates it.
3. Volatile Team Dynamics
Brand longevity does not necessarily equate to business stability. Watch for companies whose name has existed for years while their actual product has pivoted more times than you can count. Adaptability is a good thing, but narrative chasing can be seen from a mile away.
Research on startup pivots reveals consistent patterns. While many successful startups pivot at least once, execution matters more than appearing adaptable. Each pivot requires conception, design, build, launch, and metric gathering. Changing direction too frequently creates unfocused, unspecialised teams shipping half-baked ideas, and product-market fit never gets time to develop.
The Excessive Pivot Problem
Multiple failed pivots without traction indicate deeper issues with the team or the approach. The pattern typically manifests as:
Avoidance behavior: Teams repeatedly pivoting right when it’s time to start selling. This pattern of avoiding difficult aspects of building a business (sales, customer conversations, hard feedback) by constantly switching ideas is a critical red flag.
Sunk cost confusion: Most founders quit too late because they confuse persistence with denial. The runway creates an illusion that the ship can still be turned around until money and time literally run out.
FOMO pivoting: Direction changes driven by following CT narratives rather than customer data. Pivoting because you saw another startup raise significant funding isn’t strategic - it’s reactive.
Leadership Anti-Patterns
We’ve observed that distressed teams tend to exhibit two contradictory tendencies:
First, they cling to assumptions and refuse to pivot when market evidence demands it. Founders take criticism personally rather than considering it objectively. Leaders are unable to articulate what scenarios would cause them to change direction. This rigidity creates organisations that drive toward failure with determination.
Second, when these teams eventually do pivot, it’s for the wrong reasons: narrative chasing rather than evidence-based repositioning.
Decision-Making Authority Assessment
A critical but difficult-to-quantify factor: does the CTO or technical founder actually have decision-making power over product direction? Is the company being steered by founders or executives who don’t understand the product or sector?
The deployment gap pillar connects directly here. When non-technical leadership makes promises about capabilities without input from the technical founder, the gap between sales commitments and engineering reality expands indefinitely. When financial projections drive the product roadmap instead of customer feedback, the company optimises for raising the next round rather than building sustainable value.
Social Signal Analysis
The way individual team members communicate publicly can reveal organisational health indicators. Look for:
Team members subtly throwing shade at their employer on social channels
Defensive or hostile responses to community questions
Inconsistent messaging about product direction across different team members
High-profile departures accompanied by vague reasons (“pursuing personal endeavours”)
These signals are subjective but informative. Teams under genuine distress leak their dysfunction through communication patterns long before financial metrics reflect the problem.
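None of this automates cleanly, but a crude keyword pass can at least surface posts worth human review. A sketch with an illustrative, deliberately incomplete phrase list:

```python
import re

# Crude heuristic for the signals above: flag posts that match known
# departure or discontent phrasing. The phrase list is illustrative,
# not exhaustive, and every hit still needs human judgment.

SIGNAL_PHRASES = [
    r"pursu\w+ personal endeavou?rs",
    r"excited for (my|the) next chapter",
    r"stepping (back|down|away)",
    r"no longer (at|with)",
]
PATTERN = re.compile("|".join(SIGNAL_PHRASES), re.IGNORECASE)

def flag_posts(posts: list[str]) -> list[str]:
    return [p for p in posts if PATTERN.search(p)]

team_feed = [
    "Shipping v2 next week - huge thanks to the community!",
    "After three great years I'm stepping away to pursue personal endeavours.",
]
print(flag_posts(team_feed))  # flags only the second post
```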
4. Lack of IP Value
When you strip away the marketing, has the team actually built something defensible or competitive in the market?
The central question: Would you get more value from paying a low-cost dev shop to copy the idea than from investing in the team or acquiring from them? If the cost to replace is lower than the cost to acquire or invest, there’s no defensible IP value.
The Defensibility Paradox
Most SaaS software starts off non-defensible and tends to build moats over time. As one framework articulates (blog.eladgil.com): “It is easy to copy or clone something that has taken a handful of people a handful of months to build.”
This problem has amplified with AI tooling. While building products has never been easier - software businesses can reach $1 million ARR faster than ever - defending what you’ve built has become more challenging. The AI image editing space demonstrates this: numerous startups scaled to $5-10 million ARR only to watch their value propositions erode overnight when established players integrated similar AI features.
What Counts As A Moat And What Doesn’t (Most of the Time)
Not Moats:
Product features (easily replicated)
Technical complexity alone (depends heavily on vertical)
Being first to market (temporary advantage)
Having “great product” (everyone claims this)
Speed of execution (can’t be sustained indefinitely)
Actual Moats:
Data: Proprietary datasets that nobody else can access or replicate. This requires heavy buy-in from users or businesses, something that can’t be reliably copied by just reverse engineering a product.
Network effects: Value increases with the user count, creating marketplace dynamics where being second place becomes exponentially harder.
Regulatory approvals: Licenses, no-action letters, or regulatory clearances that require years to obtain.
Scale effects: Pre-negotiated pricing that new entrants can’t match, or capital scale advantages that allow cheaper service provision.
Distribution: Multi-year contracts, exclusive provider relationships, or embedded integrations that create switching costs.
Process power: Accumulated learning that can’t be replicated overnight. Google Search vs. Bing exemplifies this - no amount of money invested in Bing can make it outperform Google Search immediately, because the process knowledge accumulated through billions of queries over decades can’t be downloaded.
Vertical-Specific Considerations
Technical moats matter dramatically more in some verticals than others. For highly complex use cases, such as ML infrastructure and healthcare applications where the last 1% of accuracy matters critically, having the strongest technical team creates genuine defensibility. For less complex and less mission-critical use cases like sales and marketing tools, technical strength provides minimal moat.
This distinction explains why many technically impressive deAI projects lack acquisition value despite novel architectures. The technology might be sophisticated, but if a well-capitalised competitor can replicate functionality in 6-9 months with a competent engineering team, there’s no technical moat.
Technology Salvageability Assessment
If the current company were to fail, could value be salvaged from the underlying technology or IP? Can it be tailored to a different audience or vertical? Are there accumulated process insights or proprietary datasets that retain value independent of the current business model?
Research into oracle platforms reveals how non-technical moats often matter more than technical sophistication. Established oracle providers benefit from network effects, brand trust, and ecosystem lock-in that extend well beyond their technical capabilities. Being technically superior doesn’t guarantee market success - distribution, reputation, and integration depth frequently trump pure technical merit.
The Build vs. Buy Framework
From a buyer’s perspective, assess whether building an equivalent solution in-house would be cheaper than acquiring the company. Consider:
Time to replicate core functionality
Availability of engineering talent capable of building an equivalent solution
Whether any proprietary data, exclusive partnerships, or regulatory approvals provide meaningful advantage
If accumulated learning and process knowledge can be transferred through acquisition
If internal development is cheaper and faster than acquisition, the IP value is effectively zero regardless of technical sophistication.
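A minimal sketch of that comparison; every number is a hypothetical input, and the point is the structure of the decision rather than the figures:

```python
# Build-vs-buy as arithmetic. All inputs are hypothetical placeholders.

def build_cost(months: int, engineers: int, loaded_monthly_cost: float) -> float:
    """Estimated cost to replicate core functionality in-house."""
    return months * engineers * loaded_monthly_cost

def buy_is_justified(ask_price: float, replication_cost: float, moat_value: float) -> bool:
    """Acquire only if the ask beats replication cost plus the value of
    genuinely non-replicable assets (data, licences, contracts)."""
    return ask_price < replication_cost + moat_value

replication = build_cost(months=9, engineers=4, loaded_monthly_cost=20_000)
print(f"{replication:,.0f}")                                 # 720,000
print(buy_is_justified(5_000_000, replication, 0))           # False: no moat, build instead
print(buy_is_justified(5_000_000, replication, 10_000_000))  # True: a real data moat
```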
5. Look and Feel (Yes, seriously)
You’re building enterprise-grade infrastructure? Then why are your docs six months out of date? Building an analytics platform for institutions? Why does your interface look like a casino?
This pillar generates the most subjective assessments but provides surprisingly reliable signals. The approach is simple yet effective: open the app, open all competitor apps in different tabs, and run a “vibe check”. When the look and feel is wrong, it can signal specific problems:
“The team just slapped this together, probably outsourcing to a dev shop”
“They don’t understand their audience”
“This looks low effort”
Documentation as Organisational Signal
Poor documentation offers insights into organisational capability and priorities. The pattern is remarkably consistent across failing companies:
Follow-through failure: Teams that can’t complete documentation projects typically struggle with all non-critical tasks. If documentation isn’t valued enough to finish, what else isn’t getting completed?
Resource constraints: Documentation requires dedicated resources. When companies treat it as an afterthought, they’re signaling either resource scarcity or prioritisation dysfunction.
Integration gaps: Technical writers are not integrated with development teams through the product lifecycle. This organisational structure issue extends beyond documentation - it indicates communication breakdowns affecting the entire operation.
Research on documentation quality identifies specific failure modes:
Version mismatches: Features changed names or icons between the documented version and the current product, indicating long periods between updates.
Broken links: 404 errors throughout the documentation suggest neglect. When companies can’t maintain their own documentation site, it signals deeper operational issues.
Incomplete sections: Pages that are “victims of changed minds and budget overruns.” Started but never finished sections indicate project management dysfunction.
Multiple “Getting Started” paths: When users encounter links to “Getting Started,” “How to use,” “Documentation,” and “Quick Start Guide” - each covering slightly different or overlapping information - it suggests no one has overall ownership of the user experience.
This degradation pattern extends beyond documentation to all aspects of operational competence. If a team can’t maintain docs through product cycles, they likely struggle to maintain code quality, customer relationships, and other internal processes.
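Of these failure modes, the broken-link signal is the cheapest to check mechanically. A minimal stdlib-only sketch; the docs URL is a placeholder, and a real audit would crawl the entire site rather than a single page:

```python
from html.parser import HTMLParser
from urllib.error import HTTPError, URLError
from urllib.parse import urljoin
from urllib.request import Request, urlopen

# Collect <a href="..."> targets from one docs page and report dead links.

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href and href.startswith(("http", "/")):
                self.links.append(href)

def broken_links(page_url: str) -> list[str]:
    req = Request(page_url, headers={"User-Agent": "doc-audit"})
    html = urlopen(req, timeout=10).read().decode("utf-8", errors="replace")
    collector = LinkCollector()
    collector.feed(html)
    dead = []
    for link in sorted(set(collector.links)):
        target = Request(urljoin(page_url, link), headers={"User-Agent": "doc-audit"})
        try:
            urlopen(target, timeout=10)
        except (HTTPError, URLError):
            dead.append(link)
    return dead

# e.g. print(broken_links("https://docs.example.com/"))  # placeholder URL
```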
Design-Market Fit
Just as product-market fit matters, design-market fit reveals whether teams understand their audience. Enterprise infrastructure with casino aesthetics signals audience misunderstanding. Developer tools with marketing-heavy, low-information landing pages suggest the team doesn’t know how developers evaluate products.
The subjective assessment matters: when you compare five competitor interfaces side by side, does this product immediately stand out as more or less professional than its alternatives? The “vibe check” captures intangibles that formal evaluations often miss.
Can Polish Be Fixed?
The critical question: Is poor look and feel something that can be addressed through rebranding and user research, or does it indicate fundamental issues that lock the brand into a certain perception?
Sometimes it’s fixable - the team genuinely didn’t realise their casino aesthetics undermined enterprise credibility. But more often, poor polish results from:
Teams that don’t understand their actual market (can’t be fixed with design sprints)
Resource constraints preventing proper investment in UX (can’t be fixed without capital)
Leadership that doesn’t prioritise product quality (can’t be fixed without personnel changes)
When documentation is 6+ months out of date, when the interface looks hastily assembled, when competitor comparison reveals obvious gaps in polish - these signals indicate either lack of capability or lack of prioritisation. Both create acquisition risk.
How the Pillars Interact
Individual pillar weaknesses can be explained away. Combined pillar weaknesses signal distress.
An audience composed primarily of airdrop farmers (Pillar 1) might be acceptable if the deployment gap is small (Pillar 2) and the technology is genuinely defensible (Pillar 4). But when audience fragility combines with large deployment gaps, frequent pivots (Pillar 3), and poor documentation (Pillar 5), the company lacks both current traction and future potential.
Team volatility (Pillar 3) might be strategic adaptation if the company maintains documentation quality (Pillar 5) and builds genuine IP (Pillar 4). But when pivots happen because founders avoid difficult sales conversations rather than respond to market evidence, and when each pivot leaves documentation outdated and audience confused, the volatility signals panic rather than strategy.
The Time Advantage
Extensive research periods are a feature, not a bug. Our methodology is explicitly designed to improve over time, as more evidence is gathered and more time is spent observing a team’s operations.
Audiences reveal their extractive vs. genuine nature only after token incentives dry up. Deployment gaps manifest only after multiple integration attempts. Team dynamics show their true colours only through multiple stress cycles. IP value becomes apparent only when competitors attempt to replicate it. Documentation quality (or degradation) requires observing maintenance through product evolution.
Companies can fake one or two signals temporarily. Faking all five simultaneously for months or years is only possible for hall-of-fame grifters.
Versus Obvious Evaluation Metrics
Each pillar captures a specific form of the gap between narrative and reality:
Pillar 1: Token farmers ≠ customers
Pillar 2: Partnership announcements ≠ integrations
Pillar 3: Pivot frequency ≠ adaptability
Pillar 4: Technical complexity ≠ defensibility
Pillar 5: Slick marketing ≠ product quality
Traditional analysis focuses on the left side of these equations - what companies claim, what metrics they report, what their pitch decks promise. This methodology focuses on the right side - what actually matters for sustainable value creation.
Risk Intensity Assessment
One pillar weak: monitor
Two pillars weak: concerning
Three or more pillars weak: high distress probability
All five pillars weak: assume distress, probe for opportunities
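Each pillar assessment is a judgment call, but the aggregation itself is mechanical; a trivial sketch:

```python
PILLARS = {"audience", "deployment_gap", "team", "ip", "look_and_feel"}

def risk_level(weak_pillars: set[str]) -> str:
    """Map the number of weak pillars onto the ladder above."""
    n = len(weak_pillars & PILLARS)
    if n == 5:
        return "assume distress, probe for opportunities"
    if n >= 3:
        return "high distress probability"
    if n == 2:
        return "concerning"
    if n == 1:
        return "monitor"
    return "no action required"

print(risk_level({"audience", "deployment_gap", "team"}))  # high distress probability
```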
The framework enables early identification before market consensus forms. This creates positioning opportunities - when others maintain “irrational optimism” about deAI projects, this methodology reveals distress before obvious financial signals emerge.
Application in Practice
This methodology enables several strategic deal opportunities:
Distressed Acquisition Timing: Spot acquisition windows before financial metrics reflect them. When you identify three+ pillar weakness while the company still maintains runway, you can approach founders before desperation sets in. The target hasn’t yet exhausted all options, meaning acquisition conversations happen on better terms than fire-sale scenarios. Companies with strong IP (Pillar 4) but failing deployment (Pillar 2) and disgruntled teams (Pillar 3) are prime acquisition targets - the technology has value, but current leadership can’t commercialise it.
Governance Token Accumulation: For tokenised deAI projects showing early distress signals, accumulating governance tokens before market consensus forms creates positioning power. When audience fragility (Pillar 1) combines with team volatility (Pillar 3), token prices often remain elevated on narrative alone while operational reality deteriorates. Strategic token accumulation enables influence over protocol direction, partnership decisions, or eventual wind-down processes. This approach is particularly effective when salvageable technology (Pillar 4) is present within a failing operational structure.
Acquihire Opportunities: Teams showing execution capability despite business model failure represent acquihire targets. When documentation remains current (Pillar 5) and deployment execution is sound (Pillar 2) but market positioning is wrong, the team has skill but applied it to the wrong problem space. Identify these situations 6-12 months before obvious failure, when team members start signaling dissatisfaction on social channels (Pillar 3).
Build vs. Buy Decisions: The IP value assessment (Pillar 4) directly informs replication strategy. When deployment gap research reveals a product’s core functionality can be replicated in 6-9 months with available engineering talent, buying becomes unnecessary. Conversely, when genuine data moats or process power exist, acquisition may be cheaper than internal development. The methodology’s multi-month observation period reveals whether apparent defensibility is real or performative, enabling accurate build-vs-buy calculations.
Hostile Takeover Positioning: Companies with all five pillars showing weakness but maintaining token value or equity valuation create takeover opportunities. When governance structures are weak (common in projects with fragmenting teams), coordinating with other distressed stakeholders enables governance capture. This works when valuable technology or datasets exist within failing organisational structures - the assets have value, but current leadership can’t extract it. Token-based governance makes this particularly viable in crypto-native structures.
Asset Extraction Strategies: Even terminal companies contain extractable value. Strong Pillar 4 (defensible IP) within failing Pillars 1-3 and 5 means technology/data can be salvaged. Proprietary datasets retain value independent of business model. Customer relationships (even in high-deployment-gap scenarios) can be transitioned to alternative solutions. The framework reveals which assets retain value and optimal extraction timing.
The methodology’s power comes from identifying these opportunities before market consensus forms. By the time distress becomes obvious through financial metrics, optimal deal windows may have already closed.
Conclusion
Numbers tell a clear story, but operational reality tells the complete story.
The five pillars methodology captures what traditional financial analysis overlooks: the gap between what companies claim and what they can actually deliver. Partnership announcements that never deploy. Audiences that exist only to farm airdrops. Teams that pivot to avoid difficult conversations rather than respond to market evidence. Technology that can be replicated by any competent dev shop. Products that don’t match their claimed market position.
These aren’t the metrics you’ll find in pitch decks. These are things you discover while observing day-to-day operations over extended periods of time.
For detailed applications of this methodology, including quantitative results and specific case studies, please email shadow@arasakalabs.io to request access to our confidential investor reports.

