I help music, tech & creator startups GROW 🚀
⤷ 15+ years building, scaling, and exiting high-impact ventures
⤷ 0-1 music tech platforms used by millions
⤷ Growth Leader behind teams, communities, & strategies that led to acquisition
Lock In. Reach Out.
TESTIMONIALS

Kurt Weiberth
Linktree Staff Engineer / ex. PayPal, Songlink (acquisition) / Former Business Partner
"Nick single-handedly turned Songlink from a side project into a business. Prior to partnering with him the platform was a useful tool for friends and family with slow organic growth. Nick immediately identified the core user segment that would drive revenue and created a growth flywheel. He then coupled relentless determination with natural charisma to grow a network of recording artists, managers and labels who started using Songlink as part of their release strategies. This community drove our user and revenue growth, which led to profitability and an eventual Linktree acquisition. Oh, and most importantly he's a genuinely good person. Don't hesitate to work with Nick."

Patrick Sweetman
Top 1% Music Tech Engineer / Founder @ Recoup AI, AI Agent x Onchain Engineering Leader / ex Amazon
"Nick is a standout leader in the onchain music industry. His unique blend of experience and innovation allows him to successfully launch new products, from building effective growth strategies to nurturing artist relationships. As he spearheads his new growth company, his profound expertise will undoubtedly build some of the most diverse projects. I'm confident that entrusting your product to Nick and his team means embarking an a path fo substantial growth and success? "

Garrett Hughes
Co-founder @ Mint Songs (acquired Napster) / Engineering Leader @ Dune Analytics
"Nick's leadership in our team was inspiring and encouraged us to achieve our best potential. He was responsible for community development, where he demonstrated an innate ability to build, nurture and grow communities that still thrive today. I had the pleasure of working together at Mint Songs as a colleague for a year. During that time, he served as the Director of Growth and displayed an exceptional comprehension and execution in Growth strategies, community development, user growth, as well as brand strategy that significantly contributed to our projects' success. Nick's dedication, ingeniously paired with his business acumen and creative marketing skills, brought a considerable boost in user growth for our brand. Nick is a powerhouse of Growth and Marketing strategies, an expert in building partnerships, and a maestro in developing communities."

Eric Johnson
COO at Session / ex OpenSea, Spotify, Mint Songs
"Nick Merich is a master of growth marketing, partnerships, and community development. Working together at an early-stage startup, his expertise in crafting effective growth strategies and brand development was a game-changer for the business."

Nathan Pham
CEO @ Fanfly / ex United Masters, Napster, Pandora
"Nick is one of the best growth marketers I've worked with. He's so equipped and well rounded to grow product and user base of any size! After a year of working together at Mint Songs, I definitely look forward to working with him again in the future."
WORK
Co-Founder
Forest Ave.
Sept 2023 - Present
Co-Founder
Songlink (Acquired by Linktree)
March 2016 - 2021
Entrepreneurship Mentor (Startup Advisor)
Carnegie Mellon University, Swartz Center
2020 - Present
Director of Growth
Mint Songs (Acquired by Napster)
2022 - 2023
Head of Growth
Kits AI (a16z backed)
2023 - 2024
Co-Founder
Andocia Creative Agency (Acquired)
2014 - 2019
Head of Run Clubs
Dad Day
Feb 2025 - Present
Growth Advisor
Pop Site
Sept 2025 - Present

SATURDAY AT 6 AM
Dad Day Run Club
Every Saturday we meet in the South Hills, Pittsburgh, PA to run 45 mins total @ 9-10 pace - Trails, Road, Mixed. Join the community below.

Director of Growth @ Mint Songs
Mint Songs (acquired by Napster): Built an artist onboarding engine that grew the user base 40% MoM and drove $30K in NFT sales in 4 months; scaled the community to 20K Twitter followers, 0-15K Discord members, and 0-5K YouTube subscribers; produced 8+ live Minting x Creation events in LA, NYC, and Pittsburgh.

Head of Growth @ Kits
Kits AI (a16z backed): Scaled Kits.io (a sample pack marketplace), signed the first 8 major-label licensed AI voices, and contributed to building viral organic and paid growth loops with partners, leading to millions of users.

Senior Growth Manager @ Trainwell
Trainwell: Ran a $32K influencer campaign generating above-average new subscriptions at a $32 CAC; owned content and email; created a new NIL athlete influencer vertical; ran comedy and productivity podcast ads, finding new ways to scale growth without hurting retention.

Growth Partner @ Creatives Drink
Creatives Drink is a live case study in a creative-driven beverage and lifestyle brand. Produced and sold hundreds of customized cocktail boxes in partnership with industry-leading city brands year-round.
SIDE QUESTS
coming soon
Resources for Growth-Led Leaders 💪
EXPERIMENT GUIDE
Goal: What user behavior change are we targeting?
Metric: Primary (OMTM) + secondary + guardrails
Segment: Who exactly? (exclude others to reduce noise)
Variants: Control vs. test (include screenshots/specs)
Trigger: What user action/moment initiates the experiment?
Action: What do we want the user to do?
Measure: How do we quantify the behavior change?
MDE: Minimum detectable effect (statistical power)
Guardrails: What metrics can't get worse?
DRI: One person owns success/failure
Timeline: Start date, review date, stop date
Risks: What could break or bias results?
Decision criteria: Ship / iterate / archive
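A minimal sketch of this guide as a typed experiment spec, useful for logging and review. All names here (ExperimentSpec, sample_size_per_variant) are hypothetical, not from any specific tool; the sample-size helper uses the standard two-proportion approximation at alpha = 0.05 and 80% power.

```python
import math
from dataclasses import dataclass, field

@dataclass
class ExperimentSpec:
    """One experiment card, mirroring the guide's fields. Hypothetical schema."""
    goal: str                       # user behavior change we're targeting
    primary_metric: str             # the OMTM
    secondary_metrics: list[str] = field(default_factory=list)
    guardrails: list[str] = field(default_factory=list)  # metrics that can't get worse
    segment: str = ""               # who exactly is exposed
    variants: tuple[str, str] = ("control", "test")
    trigger: str = ""               # user action/moment that initiates the experiment
    action: str = ""                # what we want the user to do
    measure: str = ""               # how the behavior change is quantified
    mde: float = 0.02               # minimum detectable effect (absolute lift)
    dri: str = ""                   # the one person who owns success/failure
    start_date: str = ""
    review_date: str = ""
    stop_date: str = ""
    risks: list[str] = field(default_factory=list)

def sample_size_per_variant(baseline_rate: float, mde: float) -> int:
    """Rough n per variant for a two-proportion test at alpha=0.05, power=0.80.
    Uses z_alpha/2 = 1.96 and z_beta = 0.84; an approximation, not a power tool."""
    p = baseline_rate
    z = 1.96 + 0.84
    return math.ceil(2 * (z ** 2) * p * (1 - p) / (mde ** 2))

# e.g. a 5% baseline conversion with a 1-point MDE needs ~7,450 users per variant
print(sample_size_per_variant(0.05, 0.01))
```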
90-MIN PSYCHOLOGICAL FLOW
0–10 min: Define OMTM and bottleneck; confirm evidence
Process: Leader states OMTM, presents bottleneck evidence, asks for confirmation. No discussion—just alignment check.
Psychological purpose: Goal activation
10–35 min: Silent idea write-down, then read-out, no debate; add competitor/user insights
Psychological mechanism: Silent ideation prevents anchoring and groupthink; every idea surfaces before the loudest voice can converge the room
35–55 min: ICE score individually; stack-rank; pick top 3
Decision psychology: Individual scoring prevents influence cascades where first person's score anchors everyone else. Take the median, not average, to reduce outlier bias.
Ranking psychology: Forces trade-off thinking; a stack rank means every idea is compared against the others instead of scored in isolation. A sketch of this scoring step follows below.
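A small sketch of the scoring step described above, assuming a hypothetical ice_scores data shape. Each person scores independently; it takes the median of each scorer's I x C x E product (per the median-not-average note, and using the product as one common ICE convention) and stack-ranks.

```python
from statistics import median

# Each person scores Impact, Confidence, Ease from 1-10, independently.
# Hypothetical data shape: idea -> list of (impact, confidence, ease) per scorer.
ice_scores = {
    "onboarding checklist": [(8, 6, 7), (7, 7, 6), (9, 5, 7)],
    "referral prompt":      [(6, 8, 9), (7, 7, 8), (5, 8, 9)],
    "pricing page rewrite": [(9, 4, 3), (8, 5, 4), (9, 3, 3)],
}

def stack_rank(scores: dict[str, list[tuple[int, int, int]]], top_n: int = 3):
    """Median of each scorer's I*C*E product, to blunt anchoring and outliers."""
    ranked = sorted(
        ((idea, median(i * c * e for i, c, e in votes))
         for idea, votes in scores.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )
    return ranked[:top_n]

for idea, score in stack_rank(ice_scores):
    print(f"{score:6.1f}  {idea}")
```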
55–80 min: Design experiments (hypothesis, trigger-action-measure, metrics, exposure window, risks)
Structured thinking: Templates reduce cognitive load and ensure systematic coverage of experiment components. Each element serves a psychological purpose:
Hypothesis: Forces causal reasoning (if [change], then [outcome], because [mechanism]). Trigger-action-measure: Makes the test concrete and observable
80–90 min: Assign DRIs, next steps, decision criteria, logging template link
Commitment psychology: Public commitment increases follow-through by 65%. Implementation intentions ("I will do X at time Y in context Z") increase success rates 2-3x.
Copy-paste output format:
Experiment 1: [DRI] will [specific action] by [date/time]
Dependencies: [Who] provides [what] by [when]
Decision criteria: Ship at [%], iterate at [%], archive below [%]
Review date: [Date] to analyze results and decide
[DRI Name] owns [experiment name] with final decision authority
Irreversible step by tomorrow: [Specific action that moves from planning to execution]
Dependencies:
Marketing: [Name] delivers [asset/approval] by [date/time]
Product: [Name] ships [feature/variant] by [date/time]
Data: [Name] sets up [tracking/dashboard] by [date/time]
Success criteria: Ship if [primary metric] lifts ≥[%] AND [guardrail] doesn't drop >[%]
Fail criteria: Archive if [primary metric] drops OR [guardrail] regresses >[%]
Iterate criteria: If [%] < lift < [%], run variant B testing [hypothesis]
Review checkpoint: [Date] at [time] - DRI presents data, team advises, DRI decides
[DRI Name] will launch [no-code variant] by EOD [date]
48-hour dependencies:
Marketing: Pre-approve copy variants (no review loop) by [time]
Data: Funnel setup complete by [time]
Product: Feature flag ready by [time]
Decision criteria: Ship at >15% lift, iterate 5-15%, archive <5%
Stop date: [Date] or when n=[sample size] reached, whichever comes first
Parallel qual: [Name] runs 5 user intercepts during test window to explain "why"
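The ship/iterate/archive thresholds above map cleanly to a tiny decision helper. A sketch using the 15%/5% cutoffs from this card plus a single guardrail check; the function and parameter names are illustrative, and the -2% guardrail floor is an assumed example.

```python
def decide(lift_pct: float, guardrail_delta_pct: float,
           ship_at: float = 15.0, archive_below: float = 5.0,
           guardrail_floor: float = -2.0) -> str:
    """Ship/iterate/archive call per the card above. Thresholds are examples."""
    if guardrail_delta_pct < guardrail_floor:
        return "archive"             # guardrail regressed; lift doesn't matter
    if lift_pct >= ship_at:
        return "ship"
    if lift_pct >= archive_below:
        return "iterate"             # 5-15% zone: promising but not conclusive
    return "archive"

print(decide(lift_pct=18.2, guardrail_delta_pct=-0.5))  # -> ship
print(decide(lift_pct=9.0,  guardrail_delta_pct=-0.4))  # -> iterate
```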
Experiment 7: [DRI Name] runs iteration #[X] building on [previous experiment ID]
What we learned last time: [Key psychological insight from previous test]
This iteration tests: [Sharper hypothesis based on learning]
Tomorrow's action: [Name] launches refined variant by [time]
Dependencies: None (reusing infrastructure from previous test)
Decision criteria:
If this confirms hypothesis → Ship and expand to [other touchpoints]
If this contradicts → Archive direction and pivot to [alternative hypothesis]
If inconclusive → Run one more iteration with [bolder change]
New ideas generated: [3 experiment ideas] with initial ICE scores
[DRI Name] leads [customer-facing experiment]
Stakeholder pre-brief completed: [Date] with [Sales/Support/Product leads]
Tomorrow's step: [Name] begins soft launch to internal beta users by [time]
Dependencies:
Legal: [Name] approved copy/claims by [date]
Support: [Name] briefed on potential questions by [date]
Sales: [Name] knows about test, won't be surprised by customer mentions
Rollout gates:
Gate 1: Internal beta (n=50) for 48h - check for major issues
Gate 2: 5% external traffic for 3 days - monitor support tickets
Gate 3: 50% traffic if Gate 2 clean
Decision criteria: Ship if ≥15% lift AND support ticket volume doesn't increase >[10%]
Communication plan: [DRI] sends results email to stakeholders within 24h of decision
Sprint Week [X] - Experiment 8: [DRI Name] owns [experiment cluster]
Monday 9am: Hypothesis finalized, variants approved
Monday 2pm: [Name] launches to 25% traffic
Wednesday 10am: Checkpoint - [DRI] reviews early signals with Data team
Friday 3pm: Results review - Ship/iterate/archive decision
Dependencies:
All approvals pre-secured in previous sprint planning
Tracking verified Friday prior week
Decision framework: Use "Do No Harm" criteria - ship unless statistically significant negative
Velocity goal: Close 1 experiment per week with documented learning
[DRI Name] tests [ad variant/audience] with $[budget] cap
Tomorrow: [Name] launches campaigns by [time]
Daily spend limit: $[X] to control downside risk
Dependencies: [Name] sets up conversion tracking by [date]
Decision criteria:
Ship: CAC ≤$[X] AND LTV:CAC ≥[ratio]
Iterate: CAC $[X]-$[Y] → test different creative/audience
Archive: CAC >$[Y] after n=[X] conversions
Stop conditions: Stop at $[budget] spent OR [date], whichever first
Scale plan: If ship criteria met, increase daily budget to $[Y] in week 2
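For concreteness, a worked example of the paid-ads criteria above, with made-up campaign numbers and an assumed $35 CAC cap and 3:1 LTV:CAC ship bar.

```python
spend, conversions, avg_ltv = 2_400.00, 80, 120.00  # made-up campaign numbers
cac = spend / conversions                            # $30 cost per acquisition
ltv_to_cac = avg_ltv / cac                           # 4.0

# Ship per the card: CAC under the $35 cap AND LTV:CAC at or above 3:1
print("ship" if cac <= 35 and ltv_to_cac >= 3 else "keep testing")
```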
[DRI Name] runs [compliant experiment] in [regulated space]
Pre-launch checklist:
☐ Legal review completed by [Name] on [date]
☐ Compliance approval documented in [location]
☐ Privacy impact assessed by [Name]
☐ Data retention policy applied
Tomorrow: [Name] submits final variant for compliance sign-off by [time]
Dependencies: Cannot launch until all checklist items complete
Decision criteria: Standard metrics PLUS compliance monitoring
Audit trail: All decisions documented in [system] for regulatory review
Rollback procedure: [Name] can rollback, but must notify compliance within [X hours]
[DRI Name] runs [experiment] with [X engineer-days] budget
Scope locked: No scope creep - if it doesn't fit in [X days], we descope or wait
Tomorrow: [Name] confirms technical feasibility and effort estimate by [time]
Dependencies:
Engineering: [Name] allocates [X hours] in sprint [Y]
Design: [Name] delivers assets [2 days before] dev starts
Decision criteria: Standard metrics + ROI per engineer-day
Learning goal: Even if test fails, document [technical pattern/component] for reuse
Velocity tracking: Time from idea → launch = [X days] (goal: reduce 30% next quarter)
Systematic Documentation Template ↴
Required fields for reusable learning:
1) Experiment ID: [Date]-[DRI initials]-[Focus area]
2) Hypothesis: If [change] then [outcome] because [psychological mechanism]
3) Setup: Variants, audience, duration, tools used
4) Results: Primary, secondary, guardrail metrics with [screenshots]
5) Decision: Ship/iterate/archive + reasoning
6) Psychological insight: What we learned about user motivation/friction
7) Next experiments: 3 ideas generated with ICE scores
8) Stakeholder brief: Who needs to know + what they should do differently
9) Template owner: [Name] maintains template and trains new team members
10) Searchability: Tagged by [metric], [segment], [channel], [psychological mechanism]
Quarterly review: [DRI] presents [top learnings] to leadership on [cadence]
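If the log lives in a doc tool or spreadsheet, the same ten fields serve as a schema. A minimal sketch of one record as plain data; every value is illustrative, and the tag list shows how entries stay searchable by metric, segment, channel, and mechanism.

```python
# One experiment log record mirroring the required fields; values are illustrative.
record = {
    "experiment_id": "2025-09-12-NM-onboarding",
    "hypothesis": "If we add a progress bar, activation rises, "
                  "because visible progress motivates completion",
    "setup": {"variants": ["control", "progress-bar"],
              "audience": "new signups", "duration_days": 14},
    "results": {"primary": "+12% activation", "secondary": "+4% D7 retention",
                "guardrails": "support tickets flat"},
    "decision": "iterate",  # +12% sits in the 5-15% zone per the decision criteria
    "psychological_insight": "Users stall when the next step is ambiguous, not hard",
    "next_experiments": ["checklist v2", "milestone emails", "empty-state rewrite"],
    "stakeholder_brief": "Support + Product leads",
    "template_owner": "growth-ops",
    "tags": ["activation", "new-signups", "onboarding", "goal-gradient"],
}
```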





