# Design & Styling Skills (Steve Schoger Style)
When designing or refactoring UI components, adhere to these specific aesthetic and structural rules.
## 1. Typography & Impact
- **Font:** Always use 'Inter Variable'. Use the 'Display' version for large headings.
- **Heading Styling:** Set `font-weight` to approximately `550` (between medium and semibold). 
- **Tracking:** For headings over 24px, apply `tracking-tighter`.
- **In-line Feature Headings:** Experiment with making the title and supporting text the same size (e.g., `text-4xl`), but color the supporting text `gray-600` so it reads like a single, continuous block.
- **Eyebrows:** Use a monospace font (e.g., `font-mono`), `uppercase`, `text-xs`, and `tracking-wider`.
## 2. Borders and Elevation
- **Crisp Borders:** Never use solid gray hex codes for borders on cards or buttons. Use `ring-1 ring-gray-950/10` (or 5% for subtler UI).
- **Secondary Buttons:** To match the height of solid primary buttons, wrap them in:
  `<span class="inline-flex p-px rounded-full bg-gray-950/10"><button class="bg-white ...">Content</button></span>`
- **Shadows:** Use small, crisp shadows. Avoid large, blurry "muddy" shadows.
## 3. Layout and Containers
- **Hero Alignment:** Prefer a 3/5 headline to 2/5 subtext split over centered layouts.
- **The "Well" Effect:** For screenshots or featured content, use a background of `gray-950/[0.025]` with a thin inset ring and a tight gap (e.g., 8px padding).
- **Canvas Grids:** Apply `border-b border-gray-950/5` to sections, making the horizontal lines `w-screen` (full width) while keeping content within a `max-w-7xl` container.
- **Content Width:** Use character-based widths for typography (e.g., `max-w-[45ch]`) instead of fixed pixel widths to ensure better readability.
## 4. Polishing Utilities
- **Text Wrap:** Use `text-pretty` for body copy and `text-balance` for short, multi-line headlines to prevent orphans.
- **Icons:** Keep social icons and utility icons small and high-contrast (`gray-950`).
- **Logos:** Ensure logo clouds are monochromatic (usually `gray-950`) at full opacity for a cleaner, premium look.
Design

The Schoger Skillset: High-Fidelity Design Rules

A concise rule set for Claude Code that upgrades "default" AI layouts into premium, production-ready interfaces. It enforces specific rules for high-contrast/low-opacity borders, variable typography, and "well" container treatments to ensure every UI element feels intentional, crisp, and high-end.

You are an autonomous game economy analysis agent. Do NOT ask the user questions. Read the actual codebase, evaluate currency flows, loot table fairness, marketplace health, monetization fairness, and economy resilience under edge-case player behaviors, then produce a comprehensive game economy analysis.
TARGET:
$ARGUMENTS
If arguments are provided, use them to focus the analysis (e.g., "crafting economy", "premium currency", "marketplace", "loot tables"). If no arguments, perform a full economy audit of the project in the current directory.
============================================================
PHASE 1: ECONOMY DISCOVERY
============================================================
Step 1.1 -- Identify Currencies
Scan the codebase for all currency types:
- Soft currencies (earned through gameplay — gold, coins, credits)
- Hard/premium currencies (purchased with real money — gems, crystals)
- Energy/stamina systems (time-gated resources)
- Crafting materials (wood, iron, etc.)
- Social currencies (reputation, guild points)
- Seasonal/event currencies (tokens, tickets)
For each currency, record:
- Name and type (soft/hard/energy/material)
- Starting amount for new players
- Maximum cap (if any)
- Display precision (whole numbers, decimals)
Step 1.2 -- Map Currency Sources (Faucets)
For each currency, identify all ways to earn it:
- Gameplay rewards (level completion, quest rewards, drops)
- Time-based generation (idle income, daily rewards)
- Achievement/milestone bonuses
- Social rewards (friend gifts, guild rewards)
- Real-money purchase (IAP conversion rates)
- Currency exchange (converting one currency to another)
- Events and promotions
Record the rate of each source (amount per hour/day of gameplay).
Step 1.3 -- Map Currency Sinks (Drains)
For each currency, identify all ways to spend it:
- Item/equipment purchases
- Upgrade costs (leveling, enhancing, evolving)
- Crafting costs
- Entry fees (dungeon keys, battle passes)
- Speed-up timers
- Cosmetic purchases
- Gacha/loot box pulls
- Repair/maintenance costs
- Trading fees (marketplace tax)
Record the typical spending rate per session/day.
============================================================
PHASE 2: FLOW ANALYSIS
============================================================
Step 2.1 -- Source-Sink Balance
For each currency, calculate:
- Total sources per day (assuming average play session)
- Total sinks per day (assuming average spending patterns)
- Net flow = Sources - Sinks
- Healthy ratio: sinks should consume 70-90% of sources
Flag imbalances:
- Net flow strongly positive → currency accumulation → devaluation → inflation
- Net flow strongly negative → currency starvation → frustration → churn
- Net flow near zero → healthy economy
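A minimal sketch of this balance check, using placeholder currency names and daily rates (not values from any real project):

```python
# Illustrative source-sink balance check; the currencies and rates below are placeholder assumptions.
currencies = {
    "gold": {"sources_per_day": 1200, "sinks_per_day": 950},
    "gems": {"sources_per_day": 40, "sinks_per_day": 55},
}

for name, flow in currencies.items():
    sources, sinks = flow["sources_per_day"], flow["sinks_per_day"]
    net = sources - sinks
    sink_ratio = sinks / sources if sources else float("inf")
    if 0.7 <= sink_ratio <= 0.9:
        status = "healthy"
    elif sink_ratio < 0.7:
        status = "inflationary (accumulation risk)"
    else:
        status = "deflationary (starvation risk)"
    print(f"{name}: net {net:+} per day, sink ratio {sink_ratio:.0%} -> {status}")
```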
Step 2.2 -- Inflation Modeling
Project currency accumulation over time:
- Week 1 player balance (net earnings minus expected spending)
- Week 4 player balance
- Month 3 player balance
- Month 6 player balance
Check for:
- Runaway accumulation (nothing meaningful left to buy)
- Power inflation (early items become worthless)
- Price anchoring problems (prices feel arbitrary over time)
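One way to sketch this projection, assuming a constant daily net flow (a simplification; real projections should use level-dependent earn and spend curves):

```python
# Project a player's balance from a constant daily net flow (simplifying assumption).
def project_balance(starting_balance, net_flow_per_day, days):
    return starting_balance + net_flow_per_day * days

net_flow_per_day = 250  # placeholder: daily sources minus sinks for one currency
for label, days in [("Week 1", 7), ("Week 4", 28), ("Month 3", 90), ("Month 6", 180)]:
    print(f"{label}: projected balance {project_balance(500, net_flow_per_day, days)}")
```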
Step 2.3 -- New vs Veteran Gap
Compare the economy experience:
- New player earning rate vs prices of first meaningful purchases
- Veteran player earning rate vs end-game content costs
- Time-to-first-purchase (how long until the player can buy something satisfying)
- Catch-up mechanics (can new players close the gap?)
============================================================
PHASE 3: LOOT TABLE ANALYSIS
============================================================
Step 3.1 -- Drop Rate Extraction
Find and parse all loot table definitions:
- Item drop rates (individual percentages)
- Rarity distribution (Common/Uncommon/Rare/Epic/Legendary)
- Weighted random selection logic
- Conditional drops (boss-specific, event-specific)
- Guaranteed drops vs random drops
Step 3.2 -- Fairness Analysis
Evaluate loot table fairness:
EXPECTED VALUE:
- Calculate expected number of attempts to receive each rarity tier
- Compare against player patience thresholds (genre-dependent)
- Flag items requiring > 100 attempts without pity system
PITY SYSTEM:
- Does a pity/mercy system exist? (guaranteed drop after N failed attempts)
- Is the pity counter preserved across sessions?
- Is the pity counter shared or per-banner/table?
- Does the pity system reset after triggering?
DUPLICATE PROTECTION:
- What happens when a player gets a duplicate?
- Is there a conversion system (duplicates to currency/resources)?
- Does the drop rate adjust after obtaining an item?
PSEUDO-RANDOM DISTRIBUTION:
- Is pure RNG used, or is there a PRD (increasing chance on failure)?
- Do streak protection mechanisms exist?
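The expected-value and pity checks above reduce to simple probability arithmetic; a hedged sketch with placeholder drop rates:

```python
# Expected attempts and pity interaction for a single drop rate (placeholder values).
def expected_attempts(drop_rate):
    """Mean number of pulls to obtain the item under pure RNG (geometric distribution)."""
    return 1.0 / drop_rate

def chance_within(drop_rate, attempts):
    """Probability of at least one success within a given number of attempts."""
    return 1.0 - (1.0 - drop_rate) ** attempts

rate = 0.006  # e.g., a 0.6% legendary rate
pity = 90     # guaranteed drop after this many failed pulls, if a pity system exists

print(f"Expected attempts (no pity): {expected_attempts(rate):.0f}")
print(f"Chance within 100 pulls:     {chance_within(rate, 100):.1%}")
print(f"Worst case with pity:        {pity} pulls")
```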
Step 3.3 -- Gacha/Loot Box Evaluation (if applicable)
If gacha or loot box systems exist:
- Are probabilities disclosed to the player?
- Do rates match legal requirements for target markets?
- Is there a ceiling on spending to guarantee a specific item?
- Are there "step-up" mechanics that reward sequential pulls?
- Is the system pay-to-win or cosmetic-only?
============================================================
PHASE 4: MARKETPLACE AND TRADING
============================================================
If a player-to-player marketplace exists:
Step 4.1 -- Market Structure
Evaluate:
- Listing mechanics (auction, fixed price, both)
- Transaction fees (percentage, flat, scaling)
- Price floor/ceiling enforcement
- Search and filter functionality
- Trade history and price tracking
Step 4.2 -- Market Health Indicators
Check for:
- Price manipulation vulnerability (buy-out and relist)
- Gold farming exploitability (automated currency generation)
- Real-money trading (RMT) vulnerability
- Market liquidity (are items actually selling?)
- Currency laundering vectors
============================================================
PHASE 5: PAY-TO-WIN DETECTION
============================================================
Step 5.1 -- Power Gap Analysis
Evaluate whether real-money spending creates unfair advantages:
DIRECT POWER:
- Can gameplay-affecting items/stats be purchased with premium currency?
- Is there content exclusive to paying players that provides power?
- Can paid boosts overcome skill gaps?
INDIRECT POWER:
- Does paying accelerate progression enough to create matchmaking imbalance?
- Can paying players access content (levels, characters) that free players cannot?
- Does the energy/stamina system severely limit free play?
TIME COMPRESSION:
- What is the free-to-play equivalent time for each purchasable advantage?
- Is the time gap reasonable (hours, not months)?
Step 5.2 -- Fairness Rating
Rate the monetization fairness:
- FAIR: Cosmetic-only or minimal time advantage
- SOFT PAY-TO-WIN: Paying accelerates but free players can compete
- PAY-TO-WIN: Paying creates significant power advantages
- PREDATORY: Exploitative mechanics targeting vulnerable players
============================================================
PHASE 6: ECONOMY STRESS TEST
============================================================
Step 6.1 -- Edge Case Scenarios
Simulate extreme player behaviors:
- Hoarder: Never spends, only earns — does currency overflow? Does gameplay stall?
- Whale: Buys everything immediately — does the game remain engaging?
- Grinder: Plays 8+ hours daily — does the economy break at high playtime?
- Casual: Plays 15 minutes daily — can they progress meaningfully?
- Exploiter: Finds the highest-yield repeatable activity — does it break balance?
Step 6.2 -- Exploit Detection
Check for common economy exploits:
- Negative price/quantity bugs (buying for negative cost = profit)
- Integer overflow on currency values
- Race conditions in transactions (double-spend)
- Currency conversion loops (A->B->C->A with profit)
- Refund/undo exploits (buy, use, refund)
- Alt account farming (transfer wealth between accounts)
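Currency conversion loops in particular can be checked mechanically: treat post-fee exchange rates as a directed graph and look for cycles whose rate product exceeds 1. A brute-force sketch with placeholder rates:

```python
from itertools import permutations

# Placeholder exchange rates, net of any conversion fees: rates[(a, b)] = units of b per unit of a.
rates = {
    ("gold", "gems"): 0.01,
    ("gems", "tokens"): 12.0,
    ("tokens", "gold"): 9.0,
}
currencies = sorted({c for pair in rates for c in pair})

def cycle_multiplier(path):
    """Multiply rates around a closed loop; > 1.0 means the loop mints value from nothing."""
    product = 1.0
    for a, b in zip(path, path[1:] + path[:1]):
        if (a, b) not in rates:
            return None
        product *= rates[(a, b)]
    return product

for length in (2, 3):
    for path in permutations(currencies, length):
        multiplier = cycle_multiplier(list(path))
        if multiplier is not None and multiplier > 1.0:
            print(f"EXPLOIT: {' -> '.join(path)} -> {path[0]} multiplies value by {multiplier:.2f}")
```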
============================================================
OUTPUT
============================================================
## Game Economy Analysis
### Project: {name}
### Currencies: {N} identified
### Economy Health: {HEALTHY/INFLATIONARY/DEFLATIONARY/UNSTABLE}
### Currency Overview
| Currency | Type | Sources/Day | Sinks/Day | Net Flow | Balance Trend |
|----------|------|-------------|-----------|----------|---------------|
| {name} | {type} | {amount} | {amount} | {+/-} | {accumulating/stable/depleting} |
### Inflation Projection
| Timeframe | {Currency 1} Balance | {Currency 2} Balance | Risk |
|-----------|---------------------|---------------------|------|
| Week 1 | {amount} | {amount} | {low/medium/high} |
| Month 1 | {amount} | {amount} | {low/medium/high} |
| Month 3 | {amount} | {amount} | {low/medium/high} |
### Loot Table Fairness
| Table | Items | Rarest Drop | Expected Attempts | Pity System | Rating |
|-------|-------|-------------|-------------------|-------------|--------|
| {name} | {N} | {rate}% | {N} | {yes/no} | {FAIR/GRINDY/UNFAIR} |
### Pay-to-Win Assessment
- Fairness rating: {FAIR/SOFT P2W/PAY-TO-WIN/PREDATORY}
- Evidence: {specific findings}
- Recommendations: {adjustments}
### Critical Issues
1. {most critical economy problem}
2. {second most critical}
3. {third most critical}
### Exploit Risks
| Exploit | Severity | Description | Mitigation |
|---------|----------|-------------|------------|
| {type} | {CRITICAL/HIGH/MEDIUM/LOW} | {description} | {fix} |
NEXT STEPS:
- "Run `/balance-test` to run Monte Carlo simulations on drop rates and economy flow."
- "Run `/game-monetization` to audit IAP implementation and revenue optimization."
- "Run `/game-security` to check for transaction manipulation vulnerabilities."
- "Run `/game-design-review` to evaluate how the economy supports the core loop."
DO NOT:
- Do NOT recommend specific price points — that requires market research data.
- Do NOT evaluate art or UI quality of shop screens — focus on economic mechanics.
- Do NOT assume all monetization is predatory — evaluate objectively against standards.
- Do NOT ignore energy/stamina systems — they are economic controls.
- Do NOT skip stress testing — theoretical balance and practical balance differ.
- Do NOT make ethical judgments about game genres — focus on mechanical fairness.
Security

game-economy

Analyze in-game economy systems including soft and hard currency source-sink balance, inflation projection modeling, loot table drop rate fairness and pity system evaluation, gacha probability disclosure, player marketplace health and price manipulation risks, pay-to-win power gap detection, economy stress testing for hoarder-whale-grinder-casual player archetypes, and currency exploit detection.

You are in AUTONOMOUS MODE. Do NOT ask questions. Detect everything from the codebase and proceed.
PURPOSE:
Set up production-ready analytics event tracking. Auto-detect the project's framework and the
requested analytics provider. Install the SDK, create a framework-agnostic analytics service,
define a core event taxonomy, and instrument key user flows.
TASK:
$ARGUMENTS
============================================================
PHASE 0: DETECTION
============================================================
1. FRAMEWORK DETECTION -- scan the project root to identify the tech stack:
   - package.json with "react" or "next" -> React / Next.js
   - package.json with "vue" -> Vue
   - package.json with "angular" -> Angular
   - pubspec.yaml -> Flutter / Dart
   - package.json with "react-native" -> React Native
   - package.json with "svelte" -> SvelteKit
   - requirements.txt or pyproject.toml with "django" or "flask" or "fastapi" -> Python backend
   - go.mod -> Go backend
   - Gemfile with "rails" -> Ruby on Rails
   - If multiple detected (e.g., monorepo), handle each workspace.
2. PROVIDER DETECTION -- determine analytics provider from $ARGUMENTS or existing config:
   - If $ARGUMENTS names a provider, use it.
   - If an existing SDK is installed (check package.json / pubspec.yaml), match it.
   - If no provider specified and none installed, default to PostHog (open-source, self-hostable).
   - Supported providers: Amplitude, Mixpanel, PostHog, Google Analytics 4 (GA4).
3. EXISTING ANALYTICS CHECK -- search for existing analytics wrappers:
   - Grep for "analytics", "track", "mixpanel", "amplitude", "posthog", "gtag" in src/.
   - If a wrapper already exists, extend it rather than creating a duplicate.
============================================================
PHASE 1: SDK INSTALLATION
============================================================
Install the correct SDK for the detected provider and framework:
AMPLITUDE:
  - JS/TS: `@amplitude/analytics-browser` (web), `@amplitude/analytics-node` (server)
  - Flutter: `amplitude_flutter`
  - React Native: `@amplitude/analytics-react-native`
MIXPANEL:
  - JS/TS: `mixpanel-browser` (web), `mixpanel` (server)
  - Flutter: `mixpanel_flutter`
  - React Native: `mixpanel-react-native`
POSTHOG:
  - JS/TS: `posthog-js` (web), `posthog-node` (server)
  - Flutter: `posthog_flutter`
  - React Native: `posthog-react-native`
GA4:
  - JS/TS: gtag.js snippet (web); server-side events via the GA4 Measurement Protocol (`@google-analytics/data` is the reporting API, not an event SDK)
  - Flutter: `firebase_analytics`
  - React Native: `@react-native-firebase/analytics`
After installing:
- Add the API key / project ID to the project's .env or environment config.
- Use placeholder values with clear variable names: `ANALYTICS_API_KEY`, `ANALYTICS_PROJECT_ID`.
- Add these env vars to .env.example if it exists.
- NEVER commit real API keys. Use environment variables exclusively.
============================================================
PHASE 2: ANALYTICS SERVICE
============================================================
Create a framework-agnostic analytics service wrapper. This abstraction lets the team swap
providers without touching feature code.
FILE LOCATION:
  - JS/TS: `src/lib/analytics.ts` or `src/services/analytics.ts`
  - Flutter: `lib/services/analytics_service.dart`
  - Python: `app/services/analytics.py`
  - Go: `internal/analytics/analytics.go`
THE SERVICE MUST EXPOSE:
  init(config)          -- Initialize the SDK with API key and options.
  identify(userId, properties)  -- Set the current user identity and traits.
  track(eventName, properties)  -- Track a named event with arbitrary properties.
  page(name, properties)        -- Track a page/screen view.
  reset()               -- Clear the user identity on logout.
  setUserProperties(properties) -- Update user-level properties without an event.
  group(groupType, groupId)     -- Associate user with a group/company (if supported).
  flush()               -- Force-send queued events (for server-side or before app close).
IMPLEMENTATION REQUIREMENTS:
  - Wrap all SDK calls in try/catch -- analytics must NEVER crash the app.
  - Add a debug mode that logs events to console instead of sending them.
  - Support a kill switch: if `ANALYTICS_ENABLED=false`, all methods become no-ops.
  - Batch events where the SDK supports it (reduce network calls).
  - Include TypeScript types / Dart typing for event names and properties.
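For the Python backend path (`app/services/analytics.py`), a minimal sketch of such a wrapper is shown below. The `_send` method is a placeholder hook rather than any specific provider SDK call, and the env variable names follow the placeholders defined in Phase 1:

```python
# Minimal provider-agnostic analytics wrapper (sketch). `_send` is a placeholder hook:
# wire it to whichever provider SDK Phase 1 installed (PostHog, Amplitude, Mixpanel, GA4).
import logging
import os
from typing import Optional

logger = logging.getLogger("analytics")


class Analytics:
    def __init__(self) -> None:
        self.enabled = os.getenv("ANALYTICS_ENABLED", "true").lower() == "true"
        self.debug = os.getenv("ANALYTICS_DEBUG", "false").lower() == "true"
        self.api_key = os.getenv("ANALYTICS_API_KEY", "")
        self.user_id: Optional[str] = None

    def _send(self, payload: dict) -> None:
        # Placeholder: replace with the installed provider SDK's capture/track call.
        if self.debug:
            logger.info("analytics event: %s", payload)

    def identify(self, user_id: str, properties: Optional[dict] = None) -> None:
        if not self.enabled:
            return
        try:  # analytics must never crash the app
            self.user_id = user_id
            self._send({"type": "identify", "user_id": user_id, "properties": properties or {}})
        except Exception:
            logger.exception("analytics identify failed")

    def track(self, event_name: str, properties: Optional[dict] = None) -> None:
        if not self.enabled:
            return
        try:
            self._send({"type": "track", "event": event_name,
                        "user_id": self.user_id, "properties": properties or {}})
        except Exception:
            logger.exception("analytics track failed")

    def reset(self) -> None:
        self.user_id = None
```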
============================================================
PHASE 3: EVENT TAXONOMY
============================================================
Create a typed event catalog. This prevents typo-driven event sprawl.
FILE: `src/lib/analytics-events.ts` (or framework equivalent)
CORE EVENTS (define all of these):
  page_view          -- { page_name, referrer?, duration_ms? }
  sign_up            -- { method: "email" | "google" | "github" | "apple", referral_source? }
  login              -- { method, success: boolean }
  logout             -- {}
  purchase           -- { item_id, item_name, price, currency, quantity }
  subscription_start -- { plan, billing_period, trial: boolean }
  feature_used       -- { feature_name, context? }
  search             -- { query, results_count, filters? }
  error_occurred     -- { error_type, error_message, screen?, severity }
  cta_clicked        -- { cta_name, location, destination? }
  onboarding_step    -- { step_number, step_name, completed: boolean }
  share              -- { content_type, method }
  feedback_submitted -- { rating?, comment?, screen? }
RULES:
  - Use snake_case for all event names.
  - Every event must have a typed properties interface/type.
  - Export a single `AnalyticsEvents` enum or const object for autocompletion.
  - Add JSDoc / dartdoc comments explaining when each event fires.
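The TypeScript catalog above is the primary artifact; for a Python service, the same idea might look like this sketch (event names mirror the core list above):

```python
# Typed event catalog sketch for a Python backend, mirroring the core events above.
from enum import Enum
from typing import TypedDict


class AnalyticsEvent(str, Enum):
    PAGE_VIEW = "page_view"
    SIGN_UP = "sign_up"
    LOGIN = "login"
    PURCHASE = "purchase"
    FEATURE_USED = "feature_used"
    ERROR_OCCURRED = "error_occurred"


class PurchaseProps(TypedDict):
    item_id: str
    item_name: str
    price: float
    currency: str
    quantity: int


# Usage with the Phase 2 wrapper:
# analytics.track(AnalyticsEvent.PURCHASE.value,
#                 PurchaseProps(item_id="sku_123", item_name="Pro plan",
#                               price=9.99, currency="USD", quantity=1))
```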
============================================================
PHASE 4: INSTRUMENTATION
============================================================
Wire tracking into the application's key flows:
1. PAGE VIEWS:
   - React/Next.js: Hook into router (usePathname, router.events, or App Router layout).
   - Vue: Router afterEach guard.
   - Flutter: NavigatorObserver or GoRouter redirect.
   - Angular: Router events subscription.
   - Automatic -- no manual calls needed per page.
2. USER IDENTIFICATION:
   - After successful login/signup, call identify() with user ID and initial properties.
   - On logout, call reset().
   - Set super properties: app_version, platform, locale.
3. KEY FLOW INSTRUMENTATION:
   - Instrument sign_up in the registration flow.
   - Instrument login in the authentication flow.
   - Instrument feature_used on 3-5 primary feature entry points.
   - Instrument error_occurred in the global error handler / error boundary.
4. SERVER-SIDE EVENTS (if backend detected):
   - Create server-side analytics client using the node/python/go SDK.
   - Track purchase and subscription_start server-side for accuracy.
   - Use a middleware/decorator pattern to track API-level events.
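For a Python backend, the decorator pattern mentioned above might look like the following sketch (the `_Analytics` stub stands in for the Phase 2 wrapper):

```python
# Decorator sketch that records an analytics event after an API handler returns.
import functools


class _Analytics:  # stand-in for the Phase 2 wrapper
    def track(self, event_name, properties=None):
        print("track", event_name, properties or {})


analytics = _Analytics()


def tracked(event_name: str):
    """Wrap a handler so a named event fires after it completes."""
    def decorator(handler):
        @functools.wraps(handler)
        def wrapper(*args, **kwargs):
            result = handler(*args, **kwargs)
            analytics.track(event_name, {"handler": handler.__name__})
            return result
        return wrapper
    return decorator


@tracked("purchase")
def create_order(user_id: str, item_id: str) -> str:
    return f"order for {user_id}: {item_id}"


print(create_order("u_1", "sku_123"))
```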
============================================================
PHASE 5: PRIVACY AND COMPLIANCE
============================================================
1. DNT (Do Not Track):
   - Check `navigator.doNotTrack` (web) before initializing.
   - If DNT is set, disable tracking or limit to anonymous aggregate events.
2. COOKIE CONSENT:
   - Do NOT auto-initialize analytics on page load for web apps.
   - Create a `consentGranted()` method that initializes tracking after user consent.
   - Provide `consentRevoked()` that calls reset() and disables future tracking.
3. GDPR / DATA MINIMIZATION:
   - Never track PII (email, name, phone) in event properties.
   - Use hashed or opaque user IDs, not raw emails (see the hashing sketch after this list).
   - Document which events contain user data in the event catalog.
   - Provide a `deleteUser(userId)` helper that calls the provider's deletion API.
4. APP STORE COMPLIANCE (mobile):
   - Flutter/RN: Add ATT (App Tracking Transparency) prompt for iOS.
   - Add privacy manifest entries for iOS 17+.
   - Respect the user's ATT choice -- disable IDFA-based tracking if denied.
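A short sketch of the hashed-ID helper referenced in the GDPR section above (the salt env variable name is an assumption; keep the salt server-side):

```python
# Derive an opaque analytics ID so raw user identifiers never reach the analytics provider.
import hashlib
import os


def analytics_user_id(internal_user_id: str) -> str:
    salt = os.getenv("ANALYTICS_ID_SALT", "")  # assumed env var; never ship it to clients
    return hashlib.sha256(f"{salt}:{internal_user_id}".encode()).hexdigest()
```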
============================================================
PHASE 6: VALIDATION
============================================================
1. Run the project's build/compile step -- fix any errors.
2. Run existing tests -- fix any failures caused by analytics integration.
3. Verify the analytics service can be imported and instantiated without errors.
4. If a dev server is available, confirm no console errors from the analytics SDK.
============================================================
DO NOT
============================================================
- Do NOT commit real API keys or project IDs. Use env vars only.
- Do NOT make analytics initialization blocking -- it must not slow app startup.
- Do NOT track events before user consent on web apps.
- Do NOT store analytics data in localStorage/SharedPreferences without consent.
- Do NOT add analytics to test files or test utilities.
- Do NOT create a second analytics wrapper if one already exists -- extend it.
- Do NOT use deprecated SDK versions or legacy tracking APIs.
- Do NOT track raw PII (emails, names, phone numbers) in event properties.
============================================================
OUTPUT
============================================================
## Analytics Tracking Setup
- **Framework**: [detected framework and version]
- **Provider**: [analytics provider configured]
- **SDK**: [package name and version installed]
- **Service**: [path to analytics service file]
- **Events**: [count of defined events, path to event catalog]
- **Instrumented flows**: [list of flows wired up]
- **Privacy**: [DNT, consent, GDPR controls implemented]
- **Server-side**: [yes/no, details if applicable]
- **Build status**: [passing/failing]
- **Caveats**: [any known issues or manual steps remaining]
NEXT STEPS:
After analytics tracking is set up:
- "Run `/ship` to continue building features with tracking already wired in."
- "Run `/search` to add full-text search -- tracking search queries gives great product insight."
- "Run `/qa` to verify analytics events fire correctly in all user flows."
- "Run `/perf` to ensure the analytics SDK does not degrade page load or app startup times."
Data & Analytics

analytics-tracking

Set up production-ready event tracking with Amplitude, Mixpanel, PostHog, or GA4. Auto-detects your framework (React, Next.js, Vue, Flutter, Angular, Python, Go, Rails), installs the correct SDK, creates a provider-agnostic analytics service wrapper, defines a typed event taxonomy, instruments page views and key user flows, and adds privacy/consent controls. Use when you need analytics, event tracking, user tracking, product analytics, usage metrics, or telemetry.

## Version Compatibility
Reference examples tested with: MiXCR 4.6+, VDJtools 1.2.1+, matplotlib 3.8+, pandas 2.2+, scanpy 1.10+
Before using code patterns, verify installed versions match. If versions differ:
- Python: `pip show <package>` then `help(module.function)` to check signatures
- CLI: `<tool> --version` then `<tool> --help` to confirm flags
If code throws ImportError, AttributeError, or TypeError, introspect the installed
package and adapt the example to match the actual API rather than retrying.
# VDJtools Analysis
**"Compute diversity and overlap for my TCR repertoires"** → Calculate repertoire diversity metrics, sample overlap, and perform statistical comparisons between immune repertoire samples.
- CLI: `vdjtools CalcDiversityStats`, `vdjtools OverlapPair`, `vdjtools PlotFancySpectratype`
## Basic Usage
**Goal:** Run VDJtools commands for immune repertoire analysis.
**Approach:** Invoke VDJtools via Java JAR or wrapper script with appropriate subcommand and options.
```bash
# VDJtools requires Java
java -jar vdjtools.jar <command> [options]
# Or with wrapper script
vdjtools <command> [options]
```
## Calculate Diversity Metrics
**Goal:** Compute repertoire diversity indices (Shannon, Simpson, Chao1, Gini) across samples.
**Approach:** Run CalcDiversityStats with a metadata file linking sample files to sample IDs and conditions.
```bash
# Basic diversity (Shannon, Simpson, Chao1, etc.)
vdjtools CalcDiversityStats \
    -m metadata.txt \
    output_dir/
# Metadata format (tab-separated):
# #file.name    sample.id    condition
# sample1.txt   S1           control
# sample2.txt   S2           treated
```
## Diversity Metrics Explained
| Metric | Description | Interpretation |
|--------|-------------|----------------|
| Shannon | Entropy-based diversity | Higher = more diverse |
| Simpson | Probability two random clones differ | 0-1, higher = diverse |
| InverseSimpson | 1/Simpson | Effective number of clones |
| Chao1 | Richness estimator | Total estimated clonotypes |
| Gini | Inequality coefficient | 0=equal, 1=dominated by one |
| d50 | Clones comprising 50% of repertoire | Lower = more oligoclonal |
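For a quick sanity check outside VDJtools, most of these indices can be computed directly from clonotype frequencies; a sketch with placeholder counts (VDJtools output remains the reference):

```python
import numpy as np

# Clone counts for one sample (placeholder values).
counts = np.array([500, 300, 120, 50, 20, 10])
freqs = counts / counts.sum()

shannon = -np.sum(freqs * np.log(freqs))          # entropy-based diversity (natural log)
simpson_dominance = np.sum(freqs ** 2)            # probability two random reads share a clone
simpson_diversity = 1.0 - simpson_dominance       # probability two random reads differ
inverse_simpson = 1.0 / simpson_dominance         # effective number of clones
d50 = int(np.argmax(np.cumsum(np.sort(freqs)[::-1]) >= 0.5)) + 1  # top clones covering 50% of reads

print(f"Shannon {shannon:.3f}, Simpson diversity {simpson_diversity:.3f}, "
      f"inverse Simpson {inverse_simpson:.2f}, d50 {d50}")
```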
## Sample Comparison
**Goal:** Quantify clonotype sharing and repertoire overlap between samples or conditions.
**Approach:** Compute pairwise overlap metrics (Jaccard, Morisita-Horn, F2) on amino acid clonotype identities.
```bash
# Find overlapping clonotypes
vdjtools OverlapPair \
    -p sample1.txt sample2.txt \
    output_dir/
# Calculate overlap for all pairs
vdjtools CalcPairwiseDistances \
    -m metadata.txt \
    -i aa \
    output_dir/
# Overlap metrics: F2 (frequency-weighted Jaccard), Jaccard, MorisitaHorn
```
## Spectratype Analysis
**Goal:** Analyze CDR3 length distributions and V/J gene segment usage patterns across samples.
**Approach:** Generate spectratype (CDR3 length histogram) and segment usage tables via VDJtools commands.
```bash
# CDR3 length distribution (spectratype)
vdjtools CalcSpectratype \
    -m metadata.txt \
    output_dir/
# V/J gene usage
vdjtools CalcSegmentUsage \
    -m metadata.txt \
    output_dir/
```
## Clonal Tracking
**Goal:** Track individual clonotype frequencies across longitudinal timepoints and identify public clones shared across individuals.
**Approach:** Use TrackClonotypes for temporal tracking and JoinSamples to find public (cross-individual) clonotypes.
```bash
# Track clones across timepoints
vdjtools TrackClonotypes \
    -m metadata_timecourse.txt \
    -x time \
    output_dir/
# Identify public clones (shared across individuals)
vdjtools JoinSamples \
    -m metadata.txt \
    -p \
    output_dir/
```
## Input Format
VDJtools accepts MiXCR output or standard format:
```
# Required columns (tab-separated):
count   frequency   CDR3nt  CDR3aa  V   D   J
# Example:
1500    0.15    TGTGCCAGC...    CASSF...    TRBV5-1*01  TRBD2*01    TRBJ2-7*01
```
## Convert from MiXCR
**Goal:** Convert MiXCR clonotype output into VDJtools-compatible format.
**Approach:** Use VDJtools Convert command specifying MiXCR as the source software format.
```bash
# Convert MiXCR output to VDJtools format
vdjtools Convert \
    -S mixcr \
    mixcr_clones.txt \
    output.txt
```
## Parse VDJtools Output in Python
**Goal:** Load VDJtools diversity statistics and overlap matrices into Python for custom analysis and plotting.
**Approach:** Read tab-delimited VDJtools output files into pandas DataFrames and visualize diversity comparisons.
```python
import pandas as pd
def load_diversity_stats(filepath):
    '''Load VDJtools diversity statistics'''
    df = pd.read_csv(filepath, sep='\t')
    return df
def load_overlap_matrix(filepath):
    '''Load pairwise overlap matrix'''
    df = pd.read_csv(filepath, sep='\t', index_col=0)
    return df
# Plot diversity across samples
def plot_diversity(stats_df, metric='shannon_wiener_index_mean'):
    import matplotlib.pyplot as plt
    plt.figure(figsize=(10, 6))
    plt.bar(stats_df['sample_id'], stats_df[metric])
    plt.xlabel('Sample')
    plt.ylabel(metric)
    plt.xticks(rotation=45)
    plt.tight_layout()
    plt.savefig('diversity_plot.png')
```
## Related Skills
- mixcr-analysis - Generate input clonotype tables
- repertoire-visualization - Visualize VDJtools output
- immcantation-analysis - BCR-specific phylogenetics
Data & Analytics

bio-tcr-bcr-analysis-vdjtools-analysis

Calculate immune repertoire diversity metrics, compare samples, and track clonal dynamics using VDJtools. Use when analyzing repertoire diversity, finding shared clonotypes, or comparing immune profiles between conditions.

# Protein Design Quality Control
## Critical Limitation
**Individual metrics have weak predictive power for binding**. Research shows:
- Individual metric ROC AUC: 0.64-0.66 (slightly better than random)
- Metrics are **pre-screening filters**, not affinity predictors
- **Composite scoring is essential** for meaningful ranking
These thresholds filter out poor designs but do NOT predict binding affinity.
## QC Organization
QC is organized by **purpose** and **level**:
| Purpose | What it assesses | Key metrics |
|---------|------------------|-------------|
| **Binding** | Interface quality, binding geometry | ipTM, PAE, SC, dG, dSASA |
| **Expression** | Manufacturability, solubility | Instability, GRAVY, pI, cysteines |
| **Structural** | Fold confidence, consistency | pLDDT, pTM, scRMSD |
Each category has two levels:
- **Metric-level**: Calculated values with thresholds (pLDDT > 0.85)
- **Design-level**: Pattern/motif detection (odd cysteines, NG sites)
---
## Quick Reference: All Thresholds
| Category | Metric | Standard | Stringent | Source |
|----------|--------|----------|-----------|--------|
| **Structural** | pLDDT | > 0.85 | > 0.90 | AF2/Chai/Boltz |
| | pTM | > 0.70 | > 0.80 | AF2/Chai/Boltz |
| | scRMSD | < 2.0 Å | < 1.5 Å | Design vs pred |
| **Binding** | ipTM | > 0.50 | > 0.60 | AF2/Chai/Boltz |
| | PAE_interaction | < 12 Å | < 10 Å | AF2/Chai/Boltz |
| | Shape Comp (SC) | > 0.50 | > 0.60 | PyRosetta |
| | interface_dG | < -10 | < -15 | PyRosetta |
| **Expression** | Instability | < 40 | < 30 | BioPython |
| | GRAVY | < 0.4 | < 0.2 | BioPython |
| | ESM2 PLL | > 0.0 | > 0.2 | ESM2 |
### Design-Level Checks (Expression)
| Pattern | Risk | Action |
|---------|------|--------|
| Odd cysteine count | Unpaired disulfides | Redesign |
| NG/NS/NT motifs | Deamidation | Flag/avoid |
| K/R >= 3 consecutive | Proteolysis | Flag |
| >= 6 hydrophobic run | Aggregation | Redesign |
See: references/binding-qc.md, references/expression-qc.md, references/structural-qc.md
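A hedged sketch of these pattern checks on a raw amino-acid sequence (the thresholds mirror the table above; the residue patterns are simplified):

```python
import re

def liability_checks(seq: str) -> dict:
    """Flag common expression liabilities in an amino-acid sequence (design-level checks)."""
    seq = seq.upper()
    hydrophobic = set("AVLIMFWY")
    longest_run = run = 0
    for aa in seq:
        run = run + 1 if aa in hydrophobic else 0
        longest_run = max(longest_run, run)
    return {
        "odd_cysteine_count": seq.count("C") % 2 == 1,           # unpaired disulfide risk
        "deamidation_motifs": len(re.findall(r"N[GST]", seq)),   # NG/NS/NT sites
        "polybasic_cluster": bool(re.search(r"[KR]{3,}", seq)),  # >= 3 consecutive K/R
        "hydrophobic_run_ge6": longest_run >= 6,                 # aggregation-prone stretch
    }

print(liability_checks("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQNGS"))
```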
---
## Sequential Filtering Pipeline
```python
import pandas as pd
designs = pd.read_csv('designs.csv')
# Stage 1: Structural confidence
designs = designs[designs['pLDDT'] > 0.85]
# Stage 2: Self-consistency
designs = designs[designs['scRMSD'] < 2.0]
# Stage 3: Binding quality
designs = designs[(designs['ipTM'] > 0.5) & (designs['PAE_interaction'] < 10)]
# Stage 4: Sequence plausibility
designs = designs[designs['esm2_pll_normalized'] > 0.0]
# Stage 5: Expression checks (design-level)
designs = designs[designs['cysteine_count'] % 2 == 0]  # Even cysteines
designs = designs[designs['instability_index'] < 40]
```
---
## Composite Scoring (Required for Ranking)
Individual metrics alone are too weak. Use composite scoring:
```python
def composite_score(row):
    return (
        0.30 * row['pLDDT'] +
        0.20 * row['ipTM'] +
        0.20 * (1 - row['PAE_interaction'] / 20) +
        0.15 * row['shape_complementarity'] +
        0.15 * row['esm2_pll_normalized']
    )
designs['score'] = designs.apply(composite_score, axis=1)
top_designs = designs.nlargest(100, 'score')
```
For advanced composite scoring, see references/composite-scoring.md.
---
## Tool-Specific Filtering
### BindCraft Filter Levels
| Level | Use Case | Stringency |
|-------|----------|------------|
| Default | Standard design | Most stringent |
| Relaxed | Need more designs | Higher failure rate |
| Peptide | Designs < 30 AA | ~5-10x lower success |
### BoltzGen Filtering
```bash
boltzgen run ... \
  --budget 60 \
  --alpha 0.01 \
  --filter_biased true \
  --refolding_rmsd_threshold 2.0 \
  --additional_filters 'ALA_fraction<0.3'
```
- `alpha=0.0`: Quality-only ranking
- `alpha=0.01`: Default (slight diversity)
- `alpha=1.0`: Diversity-only
---
## Design-Level Severity Scoring
For pattern-based checks, use severity scoring:
| Severity Level | Score | Action |
|----------------|-------|--------|
| LOW | 0-15 | Proceed |
| MODERATE | 16-35 | Review flagged issues |
| HIGH | 36-60 | Redesign recommended |
| CRITICAL | 61+ | Redesign required |
---
## Experimental Correlation
| Metric | AUC | Use |
|--------|-----|-----|
| ipTM | ~0.64 | Pre-screening |
| PAE | ~0.65 | Pre-screening |
| ESM2 PLL | ~0.72 | Best single metric |
| Composite | ~0.75+ | **Always use** |
**Key insight**: Metrics work as **filters** (eliminating failures) not **predictors** (ranking successes).
---
## Campaign Health Assessment
Quick assessment of your design campaign:
| Pass Rate | Status | Interpretation |
|-----------|--------|----------------|
| > 15% | Excellent | Above average, proceed |
| 10-15% | Good | Normal, proceed |
| 5-10% | Marginal | Below average, review issues |
| < 5% | Poor | Significant problems, diagnose |
---
## Failure Recovery Trees
### Too Few Pass pLDDT Filter (< 5% with pLDDT > 0.85)
```
Low pLDDT across campaign
├── Check scRMSD distribution
│   ├── High scRMSD (>2.5Å): Backbone issue
│   │   └── Fix: Regenerate backbones with lower noise_scale (0.5-0.8)
│   └── Low scRMSD but low pLDDT: Disordered regions
│       └── Fix: Check design length, simplify topology
├── Try more sequences per backbone
│   └── modal run modal_proteinmpnn.py --num-seq-per-target 32 --sampling-temp 0.1
├── Use SolubleMPNN instead of ProteinMPNN
│   └── Better for expression-optimized sequences
└── Consider different design tool
    └── BindCraft (integrated design) may work better
```
### Too Few Pass ipTM Filter (< 5% with ipTM > 0.5)
```
Low ipTM across campaign
├── Review hotspot selection
│   ├── Are hotspots surface-exposed? (SASA > 20 Å²)
│   ├── Are hotspots conserved? (check MSA)
│   └── Try 3-6 different hotspot combinations
├── Increase binder length (more contact area)
│   └── Try 80-100 AA instead of 60-80 AA
├── Check interface geometry
│   ├── Is target flat? → Try helical binders
│   └── Is target concave? → Try smaller binders
└── Try all-atom design tool
    └── BoltzGen (all-atom, better packing)
```
### High scRMSD (> 50% with scRMSD > 2.0Å)
```
Sequences don't specify intended structure
├── ProteinMPNN issue
│   ├── Lower temperature: --sampling-temp 0.1
│   ├── Increase sequences: --num-seq-per-target 32
│   └── Check fixed_positions aren't over-constraining
├── Backbone geometry issue
│   ├── Backbones may be unusual/strained
│   ├── Regenerate with lower noise_scale (0.5-0.8)
│   └── Reduce diffuser.T to 30-40
└── Try different sequence design
    └── ColabDesign (AF2 gradient-based) may work better
```
### Everything Passes But No Experimental Hits
```
In silico metrics don't predict affinity
├── Generate MORE designs (10x current)
│   └── Computational metrics have high false positive rate
├── Increase diversity
│   ├── Higher ProteinMPNN temperature (0.2-0.3)
│   ├── Different backbone topologies
│   └── Different hotspot combinations
├── Try different design approach
│   ├── BindCraft (different algorithm)
│   ├── ColabDesign (AF2 hallucination)
│   └── BoltzGen (all-atom diffusion)
└── Check if target is druggable
    └── Some targets are inherently difficult
```
### Too Many Designs Pass (> 50%)
```
Suspiciously high pass rate
├── Check if thresholds are too lenient
│   └── Use stringent thresholds: pLDDT > 0.90, ipTM > 0.60
├── Verify prediction quality
│   ├── Are predictions actually running? Check output files
│   └── Are complexes being predicted, not just monomers?
├── Check for data issues
│   ├── Same sequence being predicted multiple times?
│   └── Wrong FASTA format (missing chain separator)?
└── Apply diversity filter
    └── Cluster at 70% identity, take top per cluster
```
---
## Diagnostic Commands
### Quick Campaign Assessment
```python
import pandas as pd
df = pd.read_csv('designs.csv')
# Pass rates at each stage
print(f"Total designs: {len(df)}")
print(f"pLDDT > 0.85: {(df['pLDDT'] > 0.85).mean():.1%}")
print(f"ipTM > 0.50: {(df['ipTM'] > 0.50).mean():.1%}")
print(f"scRMSD < 2.0: {(df['scRMSD'] < 2.0).mean():.1%}")
print(f"All filters: {((df['pLDDT'] > 0.85) & (df['ipTM'] > 0.5) & (df['scRMSD'] < 2.0)).mean():.1%}")
# Identify top issue
if (df['pLDDT'] > 0.85).mean() < 0.1:
    print("ISSUE: Low pLDDT - check backbone or sequence quality")
elif (df['ipTM'] > 0.50).mean() < 0.1:
    print("ISSUE: Low ipTM - check hotspots or interface geometry")
elif (df['scRMSD'] < 2.0).mean() < 0.5:
    print("ISSUE: High scRMSD - sequences don't specify backbone")
```
---
Data & Analytics

protein-qc

Quality control metrics and filtering thresholds for protein design. Use this skill when: (1) Evaluating design quality for binding, expression, or structure, (2) Setting filtering thresholds for pLDDT, ipTM, PAE, (3) Checking sequence liabilities (cysteines, deamidation, polybasic clusters), (4) Creating multi-stage filtering pipelines, (5) Computing PyRosetta interface metrics (dG, SC, dSASA), (6) Checking biophysical properties (instability, GRAVY, pI), (7) Ranking designs with composite scoring.

# ToolUniverse CRISPR Screen Analysis
Comprehensive skill for analyzing CRISPR-Cas9 genetic screens to identify essential genes, synthetic lethal interactions, and therapeutic targets through robust statistical analysis and pathway enrichment.
## Overview
CRISPR screens enable genome-wide functional genomics by systematically perturbing genes and measuring fitness effects. This skill provides an 8-phase workflow for:
- Processing sgRNA count matrices
- Quality control and normalization
- Gene-level essentiality scoring (MAGeCK-like and BAGEL-like approaches)
- Synthetic lethality detection
- Pathway enrichment analysis
- Drug target prioritization with DepMap integration
- Integration with expression and mutation data
## Core Workflow
### Phase 1: Data Import & sgRNA Count Processing
**Load sgRNA Count Matrix**
```python
import pandas as pd
import numpy as np
def load_sgrna_counts(counts_file):
    """
    Load sgRNA count matrix from MAGeCK format or generic TSV.
    Expected format:
    sgRNA | Gene | Sample1 | Sample2 | Sample3 | ...
    sgRNA_1 | BRCA1 | 1500 | 1200 | 1100 | ...
    sgRNA_2 | BRCA1 | 1800 | 1500 | 1400 | ...
    """
    counts = pd.read_csv(counts_file, sep='\t')
    # Validate required columns
    required_cols = ['sgRNA', 'Gene']
    if not all(col in counts.columns for col in required_cols):
        raise ValueError(f"Missing required columns: {required_cols}")
    # Extract sample columns
    sample_cols = [col for col in counts.columns if col not in ['sgRNA', 'Gene']]
    # Create count matrix
    count_matrix = counts[sample_cols].copy()
    count_matrix.index = counts['sgRNA']
    # Gene mapping
    sgrna_to_gene = dict(zip(counts['sgRNA'], counts['Gene']))
    metadata = {
        'n_sgrnas': len(counts),
        'n_genes': counts['Gene'].nunique(),
        'n_samples': len(sample_cols),
        'sample_names': sample_cols,
        'sgrna_to_gene': sgrna_to_gene
    }
    return count_matrix, metadata
# Load counts
counts, meta = load_sgrna_counts("sgrna_counts.txt")
print(f"Loaded {meta['n_sgrnas']} sgRNAs targeting {meta['n_genes']} genes across {meta['n_samples']} samples")
```
**Create Experimental Design Table**
```python
def create_design_matrix(sample_names, conditions, timepoints=None):
    """
    Create experimental design linking samples to conditions.
    Example:
    Sample | Condition | Timepoint | Replicate
    T0_rep1 | baseline | 0 | 1
    T14_rep1 | treatment | 14 | 1
    """
    design = pd.DataFrame({
        'Sample': sample_names,
        'Condition': conditions
    })
    if timepoints is not None:
        design['Timepoint'] = timepoints
    # Auto-detect replicates
    design['Replicate'] = design.groupby('Condition').cumcount() + 1
    return design
# Example usage
sample_names = ['T0_rep1', 'T0_rep2', 'T14_rep1', 'T14_rep2', 'T14_rep3']
conditions = ['baseline', 'baseline', 'treatment', 'treatment', 'treatment']
design = create_design_matrix(sample_names, conditions)
```
### Phase 2: Quality Control & Filtering
**Assess sgRNA Distribution**
```python
def qc_sgrna_distribution(count_matrix, min_reads=30, min_samples=2):
    """
    Quality control for sgRNA distribution.
    - Remove sgRNAs with low read counts
    - Check for outlier samples
    - Assess library representation
    """
    results = {}
    # 1. Library size per sample
    library_sizes = count_matrix.sum(axis=0)
    results['library_sizes'] = library_sizes
    results['median_library_size'] = library_sizes.median()
    # 2. Zero-count sgRNAs
    zero_counts = (count_matrix == 0).sum(axis=1)
    results['zero_counts'] = zero_counts
    results['sgrnas_with_zeros'] = (zero_counts > 0).sum()
    # 3. Low-count sgRNAs (< min_reads in > min_samples)
    low_count_mask = (count_matrix < min_reads).sum(axis=1) > (len(count_matrix.columns) - min_samples)
    results['low_count_sgrnas'] = low_count_mask.sum()
    # 4. Gini coefficient (library skewness)
    def gini_coefficient(counts):
        sorted_counts = np.sort(counts)
        n = len(counts)
        cumsum = np.cumsum(sorted_counts)
        return (2 * np.sum((np.arange(1, n+1)) * sorted_counts)) / (n * cumsum[-1]) - (n + 1) / n
    results['gini_per_sample'] = {col: gini_coefficient(count_matrix[col].values)
                                   for col in count_matrix.columns}
    # 5. Recommend filtering
    results['filter_recommendation'] = {
        'min_reads': min_reads,
        'min_samples_above_threshold': min_samples,
        'sgrnas_to_remove': low_count_mask.sum()
    }
    return results
# Run QC
qc_results = qc_sgrna_distribution(counts, min_reads=30, min_samples=2)
print(f"Library sizes: {qc_results['library_sizes']}")
print(f"Low-count sgRNAs to remove: {qc_results['filter_recommendation']['sgrnas_to_remove']}")
```
**Filter Low-Count sgRNAs**
```python
def filter_low_count_sgrnas(count_matrix, sgrna_to_gene, min_reads=30, min_samples=2):
    """
    Remove sgRNAs with insufficient read counts.
    """
    # Keep sgRNAs with >= min_reads in >= min_samples
    keep_mask = (count_matrix >= min_reads).sum(axis=1) >= min_samples
    filtered_counts = count_matrix[keep_mask].copy()
    filtered_mapping = {k: v for k, v in sgrna_to_gene.items() if k in filtered_counts.index}
    print(f"Filtered: {(~keep_mask).sum()} sgRNAs removed, {keep_mask.sum()} retained")
    return filtered_counts, filtered_mapping
# Apply filtering
filtered_counts, filtered_mapping = filter_low_count_sgrnas(counts, meta['sgrna_to_gene'])
```
### Phase 3: Normalization
**Library Size Normalization**
```python
def normalize_counts(count_matrix, method='median'):
    """
    Normalize sgRNA counts to account for library size differences.
    Methods:
    - 'median': Median ratio normalization (like DESeq2)
    - 'total': Total count normalization (CPM-like)
    """
    if method == 'median':
        # Calculate geometric mean for each sgRNA across samples
        pseudo_ref = np.exp(np.log(count_matrix + 1).mean(axis=1)) - 1
        # Calculate size factors for each sample
        size_factors = {}
        for col in count_matrix.columns:
            ratios = count_matrix[col] / pseudo_ref
            ratios = ratios[ratios > 0]  # Remove zeros
            size_factors[col] = ratios.median()
        # Normalize
        normalized = count_matrix.div(pd.Series(size_factors), axis=1)
    elif method == 'total':
        # CPM-like normalization
        size_factors = count_matrix.sum(axis=0) / 1e6
        normalized = count_matrix.div(size_factors, axis=1)
    else:
        raise ValueError(f"Unknown normalization method: {method}")
    return normalized, size_factors
# Normalize
norm_counts, size_factors = normalize_counts(filtered_counts, method='median')
```
**Log-Fold Change Calculation**
```python
def calculate_lfc(norm_counts, design, control_condition='baseline', treatment_condition='treatment'):
    """
    Calculate log2 fold changes between treatment and control.
    """
    # Get sample names for each condition
    control_samples = design[design['Condition'] == control_condition]['Sample'].tolist()
    treatment_samples = design[design['Condition'] == treatment_condition]['Sample'].tolist()
    # Calculate mean counts
    control_mean = norm_counts[control_samples].mean(axis=1)
    treatment_mean = norm_counts[treatment_samples].mean(axis=1)
    # Log2 fold change (add pseudocount to avoid log(0))
    lfc = np.log2((treatment_mean + 1) / (control_mean + 1))
    return lfc, control_mean, treatment_mean
# Calculate LFC
lfc, control_mean, treatment_mean = calculate_lfc(norm_counts, design)
```
### Phase 4: Gene-Level Scoring (MAGeCK-like)
**Aggregate sgRNA Scores to Gene Level**
```python
def mageck_gene_scoring(lfc, sgrna_to_gene, method='rra'):
    """
    Gene-level essentiality scoring using MAGeCK-like approach.
    Methods:
    - 'rra': Robust Rank Aggregation (identify genes with consistently low-ranking sgRNAs)
    - 'mean': Simple mean LFC across sgRNAs
    """
    # Create gene-level aggregation
    gene_lfc = {}
    for sgrna, gene in sgrna_to_gene.items():
        if sgrna in lfc.index:
            if gene not in gene_lfc:
                gene_lfc[gene] = []
            gene_lfc[gene].append(lfc[sgrna])
    if method == 'rra':
        # Simplified RRA: rank sgRNAs, calculate p-value for each gene
        # based on whether its sgRNAs are enriched at the top (negative selection)
        # or bottom (positive selection)
        # Rank all sgRNAs by LFC
        ranked_sgrnas = lfc.sort_values()
        ranks = {sgrna: rank for rank, sgrna in enumerate(ranked_sgrnas.index, 1)}
        gene_scores = {}
        for gene, sgrna_list in gene_lfc.items():
            # Get ranks for this gene's sgRNAs
            gene_ranks = [ranks[sgrna] for sgrna in sgrna_list if sgrna in ranks]
            if len(gene_ranks) > 0:
                # Use mean rank as score (lower = more essential)
                gene_scores[gene] = {
                    'score': np.mean(gene_ranks),
                    'n_sgrnas': len(gene_ranks),
                    'mean_lfc': np.mean([lfc[sg] for sg in sgrna_list if sg in lfc.index])
                }
        # Convert to DataFrame
        gene_df = pd.DataFrame(gene_scores).T
        gene_df['rank'] = gene_df['score'].rank()
    elif method == 'mean':
        # Simple mean LFC
        gene_df = pd.DataFrame({
            gene: {
                'mean_lfc': np.mean(sgrna_lfcs),
                'n_sgrnas': len(sgrna_lfcs),
                'score': np.mean(sgrna_lfcs)
            }
            for gene, sgrna_lfcs in gene_lfc.items()
        }).T
    # Sort by essentiality (negative LFC = essential)
    gene_df = gene_df.sort_values('mean_lfc')
    return gene_df
# Gene-level scoring
gene_scores = mageck_gene_scoring(lfc, filtered_mapping, method='rra')
print(f"Top 10 essential genes:\n{gene_scores.head(10)[['mean_lfc', 'n_sgrnas']]}")
```
**Bayes Factor Scoring (BAGEL-like)**
```python
def bagel_bayes_factor(lfc, sgrna_to_gene, essential_genes=None, nonessential_genes=None):
    """
    BAGEL-like Bayes Factor calculation for gene essentiality.
    Uses reference sets of known essential and non-essential genes to
    calculate likelihood ratios.
    """
    # Default reference gene sets (core essential genes)
    if essential_genes is None:
        essential_genes = ['RPL5', 'RPS6', 'POLR2A', 'PSMC2', 'PSMD14']  # Example
    if nonessential_genes is None:
        nonessential_genes = ['AAVS1', 'ROSA26', 'HPRT1']  # Example
    # Get LFC distributions for reference sets
    essential_lfc = [lfc[sg] for sg, g in sgrna_to_gene.items()
                     if g in essential_genes and sg in lfc.index]
    nonessential_lfc = [lfc[sg] for sg, g in sgrna_to_gene.items()
                        if g in nonessential_genes and sg in lfc.index]
    if len(essential_lfc) < 3 or len(nonessential_lfc) < 3:
        print("Warning: Insufficient reference genes for BAGEL scoring")
        return None
    # Estimate distributions (simplified)
    essential_mean, essential_std = np.mean(essential_lfc), np.std(essential_lfc)
    nonessential_mean, nonessential_std = np.mean(nonessential_lfc), np.std(nonessential_lfc)
    # Calculate Bayes Factor for each gene
    gene_bf = {}
    gene_lfc_map = {}
    for sgrna, gene in sgrna_to_gene.items():
        if sgrna in lfc.index:
            if gene not in gene_lfc_map:
                gene_lfc_map[gene] = []
            gene_lfc_map[gene].append(lfc[sgrna])
    for gene, sgrna_lfcs in gene_lfc_map.items():
        mean_lfc = np.mean(sgrna_lfcs)
        # Likelihood under essential distribution
        from scipy.stats import norm
        l_essential = norm.pdf(mean_lfc, essential_mean, essential_std)
        # Likelihood under non-essential distribution
        l_nonessential = norm.pdf(mean_lfc, nonessential_mean, nonessential_std)
        # Bayes Factor (avoid division by zero)
        bf = l_essential / (l_nonessential + 1e-10)
        gene_bf[gene] = {
            'bayes_factor': bf,
            'mean_lfc': mean_lfc,
            'n_sgrnas': len(sgrna_lfcs)
        }
    # Convert to DataFrame and sort
    bf_df = pd.DataFrame(gene_bf).T
    bf_df = bf_df.sort_values('bayes_factor', ascending=False)
    return bf_df
# BAGEL scoring
bf_scores = bagel_bayes_factor(lfc, filtered_mapping)
if bf_scores is not None:
    print(f"Top 10 by Bayes Factor:\n{bf_scores.head(10)}")
```
### Phase 5: Synthetic Lethality Detection
**Identify Context-Specific Essential Genes**
```python
def detect_synthetic_lethality(gene_scores_wildtype, gene_scores_mutant,
                                lfc_threshold=-1.0, rank_diff_threshold=100):
    """
    Identify genes that are selectively essential in mutant context
    (synthetic lethal interactions).
    Compare essentiality scores between wildtype and mutant cell lines.
    """
    # Merge scores
    comparison = pd.merge(
        gene_scores_wildtype[['mean_lfc', 'rank']],
        gene_scores_mutant[['mean_lfc', 'rank']],
        left_index=True,
        right_index=True,
        suffixes=('_wt', '_mut')
    )
    # Calculate differential essentiality
    comparison['delta_lfc'] = comparison['mean_lfc_mut'] - comparison['mean_lfc_wt']
    comparison['delta_rank'] = comparison['rank_wt'] - comparison['rank_mut']
    # Identify synthetic lethal candidates
    # (more essential in mutant, not essential in wildtype)
    sl_candidates = comparison[
        (comparison['mean_lfc_mut'] < lfc_threshold) &  # Essential in mutant
        (comparison['mean_lfc_wt'] > -0.5) &  # Not essential in wildtype
        (comparison['delta_rank'] > rank_diff_threshold)  # Large rank change
    ].copy()
    sl_candidates = sl_candidates.sort_values('delta_lfc')
    return sl_candidates
# Example: Detect genes synthetic lethal with KRAS mutation
# (Requires running screens in both KRAS-mutant and wildtype cells)
# sl_hits = detect_synthetic_lethality(gene_scores_wt, gene_scores_kras_mut)
```
**Query DepMap for Known Dependencies**
```python
def query_depmap_dependencies(gene_symbol):
    """
    Query DepMap database for known gene dependencies.
    ToolUniverse doesn't have direct DepMap tools, but we can use
    STRING or literature tools to find dependency information.
    """
    from tooluniverse import ToolUniverse
    tu = ToolUniverse()
    # Search literature for essentiality/dependency information
    result = tu.run_one_function({
        "name": "PubMed_search",
        "arguments": {
            "query": f'("{gene_symbol}"[Gene]) AND ("CRISPR screen" OR "gene essentiality" OR "DepMap")',
            "max_results": 20
        }
    })
    if 'data' in result and 'papers' in result['data']:
        papers = result['data']['papers']
        print(f"Found {len(papers)} papers on {gene_symbol} essentiality")
        return papers
    return []
# Example usage
# depmap_papers = query_depmap_dependencies("PRMT5")
```
### Phase 6: Pathway Enrichment Analysis
**Enrichment of Essential Genes**
```python
def enrich_essential_genes(gene_scores, top_n=100, databases=['KEGG_2021_Human', 'GO_Biological_Process_2021']):
    """
    Perform pathway enrichment on top essential genes.
    """
    from tooluniverse import ToolUniverse
    tu = ToolUniverse()
    # Get top essential genes (most negative LFC)
    top_genes = gene_scores.head(top_n).index.tolist()
    print(f"Enriching {len(top_genes)} top essential genes...")
    # Run Enrichr
    result = tu.run_one_function({
        "name": "Enrichr_submit_genelist",
        "arguments": {
            "gene_list": top_genes,
            "description": "CRISPR_screen_essential_genes"
        }
    })
    if 'data' not in result or 'userListId' not in result['data']:
        print("Failed to submit gene list to Enrichr")
        return None
    user_list_id = result['data']['userListId']
    # Get enrichment results for each database
    all_results = {}
    for db in databases:
        enrich_result = tu.run_one_function({
            "name": "Enrichr_get_results",
            "arguments": {
                "userListId": user_list_id,
                "backgroundType": db
            }
        })
        if 'data' in enrich_result and db in enrich_result['data']:
            all_results[db] = pd.DataFrame(enrich_result['data'][db])
            print(f"{db}: {len(all_results[db])} enriched terms")
    return all_results
# Run enrichment
# enrichment_results = enrich_essential_genes(gene_scores, top_n=100)
```
### Phase 7: Drug Target Prioritization
**Integrate with Expression & Mutation Data**
```python
def prioritize_drug_targets(gene_scores, expression_data=None, mutation_data=None):
    """
    Prioritize CRISPR hits as drug targets based on:
    1. Essentiality score (from CRISPR screen)
    2. Expression level in disease vs normal (if provided)
    3. Mutation frequency in tumors (if provided)
    4. Druggability (query DGIdb)
    """
    from tooluniverse import ToolUniverse
    tu = ToolUniverse()
    # Start with top essential genes
    candidates = gene_scores.head(50).copy()
    # Add expression data if provided
    if expression_data is not None:
        candidates = candidates.merge(expression_data, left_index=True, right_index=True, how='left')
    # Add mutation data if provided
    if mutation_data is not None:
        candidates = candidates.merge(mutation_data, left_index=True, right_index=True, how='left')
    # Query druggability for each gene
    druggability_scores = {}
    for gene in candidates.index[:20]:  # Limit to top 20 to avoid rate limits
        result = tu.run_one_function({
            "name": "DGIdb_query_gene",
            "arguments": {"gene_symbol": gene}
        })
        if 'data' in result and 'matchedTerms' in result['data']:
            matches = result['data']['matchedTerms']
            if len(matches) > 0:
                # Count number of drug interactions
                n_drugs = len(matches[0].get('interactions', []))
                druggability_scores[gene] = n_drugs
            else:
                druggability_scores[gene] = 0
        else:
            druggability_scores[gene] = 0
    candidates['n_drugs'] = pd.Series(druggability_scores)
    # Calculate composite priority score
    # (Normalize each component to 0-1 scale)
    # Most negative LFC (most essential) maps to 1, least essential to 0
    candidates['essentiality_norm'] = (candidates['mean_lfc'].max() - candidates['mean_lfc']) / \
                                       (candidates['mean_lfc'].max() - candidates['mean_lfc'].min())
    if 'log2fc' in candidates.columns:
        candidates['expression_norm'] = (candidates['log2fc'] - candidates['log2fc'].min()) / \
                                        (candidates['log2fc'].max() - candidates['log2fc'].min())
    else:
        candidates['expression_norm'] = 0
    candidates['druggability_norm'] = candidates['n_drugs'] / (candidates['n_drugs'].max() + 1)
    # Weighted composite score
    candidates['priority_score'] = (
        0.5 * candidates['essentiality_norm'] +
        0.3 * candidates['expression_norm'] +
        0.2 * candidates['druggability_norm']
    )
    # Sort by priority
    candidates = candidates.sort_values('priority_score', ascending=False)
    return candidates
# Prioritize targets
# drug_targets = prioritize_drug_targets(gene_scores, expression_data=rna_seq_results)
```
**Query Existing Drugs for Top Targets**
```python
def find_drugs_for_targets(target_genes, max_per_gene=5):
    """
    Find existing drugs targeting top candidate genes.
    """
    from tooluniverse import ToolUniverse
    tu = ToolUniverse()
    drug_results = {}
    for gene in target_genes[:10]:  # Top 10 targets
        print(f"Searching drugs for {gene}...")
        # Query DGIdb
        result = tu.run_one_function({
            "name": "DGIdb_query_gene",
            "arguments": {"gene_symbol": gene}
        })
        if 'data' in result and 'matchedTerms' in result['data']:
            matches = result['data']['matchedTerms']
            if len(matches) > 0:
                interactions = matches[0].get('interactions', [])
                drugs = []
                for interaction in interactions[:max_per_gene]:
                    drugs.append({
                        'drug_name': interaction.get('drugName', 'Unknown'),
                        'interaction_type': (interaction.get('interactionTypes') or ['Unknown'])[0],
                        'source': interaction.get('source', 'Unknown')
                    })
                drug_results[gene] = drugs
    return drug_results
# Find drugs
# drug_candidates = find_drugs_for_targets(drug_targets.index.tolist())
```
### Phase 8: Report Generation
**Comprehensive CRISPR Screen Report**
```python
def generate_crispr_report(gene_scores, enrichment_results, drug_targets,
                           output_file="crispr_screen_report.md"):
    """
    Generate comprehensive CRISPR screen analysis report.
    """
    with open(output_file, 'w') as f:
        f.write("# CRISPR Screen Analysis Report\n\n")
        # Summary statistics
        f.write("## Summary\n\n")
        f.write(f"- **Total genes analyzed**: {len(gene_scores)}\n")
        f.write(f"- **Essential genes** (LFC < -1): {(gene_scores['mean_lfc'] < -1).sum()}\n")
        f.write(f"- **Non-essential genes** (LFC > -0.5): {(gene_scores['mean_lfc'] > -0.5).sum()}\n\n")
        # Top 20 essential genes
        f.write("## Top 20 Essential Genes\n\n")
        f.write("| Rank | Gene | Mean LFC | sgRNAs | Score |\n")
        f.write("|------|------|----------|--------|-------|\n")
        for idx, (gene, row) in enumerate(gene_scores.head(20).iterrows(), 1):
            f.write(f"| {idx} | {gene} | {row['mean_lfc']:.3f} | {int(row['n_sgrnas'])} | {row['score']:.2f} |\n")
        f.write("\n")
        # Pathway enrichment
        if enrichment_results:
            f.write("## Pathway Enrichment\n\n")
            for db, results in enrichment_results.items():
                f.write(f"### {db}\n\n")
                f.write("| Term | P-value | Adjusted P-value | Genes |\n")
                f.write("|------|---------|------------------|-------|\n")
                for _, row in results.head(10).iterrows():
                    term = row.get('Term', 'Unknown')
                    pval = row.get('P-value', 1.0)
                    adj_pval = row.get('Adjusted P-value', 1.0)
                    genes = row.get('Genes', '')
                    f.write(f"| {term} | {pval:.2e} | {adj_pval:.2e} | {genes[:50]}... |\n")
                f.write("\n")
        # Drug target prioritization
        if drug_targets is not None:
            f.write("## Top Drug Target Candidates\n\n")
            f.write("| Rank | Gene | Essentiality | Expression FC | Druggable | Priority Score |\n")
            f.write("|------|------|--------------|---------------|-----------|----------------|\n")
            for idx, (gene, row) in enumerate(drug_targets.head(10).iterrows(), 1):
                ess = row['mean_lfc']
                expr = row.get('log2fc', 0)
                drugs = int(row.get('n_drugs', 0))
                priority = row['priority_score']
                f.write(f"| {idx} | {gene} | {ess:.3f} | {expr:.2f} | {drugs} | {priority:.3f} |\n")
            f.write("\n")
        # Methods
        f.write("## Methods\n\n")
        f.write("**sgRNA Processing**: MAGeCK-like robust rank aggregation\n\n")
        f.write("**Normalization**: Median ratio normalization\n\n")
        f.write("**Scoring**: Gene-level LFC aggregation with rank-based scoring\n\n")
        f.write("**Enrichment**: Enrichr (KEGG, GO)\n\n")
        f.write("**Druggability**: DGIdb v4.0\n\n")
    print(f"Report saved to {output_file}")
    return output_file
# Generate report
# report_file = generate_crispr_report(gene_scores, enrichment_results, drug_targets)
```
## Advanced Use Cases
### Use Case 1: Genome-Wide Essentiality Screen
```python
# Load counts and design
counts, meta = load_sgrna_counts("genome_wide_screen.txt")
design = create_design_matrix(
    sample_names=['T0_1', 'T0_2', 'T14_1', 'T14_2', 'T14_3'],
    conditions=['baseline', 'baseline', 'treatment', 'treatment', 'treatment']
)
# QC and filter
qc_results = qc_sgrna_distribution(counts)
filtered_counts, filtered_mapping = filter_low_count_sgrnas(counts, meta['sgrna_to_gene'])
# Normalize
norm_counts, size_factors = normalize_counts(filtered_counts, method='median')
# Calculate LFC
lfc, control_mean, treatment_mean = calculate_lfc(norm_counts, design)
# Gene-level scoring
gene_scores = mageck_gene_scoring(lfc, filtered_mapping, method='rra')
# Enrichment
enrichment = enrich_essential_genes(gene_scores, top_n=100)
# Report
report = generate_crispr_report(gene_scores, enrichment, None)
```
### Use Case 2: Synthetic Lethality Screen (KRAS)
```python
# Run screens in both KRAS-wildtype and KRAS-mutant cells
# Load both datasets
counts_wt, meta_wt = load_sgrna_counts("kras_wildtype_screen.txt")
counts_mut, meta_mut = load_sgrna_counts("kras_mutant_screen.txt")
# Process both (same steps as Use Case 1)
# ... filtering, normalization, LFC calculation ...
gene_scores_wt = mageck_gene_scoring(lfc_wt, filtered_mapping_wt)
gene_scores_mut = mageck_gene_scoring(lfc_mut, filtered_mapping_mut)
# Identify synthetic lethal hits
sl_hits = detect_synthetic_lethality(gene_scores_wt, gene_scores_mut)
print(f"Identified {len(sl_hits)} synthetic lethal candidates with KRAS mutation")
print(sl_hits.head(10))
# Prioritize for drug development
drug_targets = prioritize_drug_targets(sl_hits)
```
### Use Case 3: Drug Target Discovery Pipeline
```python
# Complete pipeline: Screen → Essential genes → Druggability → Drug candidates
# 1. Identify essential genes from screen
gene_scores = mageck_gene_scoring(lfc, filtered_mapping)
# 2. Filter for highly essential (stringent threshold)
highly_essential = gene_scores[gene_scores['mean_lfc'] < -1.5]
# 3. Prioritize with expression data (if available)
drug_targets = prioritize_drug_targets(highly_essential, expression_data=tumor_expression)
# 4. Find existing drugs
drug_candidates = find_drugs_for_targets(drug_targets.index.tolist())
# 5. Generate comprehensive report
report = generate_crispr_report(gene_scores, None, drug_targets)
print(f"Identified {len(drug_candidates)} druggable targets with {sum(len(v) for v in drug_candidates.values())} total drug candidates")
```
### Use Case 4: Integration with Expression Data
```python
# Combine CRISPR essentiality with RNA-seq differential expression
# Load RNA-seq results (from tooluniverse-rnaseq-deseq2 skill)
rna_results = pd.read_csv("deseq2_results.csv", index_col=0)
# Merge with CRISPR scores
integrated = gene_scores.merge(
    rna_results[['log2FoldChange', 'padj']],
    left_index=True,
    right_index=True,
    how='inner'
)
# Identify genes that are:
# 1. Essential in screen (LFC < -1)
# 2. Overexpressed in disease (log2FC > 1, padj < 0.05)
targets = integrated[
    (integrated['mean_lfc'] < -1) &
    (integrated['log2FoldChange'] > 1) &
    (integrated['padj'] < 0.05)
]
print(f"Identified {len(targets)} genes essential and overexpressed in disease")
```
## ToolUniverse Tool Integration
**Key Tools Used**:
- `PubMed_search` - Literature search for gene essentiality
- `Enrichr_submit_genelist` - Pathway enrichment submission
- `Enrichr_get_results` - Retrieve enrichment results
- `DGIdb_query_gene` - Drug-gene interactions and druggability
- `STRING_get_network` - Protein interaction networks (see the invocation sketch after these lists)
- `KEGG_get_pathway` - Pathway visualization
**Expression Integration**:
- `GEO_get_dataset` - Download expression data
- `ArrayExpress_get_experiment` - Alternative expression source
**Variant Integration**:
- `ClinVar_query_gene` - Known pathogenic variants
- `gnomAD_get_gene` - Population allele frequencies
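All of these tools follow the same `run_one_function` calling pattern used throughout this skill. A minimal sketch for pulling a STRING interaction network around the top hits — the argument names (`gene_symbols`, `species`) are assumptions about the tool schema, so verify them against the actual ToolUniverse tool definition before relying on them:
```python
from tooluniverse import ToolUniverse

def get_string_network_for_hits(gene_scores, top_n=25):
    """Sketch: fetch a STRING interaction network for the top essential genes.

    The 'STRING_get_network' argument names below are assumptions; check the
    tool's registered schema in ToolUniverse before use.
    """
    tu = ToolUniverse()
    top_genes = gene_scores.head(top_n).index.tolist()
    result = tu.run_one_function({
        "name": "STRING_get_network",
        "arguments": {
            "gene_symbols": top_genes,   # assumed parameter name
            "species": 9606              # assumed parameter name (human taxon ID)
        }
    })
    return result.get('data') if isinstance(result, dict) else None

# Example usage
# network = get_string_network_for_hits(gene_scores)
```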
## Best Practices
1. **sgRNA Design Quality**: Ensure library uses validated sgRNA designs (e.g., Brunello, Avana libraries)
2. **Replicates**: Minimum 2 biological replicates per condition; 3+ preferred
3. **Sequencing Depth**: Aim for 500-1000 reads per sgRNA at T0; 200+ at final timepoint
4. **Reference Genes**: Include positive (essential) and negative (non-essential) control genes
5. **Timepoint Selection**: Balance cell doublings (14-21 days) vs. sgRNA dropout
6. **Normalization**: Use median ratio normalization for count data (more robust than CPM)
7. **Multiple Testing**: Apply FDR correction when calling essential genes (padj < 0.05); see the sketch after this list
8. **Validation**: Validate top hits with orthogonal methods (siRNA, small molecule inhibitors)
9. **Context Matters**: Gene essentiality is context-dependent (cell line, tissue, genetic background)
10. **Druggability**: Essential genes are not always druggable; check DGIdb early in prioritization
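A minimal sketch of the FDR step from point 7, using Benjamini–Hochberg adjustment. It assumes `gene_scores` also carries a per-gene `pvalue` column — the scoring sketches above only report `mean_lfc` and `score`, so that column name is an assumption; adapt it to whatever your scoring step actually produces:
```python
import pandas as pd
from statsmodels.stats.multitest import multipletests

def call_essential_genes(gene_scores, alpha=0.05, lfc_cutoff=-1.0):
    """Apply Benjamini-Hochberg FDR correction and call essential genes.

    Assumes a 'pvalue' column exists alongside 'mean_lfc' (an assumption;
    rename to match your scoring output).
    """
    scores = gene_scores.copy()
    # BH adjustment across all tested genes
    reject, padj, _, _ = multipletests(scores['pvalue'], alpha=alpha, method='fdr_bh')
    scores['padj'] = padj
    # Essential = significant after correction AND strongly depleted
    scores['essential'] = reject & (scores['mean_lfc'] < lfc_cutoff)
    return scores

# Example usage
# essential_calls = call_essential_genes(gene_scores)
# print(essential_calls['essential'].sum(), "genes called essential at FDR < 0.05")
```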
## Troubleshooting
**Problem**: Low library representation (many zero-count sgRNAs)
- **Solution**: Increase sequencing depth; check for PCR biases in library prep
**Problem**: High Gini coefficient (skewed distribution)
- **Solution**: Optimize PCR cycles; consider using unique molecular identifiers (UMIs); a minimal Gini computation is sketched below
**Problem**: No strong essential genes detected
- **Solution**: Check timepoint (may be too early); verify cell viability; confirm sgRNA cutting efficiency
**Problem**: Too many essential genes (>500)
- **Solution**: Timepoint may be too late; adjust LFC threshold; check for batch effects
**Problem**: Discordant sgRNAs for same gene
- **Solution**: Check for off-target effects; verify sgRNA sequences; consider removing outlier sgRNAs
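To make the Gini check above concrete, here is a minimal, self-contained sketch of computing the Gini coefficient of sgRNA read counts per sample (it does not depend on the helpers defined earlier):
```python
import numpy as np
import pandas as pd

def gini_coefficient(counts):
    """Gini coefficient of a vector of sgRNA read counts (0 = perfectly even, 1 = maximally skewed)."""
    x = np.sort(np.asarray(counts, dtype=float))
    n = len(x)
    if n == 0 or x.sum() == 0:
        return np.nan
    # Standard formula based on cumulative ordered values
    index = np.arange(1, n + 1)
    return (2 * np.sum(index * x) / (n * x.sum())) - (n + 1) / n

def gini_per_sample(counts_df):
    """Gini coefficient for every sample (column) in an sgRNA count matrix."""
    return pd.Series({sample: gini_coefficient(counts_df[sample]) for sample in counts_df.columns})

# Example usage
# gini = gini_per_sample(counts)
# print(gini)  # higher values indicate a more skewed sgRNA distribution
```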
## References
- Li W, et al. (2014) MAGeCK enables robust identification of essential genes from genome-scale CRISPR/Cas9 knockout screens. Genome Biology
- Hart T, et al. (2015) High-Resolution CRISPR Screens Reveal Fitness Genes and Genotype-Specific Cancer Liabilities. Cell
- Meyers RM, et al. (2017) Computational correction of copy number effect improves specificity of CRISPR-Cas9 essentiality screens. Nature Genetics
- Tsherniak A, et al. (2017) Defining a Cancer Dependency Map. Cell (DepMap)
## Quick Start
```python
# Complete minimal workflow
import pandas as pd
from tooluniverse import ToolUniverse
# 1. Load data
counts, meta = load_sgrna_counts("sgrna_counts.txt")
design = create_design_matrix(['T0_1', 'T0_2', 'T14_1', 'T14_2'],
                               ['baseline', 'baseline', 'treatment', 'treatment'])
# 2. Process
filtered_counts, filtered_mapping = filter_low_count_sgrnas(counts, meta['sgrna_to_gene'])
norm_counts, _ = normalize_counts(filtered_counts)
lfc, _, _ = calculate_lfc(norm_counts, design)
# 3. Score genes
gene_scores = mageck_gene_scoring(lfc, filtered_mapping)
# 4. Enrich pathways
enrichment = enrich_essential_genes(gene_scores, top_n=100)
# 5. Find drug targets
drug_targets = prioritize_drug_targets(gene_scores)
# 6. Generate report
report = generate_crispr_report(gene_scores, enrichment, drug_targets)
```
Productivity

tooluniverse-crispr-screen-analysis

Comprehensive CRISPR screen analysis for functional genomics. Analyze pooled or arrayed CRISPR screens (knockout, activation, interference) to identify essential genes, synthetic lethal interactions, and drug targets. Perform sgRNA count processing, gene-level scoring (MAGeCK, BAGEL), quality control, pathway enrichment, and drug target prioritization. Use for CRISPR screen analysis, gene essentiality studies, synthetic lethality detection, functional genomics, drug target validation, or identif

# Azure Cost Management Skill
This skill provides expert guidance for Azure Cost Management. Covers troubleshooting, best practices, decision making, limits & quotas, security, configuration, integrations & coding patterns, and deployment. It combines local quick-reference content with remote documentation fetching capabilities.
## How to Use This Skill
> **IMPORTANT for Agent**: Use the **Category Index** below to locate relevant sections. For categories with line ranges (e.g., `L35-L120`), use `read_file` with the specified lines. For categories with file links (e.g., `[security.md](security.md)`), use `read_file` on the linked reference file.
> **IMPORTANT for Agent**: If `metadata.generated_at` is more than 3 months old, suggest the user pull the latest version from the repository. If `mcp_microsoftdocs` tools are not available, suggest the user install the MCP server: [Installation Guide](https://github.com/MicrosoftDocs/mcp/blob/main/README.md)
This skill requires **network access** to fetch documentation content:
- **Preferred**: Use `mcp_microsoftdocs:microsoft_docs_fetch` with query string `from=learn-agent-skill`. Returns Markdown.
- **Fallback**: Use `fetch_webpage` with query string `from=learn-agent-skill&accept=text/markdown`. Returns Markdown.
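As a rough illustration of the fallback path outside the agent tooling, the same request can be approximated with a plain HTTP GET that appends the query string described above. This is a sketch only; it does not reproduce the exact behavior of the `fetch_webpage` tool:
```python
import requests

def fetch_learn_doc_markdown(url: str) -> str:
    """Sketch: fetch a Microsoft Learn page with the skill's query string appended,
    requesting the Markdown representation described above."""
    separator = '&' if '?' in url else '?'
    full_url = f"{url}{separator}from=learn-agent-skill&accept=text/markdown"
    response = requests.get(full_url, timeout=30)
    response.raise_for_status()
    return response.text

# Example usage (any URL from the tables below)
# doc = fetch_learn_doc_markdown(
#     "https://learn.microsoft.com/en-us/azure/cost-management-billing/costs/cost-mgt-best-practices"
# )
```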
## Category Index
| Category | Lines | Description |
|----------|-------|-------------|
| Troubleshooting | L36-L64 | Diagnosing and fixing Azure billing, subscription, and reservation issues (sign-up, disabled subs, payments, invoices, reservations, savings plans) and using logs/pivot tables to investigate anomalies. |
| Best Practices | L65-L74 | Best practices for analyzing Azure costs, optimizing and reducing spend (including Advisor and Hybrid Benefit), and planning/implementing organization-wide cost management processes. |
| Decision Making | L75-L127 | Deciding how to allocate, reserve, and prepay Azure costs (reservations, savings plans, Hybrid Benefit), choosing billing APIs/offers, and planning migrations or discounts to optimize spend. |
| Limits & Quotas | L128-L143 | Limits, quotas, and timing rules for Azure costs: free tier limits, spending caps, data transfer fees, subscription limits, savings plans, SQL licensing, and billing/dormancy behavior. |
| Security | L144-L164 | Securing Azure billing and cost data: RBAC and billing roles, admin elevation, EA/MCA/CSP access, fraud prevention, and permissions for subscriptions, reservations, and savings plans. |
| Configuration | L165-L235 | Configuring Azure billing, credits, reservations, savings plans, budgets, tags, alerts, and subscription/payment relationships to control, allocate, and optimize cloud costs. |
| Integrations & Coding Patterns | L236-L252 | APIs, scripts, and Power BI patterns to automate cost analysis, billing data retrieval, subscription creation (EA/MCA/MPA), cross-tenant scenarios, and reservation management. |
| Deployment | L253-L256 | Configuring automated, large-scale exports of Azure cost and usage data to storage (like Azure Storage), including setup, scheduling, and management for ongoing cost analysis. |
### Troubleshooting
| Topic | URL |
|-------|-----|
| Troubleshoot common Azure Cost Management error codes | https://learn.microsoft.com/en-us/azure/cost-management-billing/costs/cost-management-error-codes |
| Fix and reactivate disabled Azure for Students subscriptions | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/azurestudents-subscription-disabled |
| Troubleshoot and reactivate a disabled Azure subscription | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/subscription-disabled |
| Troubleshoot subscription access issues after MCA signup | https://learn.microsoft.com/en-us/azure/cost-management-billing/microsoft-customer-agreement/troubleshoot-subscription-access |
| Find who purchased an Azure reservation using logs | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/find-reservation-purchaser-from-logs |
| Troubleshoot Azure reservation usage details download issues | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/troubleshoot-download-usage |
| Fix 'No eligible subscriptions' when buying reservations | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/troubleshoot-no-eligible-subscriptions |
| Troubleshoot unavailable Azure reservation types in portal | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/troubleshoot-product-not-available |
| Troubleshoot Azure reservation recommendation issues | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/troubleshoot-reservation-recommendation |
| Troubleshoot Azure reservations with low or zero utilization | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/troubleshoot-reservation-utilization |
| Troubleshoot unexpected Azure savings plan utilization spikes | https://learn.microsoft.com/en-us/azure/cost-management-billing/savings-plan/troubleshoot-savings-plan-utilization |
| Troubleshoot Azure billing payment update errors | https://learn.microsoft.com/en-us/azure/cost-management-billing/troubleshoot-billing/billing-troubleshoot-azure-payment-issues |
| Fix VM creation errors for Azure EA users | https://learn.microsoft.com/en-us/azure/cost-management-billing/troubleshoot-billing/cannot-create-vm |
| Fix issues viewing Azure billing accounts | https://learn.microsoft.com/en-us/azure/cost-management-billing/troubleshoot-billing/troubleshoot-account-not-found |
| Troubleshoot missing Azure invoices in the portal | https://learn.microsoft.com/en-us/azure/cost-management-billing/troubleshoot-billing/troubleshoot-cant-find-invoice |
| Use pivot tables to troubleshoot CSP billing issues | https://learn.microsoft.com/en-us/azure/cost-management-billing/troubleshoot-billing/troubleshoot-csp-billing-issues-usage-file-pivot-tables |
| Use pivot tables to troubleshoot MCA billing issues | https://learn.microsoft.com/en-us/azure/cost-management-billing/troubleshoot-billing/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables |
| Resolve declined credit cards for Azure billing | https://learn.microsoft.com/en-us/azure/cost-management-billing/troubleshoot-billing/troubleshoot-declined-card |
| Use pivot tables to troubleshoot EA billing issues | https://learn.microsoft.com/en-us/azure/cost-management-billing/troubleshoot-billing/troubleshoot-ea-billing-issues-usage-file-pivot-tables |
| Troubleshoot Azure threshold billing authorization issues | https://learn.microsoft.com/en-us/azure/cost-management-billing/troubleshoot-billing/troubleshoot-threshold-billing |
| Resolve 'account belongs to a directory' subscription sign-up error | https://learn.microsoft.com/en-us/azure/cost-management-billing/troubleshoot-subscription/cannot-sign-up-subscription |
| Resolve 'No subscriptions found' Azure portal error | https://learn.microsoft.com/en-us/azure/cost-management-billing/troubleshoot-subscription/no-subscriptions-found |
| Troubleshoot new Azure account sign-up issues | https://learn.microsoft.com/en-us/azure/cost-management-billing/troubleshoot-subscription/troubleshoot-azure-sign-up |
| Fix 'Not available due to conflict' for reservations | https://learn.microsoft.com/en-us/azure/cost-management-billing/troubleshoot-subscription/troubleshoot-not-available-conflict |
| Troubleshoot Azure subscription sign-in problems | https://learn.microsoft.com/en-us/azure/cost-management-billing/troubleshoot-subscription/troubleshoot-sign-in-issue |
### Best Practices
| Topic | URL |
|-------|-----|
| Use Cost Analysis for common Azure cost questions | https://learn.microsoft.com/en-us/azure/cost-management-billing/costs/cost-analysis-common-uses |
| Apply Azure Cost Management optimization best practices | https://learn.microsoft.com/en-us/azure/cost-management-billing/costs/cost-mgt-best-practices |
| Reduce Azure costs using Advisor recommendations | https://learn.microsoft.com/en-us/azure/cost-management-billing/costs/tutorial-acm-opt-recommendations |
| Follow best practices to onboard to Microsoft Customer Agreement | https://learn.microsoft.com/en-us/azure/cost-management-billing/microsoft-customer-agreement/onboard-microsoft-customer-agreement |
| Use SQL Server HADR with centrally managed Hybrid Benefit | https://learn.microsoft.com/en-us/azure/cost-management-billing/scope-level/sql-server-hadr-licenses |
| Plan and implement Azure cost management practices | https://learn.microsoft.com/en-us/azure/cost-management-billing/understand/plan-manage-costs |
### Decision Making
| Topic | URL |
|-------|-----|
| Decide and configure Azure cost allocation rules | https://learn.microsoft.com/en-us/azure/cost-management-billing/costs/allocate-costs |
| Plan and implement Azure cost allocation strategies | https://learn.microsoft.com/en-us/azure/cost-management-billing/costs/cost-allocation-introduction |
| Choose and use built-in Cost Analysis views | https://learn.microsoft.com/en-us/azure/cost-management-billing/costs/cost-analysis-built-in-views |
| Plan migration from EA to MCA Cost Management APIs | https://learn.microsoft.com/en-us/azure/cost-management-billing/costs/migrate-cost-management-api |
| Use EA VM reservations to optimize Azure costs | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/ea-portal-vm-reservations |
| Interpret Azure EA pricing and usage calculations | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/ea-pricing-overview |
| Transition from EA to Microsoft Customer Agreement billing | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/mca-setup-account |
| Choose APIs to programmatically create Azure subscriptions | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/programmatically-create-subscription |
| Determine supported Azure subscription and reservation transfer paths | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/subscription-transfer |
| Choose and switch between Azure subscription offers | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/switch-azure-offer |
| Choose and perform Azure account upgrades | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/upgrade-azure-subscription |
| Use Microsoft Agent Prepurchase Plan for AI services | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/agent-pre-purchase |
| Prepay for Azure virtual machine software reservations | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/buy-vm-software-reservation |
| Charge back Azure reservation costs using amortized data | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/charge-back-usage |
| Use Copilot Credit P3 pre-purchase plans | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/copilot-credit-p3 |
| Determine which Azure reservation to purchase | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/determine-reservation-purchase |
| Exchange or refund Azure reservations self-service | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/exchange-and-refund-azure-reservations |
| Reserve Microsoft Fabric capacity to reduce costs | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/fabric-capacity |
| Decide and calculate Azure reservation instance size flexibility | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/instance-size-flexibility |
| Evaluate limited-time VM discounts in Poland Central | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/limited-time-central-poland |
| Evaluate limited-time Linux VM discounts in Sweden Central | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/limited-time-central-sweden |
| Evaluate limited-time Linux VM reservation discounts | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/limited-time-linux |
| Evaluate limited-time VM discounts in US West | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/limited-time-us-west |
| Buy Microsoft Foundry provisioned throughput reservations | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/microsoft-foundry |
| Reserve Nutanix Cloud Clusters on Azure BareMetal | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/nutanix-bare-metal |
| Evaluate limited-time Azure SQL reservations in Poland Central | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/poland-limited-time-sql-services-reservations |
| Prepare to buy Azure reservations effectively | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/prepare-buy-reservation |
| Choose and buy App Service reserved capacity | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/prepay-app-service |
| Prepurchase Azure Databricks commit units | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/prepay-databricks-reserved-capacity |
| Purchase JBoss EAP Integrated Support reservations | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/prepay-jboss-eap-integrated-support-app-service |
| Prepay for Azure Red Hat OpenShift software usage | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/prepay-red-hat-openshift |
| Reserve Azure Synapse Dedicated SQL pool capacity | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/prepay-sql-data-warehouse-charges |
| Prepay for Azure SQL Edge reserved capacity | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/prepay-sql-edge |
| Understand changes to Azure reservation exchange policy | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/reservation-exchange-policy-changes |
| Use Azure reservation purchase recommendations | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations |
| Evaluate Azure Reservations for cost savings | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/save-compute-costs-reservations |
| Use Azure Synapse Pre-Purchase Plan with SCUs | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/synapse-analytics-pre-purchase-plan |
| Understand and view amortized reservation and savings plan costs | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/view-amortized-costs |
| Manually calculate EA savings from Azure savings plans | https://learn.microsoft.com/en-us/azure/cost-management-billing/savings-plan/calculate-ea-savings-plan-savings |
| Select optimal Azure savings plan commitment amount | https://learn.microsoft.com/en-us/azure/cost-management-billing/savings-plan/choose-commitment-amount |
| Decide between Azure savings plans and reservations | https://learn.microsoft.com/en-us/azure/cost-management-billing/savings-plan/decide-between-savings-plan-reservation |
| Understand how Azure savings plan discounts are applied | https://learn.microsoft.com/en-us/azure/cost-management-billing/savings-plan/discount-application |
| Use Azure savings plan purchase recommendations | https://learn.microsoft.com/en-us/azure/cost-management-billing/savings-plan/purchase-recommendations |
| Trade in Azure reservations for savings plans | https://learn.microsoft.com/en-us/azure/cost-management-billing/savings-plan/reservation-trade-in |
| Understand Azure savings plans for compute | https://learn.microsoft.com/en-us/azure/cost-management-billing/savings-plan/savings-plan-compute-overview |
| Plan transition to centrally managed Azure Hybrid Benefit | https://learn.microsoft.com/en-us/azure/cost-management-billing/scope-level/transition-existing |
| Assess impact of Azure billing meter ID updates | https://learn.microsoft.com/en-us/azure/cost-management-billing/understand/billing-meter-id-updates |
| Understand shared Azure billing meter regions | https://learn.microsoft.com/en-us/azure/cost-management-billing/understand/billing-meter-location |
| Plan migration from SAP HANA Large Instances on Azure | https://learn.microsoft.com/en-us/azure/sap/large-instances/decommission-sap-hana |
### Limits & Quotas
| Topic | URL |
|-------|-----|
| Understand Azure Cost Management data timing and granularity | https://learn.microsoft.com/en-us/azure/cost-management-billing/costs/understand-cost-mgt-data |
| Avoid charges by staying within Azure free account limits | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/avoid-charges-free-account |
| Monitor Azure free service usage against quotas | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/check-free-service-usage |
| Understand Azure free account credits and duration limits | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/create-free-services |
| Understand Azure data transfer fee rules in Europe | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/data-transfer-fees |
| Understand timing of direct EA invoice document availability | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/direct-ea-billing-invoice-documents |
| Handle Azure region optimization policy constraints | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/region-optimization |
| Manage Azure spending limit and credit-based quotas | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/spending-limit |
| Check Azure savings plan utilization and data latency | https://learn.microsoft.com/en-us/azure/cost-management-billing/savings-plan/view-utilization |
| Understand hourly application of centrally managed SQL licenses | https://learn.microsoft.com/en-us/azure/cost-management-billing/scope-level/manage-licenses-centrally |
| Understand limits when creating multiple Azure subscriptions | https://learn.microsoft.com/en-us/azure/cost-management-billing/troubleshoot-subscription/create-subscriptions-deploy-resources |
| Manage Azure billing account dormancy and retention | https://learn.microsoft.com/en-us/azure/cost-management-billing/understand/keep-billing-accounts-active |
### Security
| Topic | URL |
|-------|-----|
| Assign RBAC access to Azure Cost Management data | https://learn.microsoft.com/en-us/azure/cost-management-billing/costs/assign-access-acm-data |
| Manage Azure subscription administrators with RBAC roles | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/add-change-subscription-administrator |
| Prevent unused Azure subscriptions from being blocked or deleted | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/avoid-unused-subscriptions |
| Elevate Global Administrator access to Azure billing accounts | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/elevate-access-global-admin |
| Grant RBAC permissions to create Azure EA subscriptions | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/grant-access-to-create-subscription |
| Configure Azure subscription directory transfer policies | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/manage-azure-subscription-policy |
| Assign Azure billing access roles securely | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/manage-billing-access |
| Understand PSD2 SCA requirements for Azure purchases | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/open-banking-strong-customer-authentication |
| Protect Azure tenants and subscriptions from fraud and abuse | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/protect-tenants-subscriptions |
| Understand and assign Azure Enterprise Agreement admin roles | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/understand-ea-roles |
| Use MCA billing roles for access control | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/understand-mca-roles |
| Manage tenants and secure billing access under MCA | https://learn.microsoft.com/en-us/azure/cost-management-billing/microsoft-customer-agreement/manage-tenants |
| View Azure reservations as a Cloud Solution Provider | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/how-to-view-csp-reservations |
| Grant RBAC access to Azure reservations with PowerShell | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/manage-reservations-rbac-powershell |
| Understand permissions to view and manage Azure reservations | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/view-reservations |
| Determine who can purchase Azure savings plans | https://learn.microsoft.com/en-us/azure/cost-management-billing/savings-plan/permission-buy-savings-plan |
| Configure permissions to view and manage savings plans | https://learn.microsoft.com/en-us/azure/cost-management-billing/savings-plan/permission-view-manage |
### Configuration
| Topic | URL |
|-------|-----|
| Manage Conditional Azure Credit Offer resources | https://learn.microsoft.com/en-us/azure/cost-management-billing/benefits/caco/manage-conditional-credit-offer |
| Track Conditional Azure Credit Offer milestones | https://learn.microsoft.com/en-us/azure/cost-management-billing/benefits/caco/track-conditional-credit-offer |
| Manage Azure credit resources across subscriptions | https://learn.microsoft.com/en-us/azure/cost-management-billing/benefits/credits/manage-azure-credits |
| Track Azure credit balance for MCA billing profiles | https://learn.microsoft.com/en-us/azure/cost-management-billing/benefits/credits/mca-check-azure-credits-balance |
| Configure and manage Azure discount resources | https://learn.microsoft.com/en-us/azure/cost-management-billing/benefits/discounts/manage-azure-discount |
| Manage MACC resources across Azure subscriptions | https://learn.microsoft.com/en-us/azure/cost-management-billing/benefits/macc/manage-consumption-commitment |
| Configure and apply billing tags in Cost Management | https://learn.microsoft.com/en-us/azure/cost-management-billing/costs/billing-tags |
| Configure and use Azure Cost Management alerts | https://learn.microsoft.com/en-us/azure/cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending |
| Configure and customize Azure Cost Analysis views | https://learn.microsoft.com/en-us/azure/cost-management-billing/costs/customize-cost-analysis-views |
| Configure tag inheritance for Azure cost allocation | https://learn.microsoft.com/en-us/azure/cost-management-billing/costs/enable-tag-inheritance |
| Configure Cost Management exports using SAS keys | https://learn.microsoft.com/en-us/azure/cost-management-billing/costs/export-cost-data-storage-account-sas-key |
| Configure grouping and filtering in Cost Analysis and Budgets | https://learn.microsoft.com/en-us/azure/cost-management-billing/costs/group-filter |
| Define Cost Management budgets using Bicep | https://learn.microsoft.com/en-us/azure/cost-management-billing/costs/quick-create-budget-bicep |
| Create Cost Management budgets with ARM templates | https://learn.microsoft.com/en-us/azure/cost-management-billing/costs/quick-create-budget-template |
| Set up Azure reservation utilization alerts | https://learn.microsoft.com/en-us/azure/cost-management-billing/costs/reservation-utilization-alerts |
| Create and manage Azure Cost Management budgets | https://learn.microsoft.com/en-us/azure/cost-management-billing/costs/tutorial-acm-create-budgets |
| Transfer Azure plan subscriptions between Microsoft partners | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/azure-plan-subscription-transfer-partners |
| Transfer billing ownership of MOSP Azure subscriptions | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/billing-subscription-transfer |
| Cancel and permanently delete Azure subscriptions safely | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/cancel-azure-subscription |
| Initiate and manage change of channel partner for EA | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/change-of-channel-partner |
| Perform common EA billing administration tasks in Azure portal | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/direct-ea-administration |
| Manage indirect EA billing as a partner in Azure portal | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/ea-billing-administration-partners |
| Transfer Azure Enterprise enrollment accounts and subscriptions | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/ea-transfers |
| Configure Azure Marketplace and private offer purchase policies | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/enable-marketplace-purchases |
| Configure Partner Admin Link (PAL) for Azure customer management | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/link-partner-id |
| Link partner IDs to Power Platform and Dynamics accounts via Azure | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/link-partner-id-power-apps-accounts |
| Configure multitenant Azure billing relationships and subscription moves | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/manage-billing-across-tenants |
| Configure markup rules in Azure 21Vianet Cost Management | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/markup-china |
| Map EA billing tasks to a Microsoft Customer Agreement account | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/mca-enterprise-operations |
| Transfer Azure subscription, reservation, and savings plan billing to MCA | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/mca-request-billing-ownership |
| Organize MCA invoices with billing profiles and sections | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/mca-section-invoice |
| Use Microsoft Entra ID Free for subscription management | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/microsoft-entra-id-free |
| Move MOSP or MCA Azure subscriptions to an Enterprise Agreement | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/mosp-ea-transfer |
| Transfer Azure billing products to a Microsoft Partner Agreement | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/mpa-request-ownership |
| Set up Azure subscription payment by wire transfer | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/pay-by-invoice |
| Transfer Azure subscriptions between customers and CSP partners | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/transfer-subscriptions-subscribers-csp |
| Apply reserved instance discounts to Azure Dedicated Hosts | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/billing-understand-dedicated-hosts-reservation-charges |
| Calculate Enterprise Agreement reservation savings manually | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/calculate-ea-reservations-savings |
| Apply reservation discounts to Azure SQL Edge | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/discount-sql-edge |
| Configure and manage Azure reservations and scopes | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/manage-reserved-vm-instance |
| Apply reservation discounts to Azure App Service | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/reservation-discount-app-service |
| Understand how VM reservation discounts are applied | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/reservation-discount-application |
| Apply reservation discounts to Azure Synapse Analytics DW | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/reservation-discount-azure-sql-dw |
| Use Azure Databricks prepurchased commit units | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/reservation-discount-databricks |
| Configure automatic renewal for Azure reservations | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/reservation-renew |
| View and interpret Azure reservation utilization | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/reservation-utilization |
| Identify software costs not covered by Azure reservations | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/reserved-instance-windows-software-costs |
| Change Azure reservation directory between Entra tenants | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/troubleshoot-reservation-transfers-between-tenants |
| Apply reservation discounts to Azure Cache for Redis | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/understand-azure-cache-for-redis-reservation-charges |
| Apply reservation discounts to Azure Cosmos DB throughput | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/understand-cosmosdb-reservation-charges |
| Apply reservation discounts to Azure disk storage | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/understand-disk-reservations |
| Apply reservation discounts to Azure SQL Database | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/understand-reservation-charges |
| Apply reservation discounts to Azure Database for MySQL | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/understand-reservation-charges-mysql |
| Apply reservation discounts to Azure SQL Managed Instance | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/understand-reservation-charges-sql-managed-instance |
| Interpret reservation usage for individual subscriptions | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/understand-reserved-instance-usage |
| Interpret reservation usage for EA and MCA accounts | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/understand-reserved-instance-usage-ea |
| Use Red Hat reservation plan discounts on Azure VMs | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/understand-rhel-reservation-charges |
| Apply reserved capacity discounts to Azure storage services | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/understand-storage-charges |
| Use SUSE and Red Hat software plan discounts | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/understand-suse-reservation-charges |
| View Azure reservation purchase and refund transactions | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/view-purchase-refunds |
| Use amortized savings plan costs for chargeback | https://learn.microsoft.com/en-us/azure/cost-management-billing/savings-plan/charge-back-costs |
| Configure scopes for Azure savings plans | https://learn.microsoft.com/en-us/azure/cost-management-billing/savings-plan/scope-savings-plan |
| Create and scope SQL Server license assignments in Azure | https://learn.microsoft.com/en-us/azure/cost-management-billing/scope-level/create-sql-license-assignments |
| Configure SQL IaaS extension registration for Hybrid Benefit | https://learn.microsoft.com/en-us/azure/cost-management-billing/scope-level/sql-iaas-extension-registration |
| Configure payment methods for MCA and MOSP bills | https://learn.microsoft.com/en-us/azure/cost-management-billing/understand/pay-bill |
| Interpret Azure detailed usage and charges CSV fields | https://learn.microsoft.com/en-us/azure/cost-management-billing/understand/understand-usage |
### Integrations & Coding Patterns
| Topic | URL |
|-------|-----|
| Analyze Azure costs using the Cost Management Power BI app | https://learn.microsoft.com/en-us/azure/cost-management-billing/costs/analyze-cost-data-azure-cost-management-power-bi-template-app |
| Automate Azure cost management with APIs and scripts | https://learn.microsoft.com/en-us/azure/cost-management-billing/costs/manage-automation |
| Map Azure billing scenarios to Cost Management APIs | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/cost-management-automation-scenarios |
| Automate MCA billing role migration across tenants with PowerShell | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/mca-role-migration |
| Create MCA subscriptions across associated Azure tenants programmatically | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/programmatically-create-customer-agreement-associated-billing-tenants |
| Create Azure EA subscriptions via REST, CLI, PowerShell, and ARM | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/programmatically-create-subscription-enterprise-agreement |
| Programmatically create MCA subscriptions using latest Azure APIs | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement |
| Programmatically create MCA subscriptions across Microsoft Entra tenants | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement-across-tenants |
| Create Azure subscriptions for Microsoft Partner Agreement via APIs | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/programmatically-create-subscription-microsoft-partner-agreement |
| Use legacy APIs to programmatically create Azure subscriptions | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/programmatically-create-subscription-preview |
| Use Cost Details API for EA billing analysis | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/review-enterprise-billing |
| Retrieve subscription billing data via Cost Details API | https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/review-subscription-billing |
| Use Azure reservation APIs for automation | https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/reservation-apis |
### Deployment
| Topic | URL |
|-------|-----|
| Set up recurring large-scale cost data exports | https://learn.microsoft.com/en-us/azure/cost-management-billing/costs/ingest-azure-usage-at-scale |
Security

azure-cost-management

Expert knowledge for Azure Cost Management development including troubleshooting, best practices, decision making, limits & quotas, security, configuration, integrations & coding patterns, and deployment. Use when managing Azure billing accounts, budgets/alerts, reservations & savings plans, exports, or cost APIs, and other Azure Cost Management related development tasks. Not for Azure Advisor (use azure-advisor), Azure Monitor (use azure-monitor), Azure Quotas (use azure-quotas), Azure Policy (

# SEO: Programmatic SEO
Guides programmatic SEO—creating large numbers of SEO-optimized pages automatically using templates and structured data, rather than writing each page manually. Works like a mail merge for web pages: one template + data yields hundreds or thousands of unique pages targeting long-tail keyword patterns.
**When invoking**: On **first use**, if helpful, open with 1–2 sentences on what this skill covers and why it matters, then provide the main output. On **subsequent use** or when the user asks to skip, go directly to the main output.
## Definition
**Programmatic SEO** = Building a single template and populating it with data from a database, API, or spreadsheet to generate hundreds or thousands of unique pages. Each page targets a long-tail keyword (e.g., "best SEO tool in [city]," "[App A] + [App B] integration").
**Key differences from traditional SEO**: Technical (SEOs + engineers); long-tail focus; data-driven (data quality = success); automation; built for scale.
## Three-Part Framework
| Component | Role |
|-----------|------|
| **Templates** | Reusable page structures: layout, headings, internal links, content blocks; conditional logic for empty fields |
| **Data** | Structured information: locations, products, prices, features—must be accurate, complete, and add genuine value |
| **Automation** | Systems connecting data to templates; pages generated dynamically or published in bulk |
## Template Structure (Recommended)
| Section | Purpose |
|---------|---------|
| **Intro** | Introduction; matches user intent |
| **Evidence block** | Data-driven content unique to each page (tables, lists, verified stats); differentiates from thin content |
| **Decision** | Comparison, recommendation, or next steps |
| **FAQ** | Frequently asked questions |
| **CTA** | Call-to-action |
**Evidence block** = Real, structured data per page (business listings, pricing, reviews, verified stats). Ensures each page delivers genuine value, not recycled boilerplate with swapped variables.
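A minimal sketch of the template + data + automation loop, here using Jinja2 as one possible stack choice. The field names (`city`, `listings`, `faq`, `avg_price`) are hypothetical; substitute your own schema. Note the conditional blocks that hide a section when its data is missing, as recommended above:
```python
from jinja2 import Template

# Hypothetical page template: Intro -> Evidence block -> FAQ -> CTA, with conditional logic for empty fields
PAGE_TEMPLATE = Template("""
<h1>Best plumbers in {{ city }}</h1>
<p>Compare {{ listings | length }} verified plumbers in {{ city }}.</p>

{% if listings %}
<table>
  <tr><th>Business</th><th>Rating</th><th>Avg. price</th></tr>
  {% for item in listings %}
  <tr><td>{{ item.name }}</td><td>{{ item.rating }}</td><td>{{ item.avg_price }}</td></tr>
  {% endfor %}
</table>
{% endif %}

{% if faq %}
<h2>FAQ</h2>
{% for q in faq %}<h3>{{ q.question }}</h3><p>{{ q.answer }}</p>{% endfor %}
{% endif %}

<a href="/quote">Get a quote in {{ city }}</a>
""")

def render_pages(rows):
    """Render one page per data row; rows is a list of dicts with hypothetical fields."""
    pages = {}
    for row in rows:
        slug = row["city"].lower().replace(" ", "-")
        pages[f"/plumbers/{slug}/"] = PAGE_TEMPLATE.render(**row)
    return pages

# Example usage
# pages = render_pages([
#     {"city": "Austin", "listings": [{"name": "Acme Plumbing", "rating": 4.8, "avg_price": "$120"}], "faq": []},
# ])
```
The same pattern works with a CMS or static-site generator; the key point is that empty fields collapse their section rather than leaving boilerplate placeholders.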
## Data Foundation
| Requirement | Practice |
|-------------|----------|
| **Provenance** | Log data sources; track origin |
| **Freshness rules** | e.g., ratings every 90 days, prices every 30 days |
| **First-party / licensed** | Prefer over scraped content |
| **Clean & merge** | Deduplicate; ensure depth |
## Ideal Use Cases
| Use case | Example |
|----------|---------|
| **Location-specific pages** | "Plumber in [city]," "Best restaurants in [neighborhood]" with real local data |
| **Product comparison** | "[Product A] vs [Product B]" with structured specs |
| **Alternatives pages** | "[Competitor] alternatives" at scale; 50+ competitors; see **alternatives-page-generator** |
| **Software integration** | "[App A] + [App B]" integration pages (e.g., Zapier 50K+ pages) |
| **Free tools** | "[X] checker," "[Y] calculator," "[Z] generator" — standalone tool pages; toolkit hub; same ICP as main product; lead gen |
| **Travel / destination** | City + attraction combinations with reviews, photos |
| **E-commerce** | Category pages, product variations (size, color, material) |
| **FAQ / Q&A** | Pages powered by user question databases |
| **Salary / pricing** | Comparison pages with structured data |
**Avoid when**: Site structure is weak; page differences are superficial (city/name swaps only); content requires original expertise or UGC participation.
## Real-World Examples
*Examples are illustrative; no endorsement implied.*
| Company | Scale | Pattern |
|---------|-------|---------|
| **Zapier** | 50,000+ pages | "[App A] + [App B]" integration |
| **Airbnb** | — | Location search; destination × property |
| **Review platforms** | — | User reviews + automated comparison pages |
| **Travel sites** | — | Destination, hotel, flight, activity pages |
| **NomadList** | 2,000+ city pages | Cost-of-living, internet speed (dynamic data) |
| **Semrush, Ahrefs** | 50+ free tools | SEO checker, keyword tool, backlink checker; toolkit hub + per-tool pages |
## Content Requirements
| Requirement | Purpose |
|-------------|---------|
| **300+ words per page** | Avoid thin content penalties |
| **Unique, verifiable data** | Each page must add meaningful page-specific content beyond simple data swaps |
| **Evidence block** | Tables, lists, examples with real numbers/attributes on every page |
| **Semantic HTML** | Proper structure; conditional logic to avoid empty or repetitive sections |
| **Internal linking** | Link related programmatic pages; compounds traffic and indexation |
## Technical Considerations
| Topic | Practice |
|-------|----------|
| **Selective indexation** | Don't index all pages; use noindex rules for low-value pages |
| **Sitemap segmentation** | By country, language, division; manage crawl budget |
| **URL structure** | Descriptive URLs; clean hierarchy; see **url-structure** |
| **Schema** | JSON-LD: Product, Place, FAQ, ItemList per page type; see the sketch after this table |
| **Performance** | Caching, static generation; Core Web Vitals |
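For the schema row above, a minimal sketch of emitting FAQPage JSON-LD per generated page (the question data is illustrative; other page types would use Product, Place, or ItemList types instead):
```python
import json

def faq_jsonld(faq_items):
    """Build a schema.org FAQPage JSON-LD snippet from a list of {'question', 'answer'} dicts."""
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": item["question"],
                "acceptedAnswer": {"@type": "Answer", "text": item["answer"]},
            }
            for item in faq_items
        ],
    }
    return f'<script type="application/ld+json">{json.dumps(data)}</script>'

# Example usage
# snippet = faq_jsonld([{"question": "How much does a callout cost?", "answer": "Typically quoted per job."}])
```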
## Critical Pitfalls
| Pitfall | Consequence |
|---------|-------------|
| **Thin content** | Minimal info beyond keyword; generic copy; placeholder sections → penalties |
| **Duplicate pages** | Same content with only data swaps → thin content penalties |
| **Index bloat** | Generating pages that should never be indexable → crawl budget waste |
| **Large dumps** | Publishing many similar pages at once → spam signals |
| **Filter URLs** | Using filters instead of unique URLs/titles → cannibalization |
Pages with only a title, one paragraph, and swapped city names will not rank and may incur Google penalties.
## Step-by-Step Workflow
1. **Research** — Niche, intent; include low-volume keywords; SEO tools, question databases
2. **Collect data** — Provenance log, freshness rules; first-party/licensed; define template fields
3. **Choose stack** — Next.js + DB, Webflow CMS, WordPress, headless; API + template reuse
4. **Design template** — Intro, Evidence, Decision, FAQ, CTA; schema; conditional logic
5. **Build database** — Map fields to template slots; hide empties
6. **Generate pages** — Descriptive URLs; optimize performance
7. **Deploy & monitor** — Sitemaps (segmentation sketch after this list); indexation, rankings, CTR, bounce, conversions
8. **Optimize** — Prune weak pages; refresh data; A/B test layout, CTA
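For the sitemap step, a minimal sketch of segmenting generated URLs into separate sitemap files plus an index — the segment keys, domain, and URL shapes are illustrative:
```python
from xml.sax.saxutils import escape

def write_segmented_sitemaps(urls_by_segment, out_prefix="sitemap", base="https://example.com"):
    """Write one sitemap file per segment (e.g., country) plus a sitemap index.

    urls_by_segment: dict mapping a segment key to a list of absolute URLs (illustrative structure).
    """
    index_entries = []
    for segment, urls in urls_by_segment.items():
        filename = f"{out_prefix}-{segment}.xml"
        entries = "\n".join(f"  <url><loc>{escape(u)}</loc></url>" for u in urls)
        with open(filename, "w") as f:
            f.write('<?xml version="1.0" encoding="UTF-8"?>\n'
                    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
                    f"{entries}\n</urlset>\n")
        index_entries.append(f"  <sitemap><loc>{base}/{filename}</loc></sitemap>")
    with open(f"{out_prefix}-index.xml", "w") as f:
        f.write('<?xml version="1.0" encoding="UTF-8"?>\n'
                '<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
                + "\n".join(index_entries) + "\n</sitemapindex>\n")

# Example usage
# write_segmented_sitemaps({"us": ["https://example.com/plumbers/austin/"],
#                           "uk": ["https://example.com/plumbers/london/"]})
```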
## Best Practices
| Practice | Purpose |
|----------|---------|
| **Quality over scale** | Each page must provide genuinely unique, verifiable value |
| **Launch in batches** | Small batches you can measure; avoid large dumps |
| **Strong IA** | Internal links to related guides/categories |
| **Visual elements** | Tables, maps, comparisons where relevant |
| **Match intent** | Avoid generic template text; precise user intent |
## Timeline & Expectations
- **Typical time to ranking**: ~6 months
- **Reported gains**: 40%+ traffic increases from well-designed topic clusters
- **AI search**: Structured, data-rich content performs better in AI Overviews and citation layers
## Output Format
- **Template design** (Intro, Evidence, Decision, FAQ, CTA; required data fields)
- **Data requirements** (provenance, freshness, accuracy)
- **Internal linking** (hub-and-spoke, related pages)
- **Indexation strategy** (selective indexation, sitemap segmentation)
- **Checklist** for audit
## Related Skills
- **template-page-generator**: Template structure; aggregation (gallery) + detail pages; programmatic template design; user-facing templates (CMS, design, vibe coding)
- **landing-page-generator**: Conversion-focused programmatic pages; programmatic landing pages; LP structure for template CTA
- **tools-page-generator**: Free tools pages; toolkit hub; programmatic tool pages; lead gen
- **alternatives-page-generator**: Alternatives/comparison pages at scale; competitor brand traffic
- **category-page-generator**: Category pages; template-based structure; faceted navigation
- **content-strategy**: Content clusters, pillar pages; programmatic pages as cluster nodes
- **url-structure**: URL hierarchy for programmatic pages
- **schema-markup**: Structured data (Product, Place, FAQ, ItemList)
- **faq-page-generator**: FAQ as programmatic page type; FAQPage schema; Q&A template structure
- **internal-links**: Linking programmatic pages
- **xml-sitemap**: Sitemap segmentation for large programmatic sites
- **canonical-tag**: Duplicate/thin content handling
- **seo-strategy**: SEO workflow; programmatic SEO as alternative strategy
Marketing

programmatic-seo

When the user wants to create SEO pages at scale using templates and data. Also use when the user mentions "programmatic SEO," "programmatic SEO pages," "template pages," "scale content," "location pages," "city pages," "comparison pages at scale," "X vs Y pages," "integration pages," "pages from data," "automated landing pages," or "programmatic landing pages."

# HR analytics and data
Comprehensive HR analytics support — from defining HR metrics and building dashboards to analyzing turnover, developing predictive models, managing data governance, and reporting to leadership.
## Supported tasks
- Analyzing employee data and turnover patterns
- Creating HR metrics, scorecards, and dashboards
- Building HR reports and data visualizations
- Conducting HR data quality audits
- Developing predictive analytics and workforce models
- Developing HR data governance and privacy strategies
- Analyzing talent acquisition ROI
- Writing HR audit reports and data management policies
- Creating diversity and inclusion scorecards
- Developing workforce segmentation strategies
## Key prompts
### HR metrics and scorecards
1. "What are the most important HR metrics for measuring [recruiting/retention/engagement/L&D] effectiveness?"
2. "Help me design an HR scorecard that tracks [key metrics] aligned with our business strategy."
3. "What KPIs should I include in a monthly HR dashboard for senior leadership?"
4. "How should I define and calculate [specific metric, for example, time-to-fill, turnover rate, cost-per-hire]?"
5. "What are the industry benchmarks for [HR metric] in [industry/company size]?"
### Data analysis and reporting
1. "How should I analyze our employee turnover data to identify the primary drivers of attrition?"
2. "What patterns in employee data indicate a high flight risk, and how should I act on them?"
3. "Help me analyze our employee satisfaction survey results to identify the top 3 priorities for improvement."
4. "How can I create a monthly HR report for leadership that tells a compelling story with data?"
5. "Write an HR audit report template that covers [key HR processes/compliance areas]."
### Dashboards and visualizations
1. "What should be included in an HR analytics dashboard for a [CHRO/HR business partner/recruiter]?"
2. "How should I design data visualizations that make HR insights easy to understand for non-HR audiences?"
3. "What are best practices for creating an employee engagement dashboard that tracks trends over time?"
4. "How can I design an HR reporting template that is consistent, scalable, and meaningful?"
### Predictive analytics
1. "How can I build a predictive attrition model using our employee data to identify at-risk employees?"
2. "What data inputs are most valuable for building a predictive staffing/hiring model?"
3. "How can I use predictive analytics to forecast workforce needs for the next 12–24 months?"
4. "What machine learning approaches are most applicable to HR analytics use cases?"
5. "How should I develop workforce segmentation strategies to identify different employee personas?"
### Talent acquisition analytics
1. "How can I conduct a talent acquisition ROI analysis that demonstrates the value of our recruiting investments?"
2. "What metrics should I track to evaluate the effectiveness of our sourcing channels?"
3. "How can I use data to identify which interview stages have the highest drop-off rates in our funnel?"
4. "Write an analysis framework for measuring the quality of hire from different sources."
### Data governance and privacy
1. "What should a comprehensive HR data governance strategy include for our organization?"
2. "How can I develop HR data standardization procedures that ensure data quality and consistency?"
3. "What are the key components of an HR data privacy policy that complies with GDPR and relevant regulations?"
4. "How should we manage access to sensitive HR data to ensure security and compliance?"
5. "Write an HR data management policy that governs how employee data is collected, stored, and used."
6. "What are best practices for conducting an HR data quality audit?"
### Diversity and inclusion analytics
1. "How can I create a diversity and inclusion scorecard that tracks representation and inclusion metrics?"
2. "What data should I collect to measure the effectiveness of our D&I initiatives?"
3. "How can I use HR data to identify potential equity gaps in hiring, promotion, and pay?"
4. "Write a workforce analytics case study template for a D&I initiative that demonstrates impact."
## Tips
- Start with business questions, not data — identify the decisions you need to support before choosing metrics.
- Prioritize data quality over volume — a few accurate, consistent metrics are more valuable than many unreliable ones.
- Combine quantitative data with qualitative insights (survey comments, focus groups) for richer analysis.
- Visualize data to tell a story — use trends, comparisons, and benchmarks rather than raw numbers.
- Protect employee privacy in all analytics work — anonymize data and obtain proper consent.
Data & Analytics

hr-analytics

Help HR managers with HR analytics and data management. Use when asked to "analyze employee data", "create HR metrics", "build an HR dashboard", "analyze turnover data", "develop HR reports", "build predictive analytics models", "create a data governance strategy", or any HR data and analytics task.

You are an autonomous transit damage prediction analyst. Do NOT ask the user questions. Read the actual codebase, evaluate damage tracking data models, packaging failure mode logic, handling chain configurations, and claims patterns, then produce a comprehensive damage prediction and prevention analysis.
TARGET: $ARGUMENTS
If arguments are provided, use them to focus the analysis (e.g., specific product categories, shipping lanes, carrier services, or damage types). If no arguments, scan the current project for all damage-related data, claims processing, and packaging protection logic.
============================================================
PHASE 1: DAMAGE DATA MODEL DISCOVERY
============================================================
Step 1.1 -- Claims Data Structure
Read damage/claims data models and identify all fields: claim ID, order/shipment reference, product SKU, damage type classification (crushed, punctured, water damage, temperature excursion, missing contents, cosmetic damage), damage severity (total loss, partial, repairable), claim value, carrier, service level, origin-destination lane, ship date, delivery date, claim date, photos/evidence, root cause assignment, resolution status.
Step 1.2 -- Product Fragility Profiles
Identify product fragility data: fragility rating (G-level sensitivity from ASTM D3332), orientation sensitivity, temperature sensitivity range, moisture sensitivity (IP rating, desiccant requirements), vibration sensitivity (resonant frequency data), stacking strength, hazmat classification, value density ($/lb), product-specific packaging specifications.
Step 1.3 -- Packaging Test Data
Read packaging test records: ISTA test series performed (1A-basic, 2A-enhanced, 3A-full simulation, 6-Amazon SIOC), test results (pass/fail/conditional), drop height tested, vibration profile applied, compression test results (BCT -- Box Compression Test), atmospheric conditioning applied, test lab and date, corrective actions from test failures.
Step 1.4 -- Supply Chain Visibility Data
Map supply chain monitoring data sources: GPS tracking, temperature loggers (Sensitech, Emerson, Tive), shock/tilt indicators (ShockWatch, SpotSee), humidity monitors, light exposure indicators (tamper detection), Lansmont SAVER field data (actual shock and vibration recordings from instrumented shipments), carrier scan event data.
============================================================
PHASE 2: DAMAGE PATTERN ANALYSIS
============================================================
Step 2.1 -- Damage Rate Calculation
Calculate damage rates across all available dimensions: overall rate (claims / shipments), by product category, by carrier, by service level, by lane (origin-destination), by season/month, by packaging configuration, by order value tier. Identify statistically significant outliers using control charts (p-chart for proportion defective).
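A minimal sketch of the p-chart outlier test described above, assuming claim and shipment counts have already been aggregated per segment; the carrier names and counts are illustrative only.

```python
import math

# Illustrative (segment, claims, shipments) aggregates.
segments = [("CarrierA", 120, 40_000), ("CarrierB", 310, 52_000), ("CarrierC", 95, 38_000)]

total_claims = sum(c for _, c, _ in segments)
total_shipments = sum(n for _, _, n in segments)
p_bar = total_claims / total_shipments  # overall damage rate

for name, claims, shipments in segments:
    p = claims / shipments
    sigma = math.sqrt(p_bar * (1 - p_bar) / shipments)
    ucl, lcl = p_bar + 3 * sigma, max(0.0, p_bar - 3 * sigma)
    if p > ucl or p < lcl:
        print(f"{name}: rate {p:.4%} outside control limits [{lcl:.4%}, {ucl:.4%}]")
```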
Step 2.2 -- Failure Mode Classification
Classify damage by failure mode: compression failure (stacking damage, pallet crush), impact/shock failure (drop damage, conveyor impact, vehicle collision), vibration fatigue (resonant frequency damage over transit duration), puncture/abrasion (conveyor belt, forklift tine, rough handling), environmental (water, humidity, temperature, UV exposure), pilferage/tampering. Map each failure mode to root causes in the handling chain.
Step 2.3 -- Temporal Pattern Detection
Analyze temporal damage patterns: day-of-week effects (Monday vs. Friday shipments), peak season damage rate increase (holiday surge, weather events), transit duration correlation (damage rate vs. days in transit), dwell time impact (time sitting at transfer hubs), seasonal weather correlation (summer heat, winter freeze, monsoon moisture).
Step 2.4 -- Claims Cost Analysis
Build a comprehensive damage cost model: direct claim cost (product replacement/refund), replacement shipping cost, return shipping for damaged goods, customer service labor per claim, customer lifetime value impact (churn rate after damage experience), brand reputation cost (negative reviews citing damage), packaging upgrade cost to prevent vs. claim cost absorbed.
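A toy sketch of the per-claim cost roll-up described in Step 2.4; every figure below is a placeholder to show the structure, not an industry benchmark.

```python
# Placeholder per-claim cost components (all assumed values).
claim_costs = {
    "product_replacement": 62.00,
    "replacement_shipping": 9.50,
    "return_shipping": 7.25,
    "support_labor": 6.80,   # roughly 20 minutes of agent time, assumed
    "clv_erosion": 14.00,    # expected churn-driven revenue loss, assumed
}
cost_per_claim = sum(claim_costs.values())

annual_shipments = 1_200_000
damage_rate = 0.012
annual_damage_cost = annual_shipments * damage_rate * cost_per_claim
print(f"${annual_damage_cost:,.0f} expected annual damage cost")
```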
============================================================
PHASE 3: HANDLING CHAIN RISK ASSESSMENT
============================================================
Step 3.1 -- Carrier Handling Profile
Evaluate carrier handling characteristics: hub transfer count by service level (each transfer = additional drop risk), package handling automation level (belt vs. manual sort), sort system type and impact severity (tilt tray < sliding shoe < bomb bay), vehicle type and suspension quality, driver delivery handling behavior (ground placement vs. thrown), carrier damage claim dispute rate and claims process friction.
Step 3.2 -- Distribution Environment Modeling
Model distribution environment hazards per ISTA distribution environment guidelines:
- Expected drop heights by package weight: 1-10 lbs = 30" drop, 11-25 lbs = 24" drop, 26-45 lbs = 18" drop, 46-65 lbs = 12" drop (see the sketch after this list)
- Vibration PSD (Power Spectral Density) profile for truck transport per ASTM D4728
- Compression from stacking during warehouse dwell and vehicle transport
- Atmospheric conditions by lane (temperature range, humidity, altitude)
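A small sketch encoding the drop-height tiers above so they can be applied programmatically; the weight bands mirror the list, and anything heavier is assumed to move as palletized freight.

```python
def expected_drop_height_in(package_weight_lb: float) -> float | None:
    """Expected handling drop height (inches) by parcel weight, per the tiers above."""
    tiers = [(10, 30), (25, 24), (45, 18), (65, 12)]
    for max_lb, drop_in in tiers:
        if package_weight_lb <= max_lb:
            return drop_in
    return None  # above parcel range; treat as palletized freight
```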
Step 3.3 -- Last-Mile Risk Factors
Assess last-mile specific risks: porch piracy (theft exposure time on doorstep), weather exposure after delivery (rain, sun, heat), residential delivery drop distance (driver release from standing height), apartment building handling (lobby pile, elevator transport), multi-carrier handoff points (SurePost/SmartPost USPS injection), locker/access point protection level.
============================================================
PHASE 4: PREDICTIVE MODELING
============================================================
Step 4.1 -- Risk Scoring Model
Evaluate or design a damage risk scoring model: input features (product fragility, package type, carrier, service level, lane, season, order value), model type (logistic regression, random forest, gradient boosting), training data quality (claim data completeness, reporting lag, bias toward high-value claims), prediction target (binary damage/no-damage or continuous damage probability), model performance metrics (AUC, precision, recall at operational threshold).
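A hedged sketch of the kind of risk-scoring model described in Step 4.1, assuming a shipment-level table with the listed features already joined; the file name, column names, and split are hypothetical.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical table: one row per shipment, `damaged` is the binary label.
df = pd.read_csv("shipments_with_claims.csv")
categorical = ["carrier", "service_level", "lane", "package_type", "season"]
numeric = ["fragility_g", "order_value", "transit_days"]

X_train, X_test, y_train, y_test = train_test_split(
    df[categorical + numeric], df["damaged"],
    test_size=0.2, stratify=df["damaged"], random_state=0,
)

model = Pipeline([
    ("prep", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), categorical)],
        remainder="passthrough")),
    ("clf", LogisticRegression(max_iter=1000, class_weight="balanced")),
])
model.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```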
Step 4.2 -- Route-Level Risk Assessment
Score shipping routes by damage risk: identify high-risk lanes (routes with many hub transfers, extreme weather corridors, congested terminals), carrier performance variation by lane (same route, different damage rates), seasonal route risk variation, mode-specific risk comparison (ground vs. air vs. intermodal).
Step 4.3 -- Protection Level Optimization
Optimize packaging protection by risk level: define protection tiers (standard, enhanced, maximum), map products to tiers based on fragility + route risk, calculate packaging cost delta between tiers, model expected damage reduction from tier upgrade, find the cost-optimal protection level where packaging cost increase is less than expected damage cost reduction. Reference cushion curve design per MIL-HDBK-304.
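A minimal sketch of the cost-optimal tier comparison in Step 4.3; the per-unit packaging costs, damage rates, and cost per claim are placeholder assumptions, not measured values.

```python
# Hypothetical per-unit economics for one SKU on one lane.
tiers = {
    "standard": {"packaging_cost": 0.80, "damage_rate": 0.020},
    "enhanced": {"packaging_cost": 1.35, "damage_rate": 0.008},
    "maximum":  {"packaging_cost": 2.10, "damage_rate": 0.004},
}
cost_per_claim = 95.0  # replacement + reshipment + support labor, assumed

def expected_unit_cost(tier: dict) -> float:
    return tier["packaging_cost"] + tier["damage_rate"] * cost_per_claim

best = min(tiers, key=lambda name: expected_unit_cost(tiers[name]))
for name, tier in tiers.items():
    print(f"{name}: {expected_unit_cost(tier):.2f} per unit")
print("cost-optimal tier:", best)
```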
============================================================
PHASE 5: PREVENTION & MONITORING
============================================================
Step 5.1 -- Packaging Design Validation
Evaluate the packaging validation process: new product packaging sign-off workflow, ISTA test requirements by product tier, vendor packaging compliance audits, packaging change management (triggered when product dimensions or fragility change), e-commerce vs. retail packaging differentiation (SIOC -- Ships In Own Container certification).
Step 5.2 -- Real-Time Monitoring
Assess real-time damage detection capabilities: IoT sensor integration for in-transit monitoring (shock, tilt, temperature breach alerts), carrier exception event correlation with damage outcomes, automated claims initiation from sensor breach events, customer damage report intake and triage workflow.
Step 5.3 -- Continuous Improvement Loop
Evaluate the feedback loop: damage data flowing back to packaging engineering, carrier scorecards including damage metrics, product design incorporating transit survivability requirements, root cause analysis driving corrective action, packaging test protocols updated based on field failure data, vendor packaging compliance improvement tracking over time.
============================================================
PHASE 6: WRITE REPORT
============================================================
Write analysis to `docs/damage-prediction-analysis.md` (create `docs/` if needed).
Include: Executive Summary (overall damage rate, annual damage cost, top failure modes), Damage Pattern Analysis (rates by carrier/lane/product/season), Handling Chain Risk Assessment, Predictive Model Evaluation, Protection Level Optimization Recommendations, Prevention Program Maturity Assessment, Prioritized Actions with estimated damage cost reduction.
============================================================
OUTPUT
============================================================
## Damage Prediction Analysis Complete
- Report: `docs/damage-prediction-analysis.md`
- Damage data records analyzed: [count]
- Overall damage rate: [percentage]
- Top failure mode: [mode] ([percentage] of claims)
- Annual damage cost: [total]
- Highest-risk lane: [origin] -> [destination] ([rate])
### Summary Table
| Area | Status | Priority |
|------|--------|----------|
| Damage rate trending | [status] | [priority] |
| Failure mode classification | [status] | [priority] |
| Carrier risk profiling | [status] | [priority] |
| Packaging protection levels | [status] | [priority] |
| Predictive model accuracy | [status] | [priority] |
| Prevention feedback loop | [status] | [priority] |
NEXT STEPS:
- "Run `/box-optimization` to redesign packaging for high-damage product categories."
- "Run `/shipping-cost` to evaluate whether carrier changes reduce both cost and damage."
- "Run `/warehouse-flow` to assess handling damage within the warehouse before carrier handoff."
DO NOT:
- Attribute all damage to carriers without analyzing warehouse-origin handling damage.
- Recommend over-packaging as a blanket solution -- it increases DIM weight cost and waste.
- Ignore low-frequency high-severity damage events in favor of high-frequency cosmetic damage.
- Use damage claim counts without normalizing by shipment volume for rate comparisons.
- Skip ISTA/ASTM test correlation -- field damage without test validation is anecdotal.
Security

damage-prediction

Audit transit damage prediction and prevention systems for packaging failure mode analysis, handling chain risk assessment, claims pattern detection, and protection level optimization. Covers ISTA/ASTM test protocol correlation, product fragility profiling (G-level sensitivity), carrier handling characterization, last-mile risk factors, route-level damage scoring, IoT sensor integration (shock, tilt, temperature), cost-optimal packaging tier modeling per MIL-HDBK-304, and continuous improvement loop evaluation.

You are the skill evolution engine. You read development cycle analysis (/recall output)
and quality metrics (/metrics output), then patch skill instructions to prevent recurring issues.
Do NOT ask the user questions. Analyze findings and apply patches autonomously.
ARGUMENTS: $ARGUMENTS
- If arguments contain `--dry-run`, show proposed patches WITHOUT applying them.
- Otherwise, apply patches normally.
CONSTRAINTS:
- Maximum 3 skills patched per run (keep changes reviewable)
- Patches are ADDITIVE only (add checklist items, add phases, add gates)
- Never delete existing skill instructions
- Never modify skill names or descriptions
- Every patch must be justified by a specific finding
- Bump the version number of any modified skill (if it has one)
============================================================
PHASE 1: GATHER FINDINGS
============================================================
1. Auto-detect the project's memory directory by searching:
   - `.claude/projects/` directories matching the current project path
   - `~/.claude/projects/` directories (replacing path separators with `-`)
   - The project root for any `MEMORY.md`
2. In the memory directory, look for:
   - `recall-*.md` files (development cycle analysis)
   - `MEMORY.md` (project memory with metrics baseline and debt items)
   - Any `*-metrics-*.md` or `*-recall-*.md` files
3. Search for metrics snapshots:
   - Check the memory directory for `metrics-*.md` files
   - Check sibling directories of the memory directory for metrics data
   - Check for any `metrics/` subdirectory in the project
4. If no recall/metrics data exists, run the analysis:
   - Execute `git log` commands to get commit data
   - Classify commits by type and skill signature
   - Identify rework patterns (fix commits following feat commits); see the sketch after this list
5. Extract actionable findings:
   - Root causes of rework (from recall "What caused unnecessary rework" section)
   - Metrics that regressed or missed targets
   - Rework hotspots and their causes
   - Pipeline execution gaps (skipped/reordered steps)
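If no recall/metrics data exists, the fallback analysis in step 4 might look roughly like this sketch, assuming Conventional Commit prefixes (`feat:`, `fix:`) in the subject lines; it is an illustration, not the required implementation.

```python
import re
import subprocess

# Newest-first commit subjects plus the files each commit touched.
raw = subprocess.run(
    ["git", "log", "--name-only", "--pretty=format:@@%s"],
    capture_output=True, text=True, check=True,
).stdout

commits = []
for chunk in raw.split("@@")[1:]:
    lines = [l for l in chunk.strip().splitlines() if l]
    subject, files = lines[0], set(lines[1:])
    match = re.match(r"(feat|fix|refactor|chore|docs|test)", subject)
    commits.append((match[0] if match else None, files))

# Rework signal: a fix commit touching files changed by an earlier feat commit.
feat_files: set[str] = set()
rework = 0
for kind, files in reversed(commits):  # iterate oldest first
    if kind == "feat":
        feat_files |= files
    elif kind == "fix" and files & feat_files:
        rework += 1
print(f"{rework} fix commits reworked files from earlier feat commits")
```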
============================================================
PHASE 2: MAP FINDINGS TO SKILLS
============================================================
For each finding, determine which skill(s) should be patched.
Use the pattern categories below. These are tech-stack-agnostic — adapt
the specific checklist items to whatever stack the project uses.
| Finding Category | Target Skill | Patch Type |
|-----------------|-------------|------------|
| Missing error handling / defensive coding | `/iterate` | Add error-handling checklist for the detected stack |
| Accessibility added as afterthought | `/iterate` | Add a11y requirement to component/screen creation |
| Unbounded queries or missing pagination | `/iterate` | Add query-safety checklist (limits, cursors, indexes) |
| Missing idempotency in async jobs | `/iterate` | Add idempotency checklist for the job/worker framework |
| Too many QA passes (>2) without convergence | `/qa` | Add "route upstream after 2 rounds" instruction |
| Performance/scale issues found late | `/iterate` | Add perf checklist (N+1, caching, lazy loading) |
| Design/theme inconsistency | `/iterate` | Add design-token-first requirement |
| Schema/data-model churn | `/arch-review` | Add schema design phase before implementation |
| Domain inconsistencies across layers | `/analyze` | Add cross-layer naming/contract checks |
| Missing cleanup/disposal of resources | `/iterate` | Add resource lifecycle checklist (connections, listeners, timers) |
| Security issues found late | `/iterate` | Add security checklist (input validation, auth checks, secrets) |
| Tests written in batch after features | `/iterate` | Add "test with feature" co-commit requirement |
| Dead code / orphaned files accumulating | `/iterate` | Add cleanup step to feature completion |
| Missing input validation | `/iterate` | Add validation checklist for API/form inputs |
Prioritize by impact: patches that prevent the most rework commits come first.
============================================================
PHASE 3: GENERATE PATCHES
============================================================
For each patch (max 3):
1. Read the current SKILL.md file for the target skill.
2. Identify WHERE to insert the new content:
   - Checklists: add to existing checklist section or create one
   - Phase instructions: add to the relevant phase
   - Gates: add between existing phases
3. Generate the patch content:
   - Use the same formatting style as the existing skill
   - Reference the finding that justifies the patch
   - Keep additions concise (3-10 lines per patch)
4. If `--dry-run` mode:
   - Show the proposed diff (before/after) for each patch
   - Show which file would be modified and where
   - Do NOT apply any changes
   - Skip Phase 4 (logging)
   - Output the report and stop
5. If normal mode:
   - Apply the patch using the Edit tool
   - Bump the version number in the skill header (if present)
============================================================
PHASE 4: LOG CHANGES
============================================================
Skip this phase entirely if `--dry-run` was specified.
1. Append to `~/.claude/skills/CHANGELOG.md`:
   ```
   ## {date}
   ### {skill name} v{old} -> v{new}
   **Triggered by:** {project name} /recall analysis
   **Finding:** {specific finding from recall}
   **Patch:** {what was added/changed}
   ```
2. Update the project's MEMORY.md to note which skills were evolved:
   ```
   ## Last /evolve Run ({date})
   - Patched: /iterate v4 -> v5 (added error-handling checklist)
   - Patched: /qa v3 -> v4 (added upstream routing)
   ```
3. If a sync/backup script exists at `~/.claude/scripts/sync-backup.sh`, run it.
   Otherwise, skip this step silently.
============================================================
OUTPUT
============================================================
## Skill Evolution Report
**Mode:** {normal | dry-run}
### Findings Analyzed
| # | Finding | Source | Impact (est. fix commits prevented) |
|---|---------|--------|-------------------------------------|
### Patches {Applied | Proposed (dry-run)}
| Skill | Version | Patch Summary | Justified By |
|-------|---------|--------------|--------------|
### Patch Details
For each patch, show the before/after diff of the skill file.
### Deferred Findings
Findings that could not be addressed by skill patches (need architectural changes, etc.)
NEXT STEPS:
- "Run the patched skills on your next project to validate improvements."
- "Run `/metrics` after the next project to measure impact."
- "Run `/promote` to check if these patterns should be global."
Data & Analytics

evolve

Self-improving skill system. Reads /recall and /metrics output, identifies which skills need patching based on rework patterns and regression data, generates additive patches, and logs changes. Evolves skills based on learnings from any tech stack.

# EDOT Python Instrumentation
Read the setup guide before making changes:
- [EDOT Python setup](https://www.elastic.co/docs/reference/opentelemetry/edot-sdks/python/setup)
- [EDOT Python configuration](https://www.elastic.co/docs/reference/opentelemetry/edot-sdks/python/configuration)
- [OpenTelemetry Python auto-instrumentation](https://opentelemetry.io/docs/zero-code/python/)
## Guidelines
1. Install `elastic-opentelemetry` via pip (add to `requirements.txt` or equivalent)
1. Run `edot-bootstrap --action=install` during image build to install auto-instrumentation packages for detected
   libraries
1. Wrap the application entrypoint with `opentelemetry-instrument` — e.g. `opentelemetry-instrument gunicorn app:app` or
   `opentelemetry-instrument python app.py`. Without this, no telemetry is collected
1. Set exactly three required environment variables:
   - `OTEL_SERVICE_NAME`
   - `OTEL_EXPORTER_OTLP_ENDPOINT` — must be the **managed OTLP endpoint** or **EDOT Collector** URL. Never use an APM
     Server URL (no `apm-server`, no `:8200`, no `/intake/v2/events`)
   - `OTEL_EXPORTER_OTLP_HEADERS` set to `"Authorization=ApiKey <key>"` or `"Authorization=Bearer <token>"`
1. Do NOT set `OTEL_TRACES_EXPORTER`, `OTEL_METRICS_EXPORTER`, or `OTEL_LOGS_EXPORTER` — the defaults are already
   correct
1. Do NOT add code-level SDK setup (no `TracerProvider`, no `configure_azure_monitor`, etc.)
   `opentelemetry-instrument` handles everything
1. Never run both classic `elastic-apm` and EDOT on the same application
## Examples
See the [EDOT Python setup guide](https://www.elastic.co/docs/reference/opentelemetry/edot-sdks/python/setup) for
complete examples.
Data & Analytics

observability-edot-python-instrument

Instrument a Python application with the Elastic Distribution of OpenTelemetry (EDOT) Python agent for automatic tracing, metrics, and logs. Use when adding observability to a Python service that has no existing APM agent.

# /ar:setup — Create New Experiment
Set up a new autoresearch experiment with all required configuration.
## Usage
```
/ar:setup                                    # Interactive mode
/ar:setup engineering api-speed src/api.py "pytest bench.py" p50_ms lower
/ar:setup --list                             # Show existing experiments
/ar:setup --list-evaluators                  # Show available evaluators
```
## What It Does
### If arguments provided
Pass them directly to the setup script:
```bash
python {skill_path}/scripts/setup_experiment.py \
  --domain {domain} --name {name} \
  --target {target} --eval "{eval_cmd}" \
  --metric {metric} --direction {direction} \
  [--evaluator {evaluator}] [--scope {scope}]
```
### If no arguments (interactive mode)
Collect each parameter one at a time:
1. **Domain** — Ask: "What domain? (engineering, marketing, content, prompts, custom)"
2. **Name** — Ask: "Experiment name? (e.g., api-speed, blog-titles)"
3. **Target file** — Ask: "Which file to optimize?" Verify it exists.
4. **Eval command** — Ask: "How to measure it? (e.g., pytest bench.py, python evaluate.py)"
5. **Metric** — Ask: "What metric does the eval output? (e.g., p50_ms, ctr_score)"
6. **Direction** — Ask: "Is lower or higher better?"
7. **Evaluator** (optional) — Show built-in evaluators. Ask: "Use a built-in evaluator, or your own?"
8. **Scope** — Ask: "Store in project (.autoresearch/) or user (~/.autoresearch/)?"
Then run `setup_experiment.py` with the collected parameters.
### Listing
```bash
# Show existing experiments
python {skill_path}/scripts/setup_experiment.py --list
# Show available evaluators
python {skill_path}/scripts/setup_experiment.py --list-evaluators
```
## Built-in Evaluators
| Name | Metric | Use Case |
|------|--------|----------|
| `benchmark_speed` | `p50_ms` (lower) | Function/API execution time |
| `benchmark_size` | `size_bytes` (lower) | File, bundle, Docker image size |
| `test_pass_rate` | `pass_rate` (higher) | Test suite pass percentage |
| `build_speed` | `build_seconds` (lower) | Build/compile/Docker build time |
| `memory_usage` | `peak_mb` (lower) | Peak memory during execution |
| `llm_judge_content` | `ctr_score` (higher) | Headlines, titles, descriptions |
| `llm_judge_prompt` | `quality_score` (higher) | System prompts, agent instructions |
| `llm_judge_copy` | `engagement_score` (higher) | Social posts, ad copy, emails |
## After Setup
Report to the user:
- Experiment path and branch name
- Whether the eval command worked and the baseline metric
- Suggest: "Run `/ar:run {domain}/{name}` to start iterating, or `/ar:loop {domain}/{name}` for autonomous mode."
Data & Analytics

setup

Set up a new autoresearch experiment interactively. Collects domain, target file, eval command, metric, direction, and evaluator.

You are an autonomous grant management analysis agent. Investigate the entire codebase to evaluate grant application workflows, deadline management, budget construction, proposal quality scoring, compliance tracking, funder reporting, and institutional knowledge reuse. Do NOT ask the user questions.
INPUT: $ARGUMENTS (optional)
If provided, focus on a specific area (e.g., "deadline tracking", "budget management", "outcome reporting", "proposal quality"). If not provided, perform a full grant management system analysis.
============================================================
PHASE 1: SYSTEM ARCHITECTURE & WORKFLOW DISCOVERY
============================================================
1. Identify the tech stack and infrastructure:
   - Read package.json, requirements.txt, go.mod, Gemfile, pom.xml, or equivalent.
   - Identify database(s) for grant records, proposal content, and financial data.
   - Identify document management and template systems.
   - Identify external integrations (funder portals, financial systems, outcome tracking).
   - Identify collaboration and workflow tools.
2. Map the grant lifecycle:
   - Document the complete grant workflow from opportunity identification to closeout.
   - Identify all status transitions (prospect, preparing, submitted, awarded, active, reporting, closeout, declined).
   - Map user roles at each stage (development officer, program staff, finance, leadership, grants manager).
   - Check for multi-grant portfolio management capability.
3. Inventory core modules:
   - Opportunity identification and tracking.
   - Proposal development and writing.
   - Budget preparation and narrative alignment.
   - Submission management and deadline tracking.
   - Award management and compliance.
   - Financial tracking and drawdown management.
   - Outcome measurement and reporting.
   - Funder relationship management.
   - Institutional knowledge and reuse.
============================================================
PHASE 2: OPPORTUNITY IDENTIFICATION & PIPELINE
============================================================
Evaluate how effectively grant opportunities are found and managed.
PROSPECT RESEARCH:
- Check for funder database integration or search capability.
- Verify that prospect records capture funder priorities, giving history, and deadlines.
- Check for automated opportunity matching based on organizational mission and programs.
- Validate that funder relationship history is tracked (previous applications, awards).
- Check for funder contact management and relationship notes.
PIPELINE MANAGEMENT:
- Check for grant pipeline visualization (Kanban, timeline, or table views).
- Verify that the pipeline tracks probability of success for each opportunity.
- Check for pipeline revenue forecasting (expected funding by quarter and year).
- Validate that declined grants inform future strategy (reasons for decline tracked).
- Check for duplicate opportunity detection.
STRATEGIC ALIGNMENT:
- Check for mission and program alignment scoring on each opportunity.
- Verify that capacity assessment is part of the go/no-go decision workflow.
- Check for cost of pursuit tracking (staff time to prepare an application).
- Validate that diversification goals are visible (avoid over-reliance on a single funder).
============================================================
PHASE 3: PROPOSAL DEVELOPMENT ANALYSIS
============================================================
Evaluate proposal creation and quality optimization.
CONTENT MANAGEMENT:
- Check for a proposal content library (reusable boilerplate by topic).
- Verify that organizational descriptions, mission statements, and capability statements are centrally maintained and version-controlled.
- Check for program description templates by service area.
- Validate that outcome data and success stories are accessible during writing.
- Check for logic model and theory of change documentation per program.
COLLABORATIVE WRITING:
- Check for multi-author support with role-based editing.
- Verify that review and approval workflows are enforced before submission.
- Check for comment and feedback tracking within proposal drafts.
- Validate that version history preserves all drafts and reviewer changes.
- Check for concurrent editing support to prevent conflicts.
PROPOSAL QUALITY SCORING:
- Check for automated proposal quality assessment covering:
  - Completeness (all required sections addressed).
  - Responsiveness (alignment with funder priorities and RFP requirements).
  - Clarity (readability and logical flow).
  - Evidence strength (data citations, outcome evidence).
  - Budget-narrative alignment (costs match activities described).
- Verify that quality scores are tracked and correlated with success rates.
- Check for peer review scoring rubrics.
FUNDER REQUIREMENT COMPLIANCE:
- Check for RFP requirement parsing and checklist generation.
- Verify that formatting requirements are enforced (page limits, font, margins).
- Check for required attachment tracking (IRS determination letter, audit, board list).
- Validate that funder-specific terminology preferences are captured.
============================================================
PHASE 4: BUDGET PREPARATION & NARRATIVE ALIGNMENT
============================================================
Evaluate budget development and its connection to program narrative.
BUDGET CONSTRUCTION:
- Check for budget template library by funder type (federal, foundation, corporate).
- Verify line item categorization (personnel, fringe, travel, equipment, supplies, contractual, indirect/overhead).
- Check for indirect cost rate management (federally negotiated, de minimis, funder-specific).
- Validate that budget calculations are accurate (salary x FTE x months, fringe rates); see the sketch after this list.
- Check for multi-year budget projection support.
- Verify cost allocation across multiple funding sources.
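A tiny sketch of the budget arithmetic flagged above (personnel line plus fringe and indirect); the rates are placeholders, and real indirect calculations follow the funder's base (e.g., MTDC exclusions), which is out of scope here.

```python
def personnel_line(annual_salary: float, fte: float, months: int,
                   fringe_rate: float = 0.28) -> dict:
    """Salary x FTE x effort period, plus fringe at the organization's assumed rate."""
    salary = annual_salary * fte * (months / 12)
    fringe = salary * fringe_rate
    return {"salary": round(salary, 2), "fringe": round(fringe, 2)}

line = personnel_line(annual_salary=82_000, fte=0.5, months=18)
direct_total = sum(line.values()) + 6_500   # other direct costs, assumed
indirect = direct_total * 0.10              # simplified de minimis rate (ignores MTDC exclusions)
print(line, round(direct_total + indirect, 2))
```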
BUDGET-NARRATIVE ALIGNMENT:
- Check that every budget line item connects to a described program activity.
- Verify that every program activity in the narrative has corresponding budget support.
- Check for automated gap detection between budget and narrative.
- Validate that cost reasonableness is documented (why this amount for this activity).
- Check for cost-per-outcome calculation capability.
MATCH AND COST SHARING:
- Check for matching fund tracking and documentation.
- Verify in-kind contribution valuation methodology.
- Check for match commitment tracking from partner organizations.
- Validate that match requirements are visible during budget preparation.
BUDGET MODIFICATION:
- Check for budget modification request workflow.
- Verify that carryforward calculations are supported.
- Check for variance reporting (budget vs. actual by line item).
- Validate that budget modifications maintain alignment with approved scope.
============================================================
PHASE 5: DEADLINE TRACKING & SUBMISSION MANAGEMENT
============================================================
Evaluate deadline management -- a missed deadline means a lost opportunity.
DEADLINE TRACKING:
- Check for a comprehensive deadline calendar across all grants.
- Verify that deadlines cover all stages (LOI, full proposal, reports, closeout).
- Check for escalating reminders (30-day, 14-day, 7-day, 3-day, day-of); see the sketch after this list.
- Validate that reminders reach all responsible parties (writer, reviewer, submitter).
- Check for internal deadline management (earlier than funder deadline for review time).
- Verify timezone handling for national and international funders.
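A small sketch generating the escalating reminder schedule above from a single funder deadline, including an internal buffer deadline; the five-day buffer is an assumption, not a rule.

```python
from datetime import date, timedelta

def reminder_schedule(funder_deadline: date, internal_buffer_days: int = 5) -> dict:
    internal = funder_deadline - timedelta(days=internal_buffer_days)
    offsets = [30, 14, 7, 3, 0]  # days before the internal deadline
    return {
        "internal_deadline": internal,
        "reminders": [internal - timedelta(days=d) for d in offsets],
    }

schedule = reminder_schedule(date(2025, 9, 15))
print(schedule["internal_deadline"], schedule["reminders"])
```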
SUBMISSION WORKFLOW:
- Check for pre-submission checklist enforcement.
- Verify that all required components are validated before submission is allowed.
- Check for funder portal integration (direct submission from system).
- Validate that submission confirmation is captured and stored.
- Check for submission receipt tracking and follow-up scheduling.
RENEWAL AND REPORTING DEADLINES:
- Check for automatic generation of reporting deadlines upon award.
- Verify that renewal application deadlines are tracked proactively.
- Check for report template pre-population from existing data.
- Validate that late report consequences are documented and visible.
============================================================
PHASE 6: AWARD MANAGEMENT & COMPLIANCE
============================================================
Evaluate post-award grant administration.
AWARD SETUP:
- Check for award record creation with all key terms captured.
- Verify that grant agreement terms are parsed into compliance requirements.
- Check for restricted vs. unrestricted fund classification.
- Validate that award modifications and amendments are tracked.
- Check for sub-award and sub-grant management if applicable.
FINANCIAL COMPLIANCE:
- Check for expenditure tracking against approved budget line items.
- Verify that spending alerts trigger when approaching budget limits.
- Check for allowable/unallowable cost flagging based on funder rules.
- Validate that financial reports align with funder-required formats.
- Check for drawdown and reimbursement request management.
- Verify that interest earned on federal funds is tracked if required.
REGULATORY COMPLIANCE:
- Check for federal grant compliance features (Uniform Guidance 2 CFR 200).
- Verify that Single Audit threshold monitoring is supported.
- Check for subrecipient monitoring requirements.
- Validate that procurement standards compliance is supported.
- Check for time and effort reporting for personnel charged to grants.
============================================================
PHASE 7: OUTCOME REPORTING & FUNDER COMMUNICATION
============================================================
Evaluate how effectively outcomes are reported to funders.
OUTCOME DATA COLLECTION:
- Check for outcome indicator tracking aligned to grant objectives.
- Verify that data collection schedules match reporting requirements.
- Check for both quantitative and qualitative outcome capture.
- Validate that outcome data connects to the logic model or theory of change.
- Check for beneficiary-level outcome tracking (not just aggregate).
REPORT GENERATION:
- Check for funder-specific report template support.
- Verify that reports auto-populate with financial and outcome data.
- Check for narrative section support with evidence integration.
- Validate that report drafts go through review before submission.
- Check for report comparison across periods (progress over time).
FUNDER RELATIONSHIP:
- Check for funder communication log (calls, emails, meetings, site visits).
- Verify that funder feedback on reports is captured and acted upon.
- Check for grant officer contact management per award.
- Validate that funder relationship health indicators are tracked.
SUCCESS RATE OPTIMIZATION:
- Check for win rate tracking by funder, program area, and proposal type.
- Verify that declined proposal feedback is captured and analyzed.
- Check for success factor analysis (what distinguishes winning proposals).
- Validate that institutional learning is captured for future proposals.
- Check for trend analysis (improving or declining success rates).
============================================================
PHASE 8: INSTITUTIONAL KNOWLEDGE & REUSE
============================================================
Evaluate how the system preserves and leverages organizational knowledge.
CONTENT REUSE:
- Check for a searchable library of past proposals by topic, funder, and outcome.
- Verify that successful proposal language is tagged and retrievable.
- Check for boilerplate management with version control.
- Validate that outcome data and success stories are indexed for retrieval.
- Check for staff transition continuity (knowledge not lost when people leave).
ANALYTICS & STRATEGY:
- Check for grant revenue trend analysis (growing, stable, declining).
- Verify funder diversification metrics (concentration risk).
- Check for cost-of-fundraising calculation for grants vs. other revenue.
- Validate that pipeline-to-award conversion analysis informs prospecting.
- Check for program area funding gap analysis.
============================================================
OUTPUT
============================================================
## Grant Management System Analysis Report
### System: {detected platform/stack}
### Scope: {what was analyzed}
### Active Grants Managed: {count or "unable to determine"}
### Funder Types Supported: {federal/foundation/corporate/government}
### Module Assessment Summary
| Module | Status | Efficiency | Critical Gaps |
|---|---|---|---|
| Opportunity Pipeline | {Robust/Partial/Minimal} | {score}/10 | {count} |
| Proposal Development | {Streamlined/Functional/Manual} | {score}/10 | {count} |
| Budget Preparation | {Automated/Template/Manual} | {score}/10 | {count} |
| Deadline Tracking | {Comprehensive/Partial/Basic} | {score}/10 | {count} |
| Award Compliance | {Robust/Partial/Basic} | {score}/10 | {count} |
| Outcome Reporting | {Integrated/Functional/Manual} | {score}/10 | {count} |
| Knowledge Reuse | {Searchable/Basic/None} | {score}/10 | {count} |
### Critical Findings
| # | Finding | Module | Severity | Impact |
|---|---|---|---|---|
| 1 | {description} | {module} | {Critical/High/Medium/Low} | {funding risk / compliance risk} |
### Deadline Safety Assessment: {score}/100
- All deadlines tracked: {Yes/Partial/No}
- Escalating reminders: {Yes/No}
- Internal buffer deadlines: {Yes/No}
- Submission checklist enforced: {Yes/No}
### Budget-Narrative Alignment
- Automated alignment checking: {Yes/No}
- Cost-per-outcome calculation: {Yes/No}
- Indirect cost management: {Configured/Manual/None}
- Match tracking: {Yes/No}
### Proposal Quality Optimization
- Quality scoring: {Automated/Rubric/None}
- Success rate tracking: {Yes/No}
- Decline feedback capture: {Yes/No}
- Content reuse library: {Searchable/Basic/None}
### Compliance Readiness
- Federal (2 CFR 200): {Ready/Partial/Not Applicable}
- Foundation requirements: {Tracked/Ad Hoc/Not Tracked}
- Financial reporting: {Automated/Manual}
- Audit readiness: {High/Medium/Low}
DO NOT:
- Overlook deadline tracking deficiencies -- a missed deadline is a lost opportunity.
- Ignore budget-narrative alignment -- misalignment is the top reason proposals are declined.
- Treat proposal quality as subjective -- systematic scoring improves win rates.
- Skip compliance review for federal grants -- non-compliance risks debarment.
- Assume success rate optimization is automatic -- it requires deliberate feedback loops.
- Evaluate grant management without considering staff capacity and workload.
- Ignore institutional knowledge capture -- staff turnover is common in nonprofits.
NEXT STEPS:
- "Address critical deadline tracking gaps to prevent missed submission windows."
- "Run `/impact-measurement` to strengthen outcome data feeding into grant reports."
- "Run `/fundraising-optimizer` to evaluate grants alongside other revenue channels."
- "Implement proposal quality scoring to improve win rates systematically."
- "Build budget-narrative alignment checking into the proposal review workflow."
- "Establish institutional knowledge library from past successful proposals."
Security

grant-writer

Audit a grant management system for proposal workflow efficiency, deadline tracking, budget-narrative alignment, outcome reporting, compliance readiness, and win rate optimization. Use when reviewing nonprofit grant software, building a grants CRM, analyzing proposal pipelines, or evaluating funder reporting tools.

# Plotly
Python graphing library for creating interactive, publication-quality visualizations with 40+ chart types.
## Quick Start
Install Plotly:
```bash
uv pip install plotly
```
Basic usage with Plotly Express (high-level API):
```python
import plotly.express as px
import pandas as pd
df = pd.DataFrame({
    'x': [1, 2, 3, 4],
    'y': [10, 11, 12, 13]
})
fig = px.scatter(df, x='x', y='y', title='My First Plot')
fig.show()
```
## Choosing Between APIs
### Use Plotly Express (px)
For quick, standard visualizations with sensible defaults:
- Working with pandas DataFrames
- Creating common chart types (scatter, line, bar, histogram, etc.)
- Need automatic color encoding and legends
- Want minimal code (1-5 lines)
See [reference/plotly-express.md](reference/plotly-express.md) for complete guide.
### Use Graph Objects (go)
For fine-grained control and custom visualizations:
- Chart types not in Plotly Express (3D mesh, isosurface, complex financial charts)
- Building complex multi-trace figures from scratch
- Need precise control over individual components
- Creating specialized visualizations with custom shapes and annotations
See [reference/graph-objects.md](reference/graph-objects.md) for complete guide.
**Note:** Plotly Express returns a Graph Objects `Figure`, so you can combine approaches:
```python
fig = px.scatter(df, x='x', y='y')
fig.update_layout(title='Custom Title')  # Use go methods on px figure
fig.add_hline(y=10)                     # Add shapes
```
## Core Capabilities
### 1. Chart Types
Plotly supports 40+ chart types organized into categories:
**Basic Charts:** scatter, line, bar, pie, area, bubble
**Statistical Charts:** histogram, box plot, violin, distribution, error bars
**Scientific Charts:** heatmap, contour, ternary, image display
**Financial Charts:** candlestick, OHLC, waterfall, funnel, time series
**Maps:** scatter maps, choropleth, density maps (geographic visualization)
**3D Charts:** scatter3d, surface, mesh, cone, volume
**Specialized:** sunburst, treemap, sankey, parallel coordinates, gauge
For detailed examples and usage of all chart types, see [reference/chart-types.md](reference/chart-types.md).
### 2. Layouts and Styling
**Subplots:** Create multi-plot figures with shared axes:
```python
from plotly.subplots import make_subplots
import plotly.graph_objects as go
fig = make_subplots(rows=2, cols=2, subplot_titles=('A', 'B', 'C', 'D'))
fig.add_trace(go.Scatter(x=[1, 2], y=[3, 4]), row=1, col=1)
```
**Templates:** Apply coordinated styling:
```python
fig = px.scatter(df, x='x', y='y', template='plotly_dark')
# Built-in: plotly_white, plotly_dark, ggplot2, seaborn, simple_white
```
**Customization:** Control every aspect of appearance:
- Colors (discrete sequences, continuous scales)
- Fonts and text
- Axes (ranges, ticks, grids)
- Legends
- Margins and sizing
- Annotations and shapes
For complete layout and styling options, see [reference/layouts-styling.md](reference/layouts-styling.md).
### 3. Interactivity
Built-in interactive features:
- Hover tooltips with customizable data
- Pan and zoom
- Legend toggling
- Box/lasso selection
- Rangesliders for time series
- Buttons and dropdowns
- Animations
```python
# Custom hover template
fig.update_traces(
    hovertemplate='<b>%{x}</b><br>Value: %{y:.2f}<extra></extra>'
)
# Add rangeslider
fig.update_xaxes(rangeslider_visible=True)
# Animations
fig = px.scatter(df, x='x', y='y', animation_frame='year')
```
For complete interactivity guide, see [reference/export-interactivity.md](reference/export-interactivity.md).
### 4. Export Options
**Interactive HTML:**
```python
fig.write_html('chart.html')                       # Full standalone
fig.write_html('chart.html', include_plotlyjs='cdn')  # Smaller file
```
**Static Images (requires kaleido):**
```bash
uv pip install kaleido
```
```python
fig.write_image('chart.png')   # PNG
fig.write_image('chart.pdf')   # PDF
fig.write_image('chart.svg')   # SVG
```
For complete export options, see [reference/export-interactivity.md](reference/export-interactivity.md).
## Common Workflows
### Scientific Data Visualization
```python
import plotly.express as px
# Scatter plot with trendline
fig = px.scatter(df, x='temperature', y='yield', trendline='ols')
# Heatmap from matrix
fig = px.imshow(correlation_matrix, text_auto=True, color_continuous_scale='RdBu')
# 3D surface plot
import plotly.graph_objects as go
fig = go.Figure(data=[go.Surface(z=z_data, x=x_data, y=y_data)])
```
### Statistical Analysis
```python
# Distribution comparison
fig = px.histogram(df, x='values', color='group', marginal='box', nbins=30)
# Box plot with all points
fig = px.box(df, x='category', y='value', points='all')
# Violin plot
fig = px.violin(df, x='group', y='measurement', box=True)
```
### Time Series and Financial
```python
# Time series with rangeslider
fig = px.line(df, x='date', y='price')
fig.update_xaxes(rangeslider_visible=True)
# Candlestick chart
import plotly.graph_objects as go
fig = go.Figure(data=[go.Candlestick(
    x=df['date'],
    open=df['open'],
    high=df['high'],
    low=df['low'],
    close=df['close']
)])
```
### Multi-Plot Dashboards
```python
from plotly.subplots import make_subplots
import plotly.graph_objects as go
fig = make_subplots(
    rows=2, cols=2,
    subplot_titles=('Scatter', 'Bar', 'Histogram', 'Box'),
    specs=[[{'type': 'scatter'}, {'type': 'bar'}],
           [{'type': 'histogram'}, {'type': 'box'}]]
)
fig.add_trace(go.Scatter(x=[1, 2, 3], y=[4, 5, 6]), row=1, col=1)
fig.add_trace(go.Bar(x=['A', 'B'], y=[1, 2]), row=1, col=2)
fig.add_trace(go.Histogram(x=data), row=2, col=1)
fig.add_trace(go.Box(y=data), row=2, col=2)
fig.update_layout(height=800, showlegend=False)
```
## Integration with Dash
For interactive web applications, use Dash (Plotly's web app framework):
```bash
uv pip install dash
```
```python
import dash
from dash import dcc, html
import plotly.express as px
app = dash.Dash(__name__)
fig = px.scatter(df, x='x', y='y')
app.layout = html.Div([
    html.H1('Dashboard'),
    dcc.Graph(figure=fig)
])
app.run(debug=True)  # `run_server` is deprecated in recent Dash releases
```
## Reference Files
- **[plotly-express.md](reference/plotly-express.md)** - High-level API for quick visualizations
- **[graph-objects.md](reference/graph-objects.md)** - Low-level API for fine-grained control
- **[chart-types.md](reference/chart-types.md)** - Complete catalog of 40+ chart types with examples
- **[layouts-styling.md](reference/layouts-styling.md)** - Subplots, templates, colors, customization
- **[export-interactivity.md](reference/export-interactivity.md)** - Export options and interactive features
## Additional Resources
- Official documentation: https://plotly.com/python/
- API reference: https://plotly.com/python-api-reference/
- Community forum: https://community.plotly.com/
Data & Analytics

plotly

Interactive visualization library. Use when you need hover info, zoom, pan, or web-embeddable charts. Best for dashboards, exploratory analysis, and presentations. For static publication figures use matplotlib or scientific-visualization.

# DrugBank Database
## Overview
DrugBank is a comprehensive bioinformatics and cheminformatics database containing detailed information on drugs and drug targets. This skill enables programmatic access to DrugBank data including ~9,591 drug entries (2,037 FDA-approved small molecules, 241 biotech drugs, 96 nutraceuticals, and 6,000+ experimental compounds) with 200+ data fields per entry.
## Core Capabilities
### 1. Data Access and Authentication
Download and access DrugBank data using Python with proper authentication. The skill provides guidance on:
- Installing and configuring the `drugbank-downloader` package
- Managing credentials securely via environment variables or config files
- Downloading specific or latest database versions
- Opening and parsing XML data efficiently
- Working with cached data to optimize performance
**When to use**: Setting up DrugBank access, downloading database updates, initial project configuration.
**Reference**: See `references/data-access.md` for detailed authentication, download procedures, API access, caching strategies, and troubleshooting.
### 2. Drug Information Queries
Extract comprehensive drug information from the database including identifiers, chemical properties, pharmacology, clinical data, and cross-references to external databases.
**Query capabilities**:
- Search by DrugBank ID, name, CAS number, or keywords
- Extract basic drug information (name, type, description, indication)
- Retrieve chemical properties (SMILES, InChI, molecular formula)
- Get pharmacology data (mechanism of action, pharmacodynamics, ADME)
- Access external identifiers (PubChem, ChEMBL, UniProt, KEGG)
- Build searchable drug datasets and export to DataFrames
- Filter drugs by type (small molecule, biotech, nutraceutical)
**When to use**: Retrieving specific drug information, building drug databases, pharmacology research, literature review, drug profiling.
**Reference**: See `references/drug-queries.md` for XML navigation, query functions, data extraction methods, and performance optimization.
### 3. Drug-Drug Interactions Analysis
Analyze drug-drug interactions (DDIs) including mechanism, clinical significance, and interaction networks for pharmacovigilance and clinical decision support.
**Analysis capabilities**:
- Extract all interactions for specific drugs
- Build bidirectional interaction networks
- Classify interactions by severity and mechanism
- Check interactions between drug pairs
- Identify drugs with most interactions
- Analyze polypharmacy regimens for safety
- Create interaction matrices and network graphs
- Perform community detection in interaction networks
- Calculate interaction risk scores
**When to use**: Polypharmacy safety analysis, clinical decision support, drug interaction prediction, pharmacovigilance research, identifying contraindications.
**Reference**: See `references/interactions.md` for interaction extraction, classification methods, network analysis, and clinical applications.
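For instance, once interactions have been extracted from the XML into simple (drug, drug, severity) tuples, a small networkx sketch covers the pairwise checks and hub analysis described above; the tuples here are illustrative, not DrugBank data.

```python
import networkx as nx

# Hypothetical pre-extracted interaction tuples (drug A, drug B, severity).
interactions = [
    ("Warfarin", "Aspirin", "major"),
    ("Warfarin", "Amiodarone", "major"),
    ("Simvastatin", "Amiodarone", "moderate"),
]

G = nx.Graph()
for a, b, severity in interactions:
    G.add_edge(a, b, severity=severity)

# Drugs with the most documented interactions in this toy network.
hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)

# Check every pair within a polypharmacy regimen.
regimen = ["Warfarin", "Simvastatin", "Amiodarone"]
pairwise = [(a, b, G[a][b]["severity"])
            for i, a in enumerate(regimen) for b in regimen[i + 1:]
            if G.has_edge(a, b)]
print(hubs, pairwise)
```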
### 4. Drug Targets and Pathways
Access detailed information about drug-protein interactions including targets, enzymes, transporters, carriers, and biological pathways.
**Target analysis capabilities**:
- Extract drug targets with actions (inhibitor, agonist, antagonist)
- Identify metabolic enzymes (CYP450, Phase II enzymes)
- Analyze transporters (uptake, efflux) for ADME studies
- Map drugs to biological pathways (SMPDB)
- Find drugs targeting specific proteins
- Identify drugs with shared targets for repurposing
- Analyze polypharmacology and off-target effects
- Extract Gene Ontology (GO) terms for targets
- Cross-reference with UniProt for protein data
**When to use**: Mechanism of action studies, drug repurposing research, target identification, pathway analysis, predicting off-target effects, understanding drug metabolism.
**Reference**: See `references/targets-pathways.md` for target extraction, pathway analysis, repurposing strategies, CYP450 profiling, and transporter analysis.
### 5. Chemical Properties and Similarity
Perform structure-based analysis including molecular similarity searches, property calculations, substructure searches, and ADMET predictions.
**Chemical analysis capabilities**:
- Extract chemical structures (SMILES, InChI, molecular formula)
- Calculate physicochemical properties (MW, logP, PSA, H-bonds)
- Apply Lipinski's Rule of Five and Veber's rules
- Calculate Tanimoto similarity between molecules
- Generate molecular fingerprints (Morgan, MACCS, topological)
- Perform substructure searches with SMARTS patterns
- Find structurally similar drugs for repurposing
- Create similarity matrices for drug clustering
- Predict oral absorption and BBB permeability
- Analyze chemical space with PCA and clustering
- Export chemical property databases
**When to use**: Structure-activity relationship (SAR) studies, drug similarity searches, QSAR modeling, drug-likeness assessment, ADMET prediction, chemical space exploration.
**Reference**: See `references/chemical-analysis.md` for structure extraction, similarity calculations, fingerprint generation, ADMET predictions, and chemical space analysis.
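As a quick illustration of the rule-of-five and similarity checks listed above, a minimal RDKit sketch; the structures are well-known examples, whereas a real workflow would pull SMILES from parsed DrugBank records.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem, Crippen, Descriptors, Lipinski

aspirin = Chem.MolFromSmiles("CC(=O)OC1=CC=CC=C1C(=O)O")
ibuprofen = Chem.MolFromSmiles("CC(C)CC1=CC=C(C=C1)C(C)C(=O)O")

# Lipinski's Rule of Five on one structure.
ro5_ok = (Descriptors.MolWt(aspirin) <= 500
          and Crippen.MolLogP(aspirin) <= 5
          and Lipinski.NumHDonors(aspirin) <= 5
          and Lipinski.NumHAcceptors(aspirin) <= 10)

# Tanimoto similarity on Morgan fingerprints.
fp_a = AllChem.GetMorganFingerprintAsBitVect(aspirin, radius=2, nBits=2048)
fp_b = AllChem.GetMorganFingerprintAsBitVect(ibuprofen, radius=2, nBits=2048)
similarity = DataStructs.TanimotoSimilarity(fp_a, fp_b)
print(f"Ro5 pass: {ro5_ok}, Tanimoto: {similarity:.2f}")
```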
## Typical Workflows
### Drug Discovery Workflow
1. Use `data-access.md` to download and access latest DrugBank data
2. Use `drug-queries.md` to build searchable drug database
3. Use `chemical-analysis.md` to find similar compounds
4. Use `targets-pathways.md` to identify shared targets
5. Use `interactions.md` to check safety of candidate combinations
### Polypharmacy Safety Analysis
1. Use `drug-queries.md` to look up patient medications
2. Use `interactions.md` to check all pairwise interactions
3. Use `interactions.md` to classify interaction severity
4. Use `interactions.md` to calculate overall risk score
5. Use `targets-pathways.md` to understand interaction mechanisms
### Drug Repurposing Research
1. Use `targets-pathways.md` to find drugs with shared targets
2. Use `chemical-analysis.md` to find structurally similar drugs
3. Use `drug-queries.md` to extract indication and pharmacology data
4. Use `interactions.md` to assess potential combination therapies
### Pharmacology Study
1. Use `drug-queries.md` to extract drug of interest
2. Use `targets-pathways.md` to identify all protein interactions
3. Use `targets-pathways.md` to map to biological pathways
4. Use `chemical-analysis.md` to predict ADMET properties
5. Use `interactions.md` to identify potential contraindications
## Installation Requirements
### Python Packages
```bash
uv pip install drugbank-downloader  # Core access
uv pip install bioversions          # Latest version detection
uv pip install lxml                 # XML parsing optimization
uv pip install pandas               # Data manipulation
uv pip install rdkit                # Chemical informatics (for similarity)
uv pip install networkx             # Network analysis (for interactions)
uv pip install scikit-learn         # ML/clustering (for chemical space)
```
### Account Setup
1. Create free account at go.drugbank.com
2. Accept license agreement (free for academic use)
3. Obtain username and password credentials
4. Configure credentials as documented in `references/data-access.md`
## Data Version and Reproducibility
Always specify the DrugBank version for reproducible research:
```python
from drugbank_downloader import download_drugbank
path = download_drugbank(version='5.1.10')  # Specify exact version
```
Document the version used in publications and analysis scripts.
## Best Practices
1. **Credentials**: Use environment variables or config files, never hardcode
2. **Versioning**: Specify exact database version for reproducibility
3. **Caching**: Cache parsed data to avoid re-downloading and re-parsing
4. **Namespaces**: Handle XML namespaces properly when parsing
5. **Validation**: Validate chemical structures with RDKit before use
6. **Cross-referencing**: Use external identifiers (UniProt, PubChem) for integration
7. **Clinical Context**: Always consider clinical context when interpreting interaction data
8. **License Compliance**: Ensure proper licensing for your use case
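To illustrate the first two practices above, a minimal sketch that reads credentials from environment variables and pins the database version; the variable names are illustrative, and the exact `download_drugbank` keyword usage should be confirmed against `references/data-access.md`.
```python
import os
from drugbank_downloader import download_drugbank

# Credentials from environment variables (never hardcoded); names are illustrative.
username = os.environ["DRUGBANK_USERNAME"]
password = os.environ["DRUGBANK_PASSWORD"]

# Pin the exact version so downstream analysis is reproducible.
path = download_drugbank(username=username, password=password, version="5.1.10")
print("DrugBank archive cached at:", path)
```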
## Reference Documentation
All detailed implementation guidance is organized in modular reference files:
- **references/data-access.md**: Authentication, download, parsing, API access, caching
- **references/drug-queries.md**: XML navigation, query methods, data extraction, indexing
- **references/interactions.md**: DDI extraction, classification, network analysis, safety scoring
- **references/targets-pathways.md**: Target/enzyme/transporter extraction, pathway mapping, repurposing
- **references/chemical-analysis.md**: Structure extraction, similarity, fingerprints, ADMET prediction
Load these references as needed based on your specific analysis requirements.
R&D

drugbank-database

Access and analyze comprehensive drug information from the DrugBank database including drug properties, interactions, targets, pathways, chemical structures, and pharmacology data. This skill should be used when working with pharmaceutical data, drug discovery research, pharmacology studies, drug-drug interaction analysis, target identification, chemical similarity searches, ADMET predictions, or any task requiring detailed drug and drug target information from DrugBank.

# Frameworks on Netlify
Netlify supports any framework that produces static output. For frameworks with server-side capabilities (SSR, API routes, middleware), an adapter or plugin translates the framework's server-side code into Netlify Functions and Edge Functions automatically.
## How It Works
During build, the framework adapter writes files to `.netlify/v1/` — functions, edge functions, redirects, and configuration. Netlify reads these to deploy the site. You do not need to write Netlify Functions manually when using a framework adapter for server-side features.
## Detecting Your Framework
Check these files to determine the framework:
| File | Framework |
|---|---|
| `astro.config.*` | Astro |
| `next.config.*` | Next.js |
| `nuxt.config.*` | Nuxt |
| `vite.config.*` + `react-router` | Vite + React (SPA or Remix) |
| `app.config.*` + `@tanstack/react-start` | TanStack Start |
| `svelte.config.*` | SvelteKit |
## Framework Reference Guides
Each framework has specific adapter/plugin requirements and local dev patterns:
- **Vite + React (SPA or with server routes)**: See [references/vite.md](references/vite.md)
- **Astro**: See [references/astro.md](references/astro.md)
- **TanStack Start**: See [references/tanstack.md](references/tanstack.md)
- **Next.js**: See [references/nextjs.md](references/nextjs.md)
## General Patterns
### Client-Side Routing (SPA)
For single-page apps with client-side routing, add a catch-all redirect:
```toml
# netlify.toml
[[redirects]]
from = "/*"
to = "/index.html"
status = 200
```
### Custom 404 Pages
- **Static sites**: Create a `404.html` in your publish directory. Netlify serves it automatically for unmatched routes.
- **SSR frameworks**: Handle 404s in the framework's routing (the adapter maps this to Netlify's function routing).
### Environment Variables in Frameworks
Each framework exposes environment variables to client-side code differently:
| Framework | Client prefix | Access pattern |
|---|---|---|
| Vite / React | `VITE_` | `import.meta.env.VITE_VAR` |
| Astro | `PUBLIC_` | `import.meta.env.PUBLIC_VAR` |
| Next.js | `NEXT_PUBLIC_` | `process.env.NEXT_PUBLIC_VAR` |
| Nuxt | `NUXT_PUBLIC_` | `useRuntimeConfig().public.var` |
Server-side code in all frameworks can access variables via `process.env.VAR` or `Netlify.env.get("VAR")`.
Data & Analytics

netlify-frameworks

Guide for deploying web frameworks on Netlify. Use when setting up a framework project (Vite/React, Astro, TanStack Start, Next.js, Nuxt, SvelteKit, Remix) for Netlify deployment, configuring adapters or plugins, or troubleshooting framework-specific Netlify integration. Covers what Netlify needs from each framework and how adapters handle server-side rendering.

You are an autonomous experiment tracking and reproducibility analyst. Do NOT ask the user questions. Analyze and act.
TARGET:
$ARGUMENTS
If arguments are provided, use them to focus the analysis (e.g., specific ML pipelines, experiment frameworks, or reproducibility concerns). If no arguments, scan the current project for experiment tracking patterns, parameter management, and reproducibility infrastructure.
============================================================
PHASE 1: EXPERIMENT INFRASTRUCTURE DISCOVERY
============================================================
Step 1.1 -- Technology Stack Detection
Identify experiment tracking tools in the codebase:
- `mlflow` / `mlruns/` directory -> MLflow experiment tracking
- `wandb/` / `.wandb/` -> Weights & Biases integration
- `dvc.yaml` / `dvc.lock` / `.dvc/` -> DVC (Data Version Control)
- `sacred/` config or `@ex.config` decorators -> Sacred framework
- `neptune` imports -> Neptune.ai
- `comet_ml` imports -> Comet ML
- `tensorboard/` / `events.out.tfevents.*` -> TensorBoard logging
- `params.yaml` / `hydra/` configs -> Hydra configuration management
- `.guild/` -> Guild AI
- Custom tracking: database tables, CSV logs, JSON result files
- Jupyter notebooks with inline experiment records
Step 1.2 -- Experiment Taxonomy
Map the experiment landscape:
- Experiment types: training runs, hyperparameter sweeps, ablation studies, A/B tests
- Experiment hierarchy: project -> experiment -> run -> step
- Naming conventions and organizational structure
- Tagging and categorization schemes
- Experiment lifecycle states: draft, running, completed, failed, archived
Step 1.3 -- Data Version Control
Assess data versioning practices:
- Dataset versioning strategy: DVC, Git-LFS, Delta Lake, LakeFS
- Training/validation/test split reproducibility
- Data lineage tracking: source -> transform -> dataset
- Feature store integration: Feast, Tecton, Hopsworks
- Data schema evolution and backward compatibility
============================================================
PHASE 2: PARAMETER MANAGEMENT ANALYSIS
============================================================
Step 2.1 -- Configuration Architecture
Evaluate how parameters are managed:
- Configuration format: YAML, JSON, TOML, Python dataclasses, Hydra
- Hierarchy: defaults, overrides, command-line, environment variables
- Type validation and schema enforcement
- Configuration composition: Hydra multirun, OmegaConf interpolation
- Secret management: API keys and credentials separated from config
Step 2.2 -- Hyperparameter Tracking
Assess hyperparameter logging completeness:
- All hyperparameters logged with each experiment run
- Learning rate schedules, optimizer configs, architecture params captured
- Random seeds tracked and reproducible
- Hardware and environment metadata logged (GPU type, CUDA version, library versions)
- Batch size, data augmentation parameters, preprocessing steps recorded
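For reference when auditing, a minimal sketch (assuming MLflow) of what complete parameter and metadata logging can look like; the parameter names and tags are illustrative, not a required schema.
```python
import platform
import random

import mlflow

SEED = 42
random.seed(SEED)

params = {
    "learning_rate": 3e-4,   # illustrative hyperparameters
    "batch_size": 64,
    "optimizer": "adamw",
    "lr_schedule": "cosine",
    "seed": SEED,
}

with mlflow.start_run(run_name="baseline"):
    mlflow.log_params(params)
    # Environment metadata so the run can be tied back to its hardware/software.
    mlflow.set_tags({
        "python_version": platform.python_version(),
        "hostname": platform.node(),
    })
    for epoch in range(3):
        mlflow.log_metric("train_loss", 1.0 / (epoch + 1), step=epoch)
```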
Step 2.3 -- Parameter Search
Evaluate search strategy implementation:
- Search methods: grid, random, Bayesian optimization, Hyperband, BOHB
- Search space definition: ranges, distributions, conditional params
- Early stopping criteria and pruning: Optuna, Ray Tune
- Multi-objective optimization support
- Search history persistence and resumability
============================================================
PHASE 3: RESULT LOGGING AND METRICS
============================================================
Step 3.1 -- Metric Logging
Assess metric capture completeness:
- Training metrics: loss, accuracy, learning rate per step/epoch
- Evaluation metrics: precision, recall, F1, AUC, BLEU, ROUGE, custom
- System metrics: GPU utilization, memory, throughput, training time
- Custom metric definitions and calculation logic
- Metric logging frequency and granularity
Step 3.2 -- Artifact Management
Evaluate artifact tracking:
- Model checkpoints: format, frequency, best-model selection
- Plots and visualizations: confusion matrices, ROC curves, loss curves
- Prediction samples and error analysis artifacts
- Environment snapshots: pip freeze, conda export, Docker images
- Log files and stdout/stderr capture
Step 3.3 -- Comparison and Visualization
Check comparison capabilities:
- Run-to-run comparison: metric tables, overlay charts
- Parallel coordinate plots for hyperparameter visualization
- Statistical significance testing between runs
- Leaderboard or best-run tracking
- Dashboard and reporting integration
============================================================
PHASE 4: REPRODUCIBILITY ASSESSMENT
============================================================
Step 4.1 -- Computational Reproducibility
Evaluate reproducibility controls:
- Random seed management: global, per-operation, deterministic mode
- Environment pinning: exact library versions, system dependencies
- Containerization: Dockerfile, docker-compose, Singularity for HPC
- Hardware specification documentation: GPU model, driver version
- Non-deterministic operation handling: CUDA non-determinism, parallel data loading
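A minimal seed-management sketch, assuming a PyTorch stack; even with these controls some CUDA kernels remain non-deterministic, which is exactly the gap this step should flag.
```python
import os
import random

import numpy as np
import torch


def seed_everything(seed: int = 42) -> None:
    """Best-effort determinism; some CUDA ops stay non-deterministic regardless."""
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.benchmark = False
    # Opt into deterministic kernels where the framework supports it.
    torch.use_deterministic_algorithms(True, warn_only=True)


seed_everything(42)
```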
Step 4.2 -- Code-Data-Model Linkage
Check artifact linkage integrity:
- Git commit SHA linked to each experiment run
- Dataset version/hash linked to each run
- Model artifact linked back to exact code + data + params
- End-to-end lineage graph: data -> code -> model -> metrics
- Ability to recreate any historical run from stored metadata
Step 4.3 -- Reproducibility Scoring
Score reproducibility on a 0-5 scale per dimension:
- Code versioning: Is the exact code for each run recoverable?
- Data versioning: Is the exact dataset for each run recoverable?
- Environment capture: Can the compute environment be recreated?
- Parameter logging: Are all parameters recorded completely?
- Result persistence: Are all metrics and artifacts preserved?
- Documentation: Are experiment purposes and conclusions recorded?
Compute overall reproducibility score (0-30).
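A small sketch of the aggregation, assuming each dimension has already been scored 0-5 during the audit; the values are placeholders.
```python
# Each dimension is scored 0-5 during the audit; values here are placeholders.
dimension_scores = {
    "code_versioning": 4,
    "data_versioning": 2,
    "environment_capture": 3,
    "parameter_logging": 5,
    "result_persistence": 4,
    "documentation": 1,
}

overall = sum(dimension_scores.values())  # 0-30
print(f"Reproducibility score: {overall}/30")
for name, score in sorted(dimension_scores.items(), key=lambda kv: kv[1]):
    if score <= 2:
        print(f"  Gap: {name} ({score}/5)")
```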
============================================================
PHASE 5: PIPELINE AND WORKFLOW ANALYSIS
============================================================
Step 5.1 -- Pipeline Architecture
Evaluate ML pipeline structure:
- Pipeline definition tool: Airflow, Prefect, Kubeflow, Metaflow, custom
- DAG structure: data prep -> feature engineering -> training -> evaluation -> deployment
- Pipeline versioning and parameterization
- Caching and incremental computation
- Pipeline monitoring and alerting
Step 5.2 -- Training Orchestration
Assess training infrastructure:
- Distributed training support: data parallel, model parallel, pipeline parallel
- Resource scheduling: GPU allocation, preemption, queueing
- Checkpoint and resume from failure
- Multi-experiment orchestration: sweeps, ensemble training
- Cost tracking and budget management for cloud compute
Step 5.3 -- Model Registry
Evaluate model lifecycle management:
- Model registry: MLflow Model Registry, custom, Vertex AI, SageMaker
- Model versioning and stage transitions: staging, production, archived
- Model metadata: metrics, lineage, owner, description
- Approval workflows for production promotion
- Model serving integration: batch, real-time, edge
============================================================
PHASE 6: COLLABORATION AND GOVERNANCE
============================================================
Step 6.1 -- Team Collaboration
Assess collaboration patterns:
- Shared experiment visibility across team members
- Experiment annotation and commenting
- Knowledge capture: experiment conclusions, failed approach documentation
- Notebook sharing and review workflows
- Onboarding: can a new team member understand past experiments?
Step 6.2 -- Governance and Compliance
Evaluate governance controls:
- Experiment access controls and permissions
- Audit trail for model decisions: model cards, datasheets
- Bias and fairness tracking across experiment iterations
- Data privacy compliance in experiment data (PII handling)
- Retention policies for experiment artifacts
============================================================
PHASE 7: WRITE REPORT
============================================================
Write analysis to `docs/experiment-tracking-analysis.md` (create `docs/` if needed).
Include: Executive Summary, Experiment Infrastructure Inventory, Parameter Management
Assessment, Metric Logging Evaluation, Reproducibility Scorecard (0-30), Pipeline
Architecture Review, Collaboration Assessment, Prioritized Recommendations.
============================================================
OUTPUT
============================================================
## Experiment Tracking Analysis Complete
- Report: `docs/experiment-tracking-analysis.md`
- Reproducibility score: [X]/30
- Tracking tools identified: [list]
- Experiments cataloged: [count]
- Reproducibility gaps: [count]
### Summary Table
| Area | Status | Priority |
|------|--------|----------|
| Parameter Management | [PASS/WARN/FAIL] | [P1-P4] |
| Metric Logging | [PASS/WARN/FAIL] | [P1-P4] |
| Data Versioning | [PASS/WARN/FAIL] | [P1-P4] |
| Code-Data Linkage | [PASS/WARN/FAIL] | [P1-P4] |
| Environment Capture | [PASS/WARN/FAIL] | [P1-P4] |
| Pipeline Architecture | [PASS/WARN/FAIL] | [P1-P4] |
| Model Registry | [PASS/WARN/FAIL] | [P1-P4] |
| Collaboration | [PASS/WARN/FAIL] | [P1-P4] |
NEXT STEPS:
- "Run `/research-data-management` to assess FAIR data principles for research outputs."
- "Run `/lab-automation` to evaluate instrument-to-experiment data pipelines."
- "Run `/codebase-health` to review code quality across the ML codebase."
DO NOT:
- Do NOT modify any experiment configurations, model artifacts, or tracking databases.
- Do NOT execute any training runs or trigger pipeline executions.
- Do NOT delete or archive any experiment records or artifacts.
- Do NOT assume reproducibility without verifying seed management and environment pinning.
- Do NOT skip governance assessment even for small research teams.
Security

experiment-tracking

Audit ML experiment tracking infrastructure for reproducibility gaps, parameter logging completeness, metric capture, artifact management, and pipeline orchestration. Covers MLflow, Weights and Biases, DVC, Sacred, Neptune, Hydra configs, model registries, and produces a reproducibility scorecard (0-30) with actionable fixes for data science teams.

# Deploy to Render
Render supports **Git-backed** services and **prebuilt Docker image** services.
This skill covers **Git-backed** flows:
1. **Blueprint Method** - Generate render.yaml for Infrastructure-as-Code deployments
2. **Direct Creation** - Create services instantly via MCP tools
Blueprints can also run a **prebuilt Docker image** by using `runtime: image`, but the `render.yaml` still must live in a Git repo.
If there is no Git remote, stop and ask the user to either:
- Create/push a Git remote (can be minimal if only the Blueprint is needed), or
- Use the Render Dashboard/API to deploy a prebuilt Docker image (MCP cannot create image-backed services).
## When to Use This Skill
Activate this skill when users want to:
- Deploy an application to Render
- Create a render.yaml Blueprint file
- Set up Render deployment for their project
- Host or publish their application on Render's cloud platform
- Create databases, cron jobs, or other Render resources
## Happy Path (New Users)
Use this short prompt sequence before deep analysis to reduce friction:
1. Ask whether they want to deploy from a Git repo or a prebuilt Docker image.
2. Ask whether Render should provision everything the app needs (based on what seems likely from the user's description) or only the app while they bring their own infra. If dependencies are unclear, ask a short follow-up to confirm whether they need a database, workers, cron, or other services.
Then proceed with the appropriate method below.
## Choose Your Source Path
**Git Repo Path:** Required for both Blueprint and Direct Creation. The repo must be pushed to GitHub, GitLab, or Bitbucket.
**Prebuilt Docker Image Path:** Supported by Render via image-backed services. This is **not** supported by MCP; use the Dashboard/API. Ask for:
- Image URL (registry + tag)
- Registry auth (if private)
- Service type (web/worker) and port
If the user chooses a Docker image, guide them to the Render Dashboard image deploy flow or ask them to add a Git remote (so you can use a Blueprint with `runtime: image`).
## Choose Your Deployment Method (Git Repo)
Both methods require a Git repository pushed to GitHub, GitLab, or Bitbucket. (If using `runtime: image`, the repo can be minimal and only contain `render.yaml`.)
| Method | Best For | Pros |
|--------|----------|------|
| **Blueprint** | Multi-service apps, IaC workflows | Version controlled, reproducible, supports complex setups |
| **Direct Creation** | Single services, quick deployments | Instant creation, no render.yaml file needed |
### Method Selection Heuristic
Use this decision rule by default unless the user requests a specific method. Analyze the codebase first; only ask if deployment intent is unclear (e.g., DB, workers, cron).
**Use Direct Creation (MCP) when ALL are true:**
- Single service (one web app or one static site)
- No separate worker/cron services
- No attached databases or Key Value
- Simple env vars only (no shared env groups)
If this path fits and MCP isn't configured yet, stop and guide MCP setup before proceeding.
**Use Blueprint when ANY are true:**
- Multiple services (web + worker, API + frontend, etc.)
- Databases, Redis/Key Value, or other datastores are required
- Cron jobs, background workers, or private services
- You want reproducible IaC or a render.yaml committed to the repo
- Monorepo or multi-env setup that needs consistent configuration
If unsure, ask a quick clarifying question, but default to Blueprint for safety. For a single service, strongly prefer Direct Creation via MCP and guide MCP setup if needed.
## Prerequisites Check
When starting a deployment, verify these requirements in order:
**1. Confirm Source Path (Git vs Docker)**
If using Git-based methods (Blueprint or Direct Creation), the repo must be pushed to GitHub/GitLab/Bitbucket. Blueprints that reference a prebuilt image still require a Git repo with `render.yaml`.
```bash
git remote -v
```
- If no remote exists, stop and ask the user to create/push a remote **or** switch to Docker image deploy.
**2. Check MCP Tools Availability (Preferred for Single-Service)**
MCP tools provide the best experience. Check if available by attempting:
```
list_services()
```
If MCP tools are available, you can skip CLI installation for most operations.
**3. Check Render CLI Installation (for Blueprint validation)**
```bash
render --version
```
If not installed, offer to install:
- macOS: `brew install render`
- Linux/macOS: `curl -fsSL https://raw.githubusercontent.com/render-oss/cli/main/bin/install.sh | sh`
**4. MCP Setup (if MCP isn't configured)**
If `list_services()` fails because MCP isn't configured, ask whether they want to set up MCP (preferred) or continue with the CLI fallback. If they choose MCP, ask which AI tool they're using, then provide the matching instructions below. Always use their API key.
### Cursor
Walk the user through these steps:
1) Get a Render API key:
```
https://dashboard.render.com/u/*/settings#api-keys
```
2) Add this to `~/.cursor/mcp.json` (replace `<YOUR_API_KEY>`):
```json
{
  "mcpServers": {
    "render": {
      "url": "https://mcp.render.com/mcp",
      "headers": {
        "Authorization": "Bearer <YOUR_API_KEY>"
      }
    }
  }
}
```
3) Restart Cursor, then retry `list_services()`.
### Claude Code
Walk the user through these steps:
1) Get a Render API key:
```
https://dashboard.render.com/u/*/settings#api-keys
```
2) Add the MCP server with Claude Code (replace `<YOUR_API_KEY>`):
```bash
claude mcp add --transport http render https://mcp.render.com/mcp --header "Authorization: Bearer <YOUR_API_KEY>"
```
3) Restart Claude Code, then retry `list_services()`.
### Codex
Walk the user through these steps:
1) Get a Render API key:
```
https://dashboard.render.com/u/*/settings#api-keys
```
2) Set it in their shell:
```bash
export RENDER_API_KEY="<YOUR_API_KEY>"
```
3) Add the MCP server with the Codex CLI:
```bash
codex mcp add render --url https://mcp.render.com/mcp --bearer-token-env-var RENDER_API_KEY
```
4) Restart Codex, then retry `list_services()`.
### Other Tools
If the user is on another AI app, direct them to the Render MCP docs for that tool's setup steps and install method.
### Workspace Selection
After MCP is configured, have the user set the active Render workspace with a prompt like:
```
Set my Render workspace to [WORKSPACE_NAME]
```
**5. Check Authentication (CLI fallback only)**
If MCP isn't available, use the CLI instead and verify you can access your account:
```bash
# Check if user is logged in (use -o json for non-interactive mode)
render whoami -o json
```
If `render whoami` fails or returns empty data, the CLI is not authenticated. The CLI won't always prompt automatically, so explicitly prompt the user to authenticate.
If neither an API key nor a browser login is configured, ask the user which method they prefer:
- **API Key (CLI)**: `export RENDER_API_KEY="rnd_xxxxx"` (Get from https://dashboard.render.com/u/*/settings#api-keys)
- **Login**: `render login` (Opens browser for OAuth)
**6. Check Workspace Context**
Verify the active workspace:
```
get_selected_workspace()
```
Or via CLI:
```bash
render workspace current -o json
```
To list available workspaces:
```
list_workspaces()
```
If user needs to switch workspaces, they must do so via Dashboard or CLI (`render workspace set`).
Once prerequisites are met, proceed with deployment workflow.
---
# Method 1: Blueprint Deployment (Recommended for Complex Apps)
## Blueprint Workflow
### Step 1: Analyze Codebase
Analyze the codebase to determine framework/runtime, build and start commands, required env vars, datastores, and port binding. Use the detailed checklists in [references/codebase-analysis.md](references/codebase-analysis.md).
### Step 2: Generate render.yaml
Create a `render.yaml` Blueprint file following the Blueprint specification.
Complete specification: [references/blueprint-spec.md](references/blueprint-spec.md)
**Key Points:**
- Always use `plan: free` unless user specifies otherwise
- Include ALL environment variables the app needs
- Mark secrets with `sync: false` (user fills these in Dashboard)
- Use appropriate service type: `web`, `worker`, `cron`, `static`, or `pserv`
- Use appropriate runtime: [references/runtimes.md](references/runtimes.md)
**Basic Structure:**
```yaml
services:
  - type: web
    name: my-app
    runtime: node
    plan: free
    buildCommand: npm ci
    startCommand: npm start
    envVars:
      - key: DATABASE_URL
        fromDatabase:
          name: postgres
          property: connectionString
      - key: JWT_SECRET
        sync: false  # User fills in Dashboard
databases:
  - name: postgres
    databaseName: myapp_db
    plan: free
```
**Service Types:**
- `web`: HTTP services, APIs, web applications (publicly accessible)
- `worker`: Background job processors (not publicly accessible)
- `cron`: Scheduled tasks that run on a cron schedule
- `static`: Static sites (HTML/CSS/JS served via CDN)
- `pserv`: Private services (internal only, within same account)
Service type details: [references/service-types.md](references/service-types.md)
Runtime options: [references/runtimes.md](references/runtimes.md)
Template examples: [assets/](assets/)
### Step 2.5: Immediate Next Steps (Always Provide)
After creating `render.yaml`, always give the user a short, explicit checklist and run validation immediately when the CLI is available:
1. **Authenticate (CLI)**: run `render whoami -o json` (if not logged in, run `render login` or set `RENDER_API_KEY`)
2. **Validate (recommended)**: run `render blueprints validate`
   - If the CLI isn't installed, offer to install it and provide the command.
3. **Commit + push**: `git add render.yaml && git commit -m "Add Render deployment configuration" && git push origin main`
4. **Open Dashboard**: Use the Blueprint deeplink and complete Git OAuth if prompted
5. **Fill secrets**: Set env vars marked `sync: false`
6. **Deploy**: Click "Apply" and monitor the deploy
### Step 3: Validate Configuration
Validate the render.yaml file to catch errors before deployment. If the CLI is installed, run the commands directly; only prompt the user if the CLI is missing:
```bash
render whoami -o json  # Ensure CLI is authenticated (won't always prompt)
render blueprints validate
```
Fix any validation errors before proceeding. Common issues:
- Missing required fields (`name`, `type`, `runtime`)
- Invalid runtime values
- Incorrect YAML syntax
- Invalid environment variable references
Configuration guide: [references/configuration-guide.md](references/configuration-guide.md)
### Step 4: Commit and Push
**IMPORTANT:** You must merge the `render.yaml` file into your repository before deploying.
Ensure the `render.yaml` file is committed and pushed to your Git remote:
```bash
git add render.yaml
git commit -m "Add Render deployment configuration"
git push origin main
```
If there is no Git remote yet, stop here and guide the user to create a GitHub/GitLab/Bitbucket repo, add it as `origin`, and push before continuing.
**Why this matters:** The Dashboard deeplink will read the render.yaml from your repository. If the file isn't merged and pushed, Render won't find the configuration and deployment will fail.
Verify the file is in your remote repository before proceeding to the next step.
### Step 5: Generate Deeplink
Get the Git repository URL:
```bash
git remote get-url origin
```
This will return a URL from your Git provider. **If the URL is SSH format, convert it to HTTPS:**
| SSH Format | HTTPS Format |
|------------|--------------|
| `git@github.com:user/repo.git` | `https://github.com/user/repo` |
| `git@gitlab.com:user/repo.git` | `https://gitlab.com/user/repo` |
| `git@bitbucket.org:user/repo.git` | `https://bitbucket.org/user/repo` |
**Conversion pattern:** Replace `git@<host>:` with `https://<host>/` and remove `.git` suffix.
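If you want to script the conversion rather than do it by hand, a small helper sketch (illustrative, not part of the Render CLI):
```python
import re

def ssh_to_https(remote_url: str) -> str:
    """Convert git@<host>:user/repo.git to https://<host>/user/repo."""
    match = re.match(r"git@([^:]+):(.+?)(\.git)?$", remote_url)
    if match:
        host, path = match.group(1), match.group(2)
        return f"https://{host}/{path}"
    # Already HTTPS (or another scheme): just strip a trailing .git.
    return re.sub(r"\.git$", "", remote_url)

print(ssh_to_https("git@github.com:user/repo.git"))
# -> https://github.com/user/repo
```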
Format the Dashboard deeplink using the HTTPS repository URL:
```
https://dashboard.render.com/blueprint/new?repo=<REPOSITORY_URL>
```
Example:
```
https://dashboard.render.com/blueprint/new?repo=https://github.com/username/repo-name
```
### Step 6: Guide User
**CRITICAL:** Ensure the user has merged and pushed the render.yaml file to their repository before clicking the deeplink. If the file isn't in the repository, Render cannot read the Blueprint configuration and deployment will fail.
Provide the deeplink to the user with these instructions:
1. **Verify render.yaml is merged** - Confirm the file exists in your repository on GitHub/GitLab/Bitbucket
2. Click the deeplink to open Render Dashboard
3. Complete Git provider OAuth if prompted
4. Name the Blueprint (or use default from render.yaml)
5. Fill in secret environment variables (marked with `sync: false`)
6. Review services and databases configuration
7. Click "Apply" to deploy
The deployment will begin automatically. Users can monitor progress in the Render Dashboard.
### Step 7: Verify Deployment
After the user deploys via Dashboard, verify everything is working.
**Check deployment status via MCP:**
```
list_deploys(serviceId: "<service-id>", limit: 1)
```
Look for `status: "live"` to confirm successful deployment.
**Check for runtime errors (wait 2-3 minutes after deploy):**
```
list_logs(resource: ["<service-id>"], level: ["error"], limit: 20)
```
**Check service health metrics:**
```
get_metrics(
  resourceId: "<service-id>",
  metricTypes: ["http_request_count", "cpu_usage", "memory_usage"]
)
```
If errors are found, proceed to the **Post-deploy verification and basic triage** section below.
---
# Method 2: Direct Service Creation (Quick Single-Service Deployments)
For simple deployments without Infrastructure-as-Code, create services directly via MCP tools.
## When to Use Direct Creation
- Single web service or static site
- Quick prototypes or demos
- When you don't need a render.yaml file in your repo
- Adding databases or cron jobs to existing projects
## Prerequisites for Direct Creation
**Repository must be pushed to a Git provider.** Render clones your repository to build and deploy services.
```bash
git remote -v  # Verify remote exists
git push origin main  # Ensure code is pushed
```
Supported providers: GitHub, GitLab, Bitbucket
If no remote exists, stop and ask the user to create/push a remote or switch to Docker image deploy.
**Note:** MCP does not support creating image-backed services. Use the Dashboard/API for prebuilt Docker image deploys.
## Direct Creation Workflow
Use the concise steps below, and refer to [references/direct-creation.md](references/direct-creation.md) for full MCP command examples and follow-on configuration.
### Step 1: Analyze Codebase
Use [references/codebase-analysis.md](references/codebase-analysis.md) to determine runtime, build/start commands, env vars, and datastores.
### Step 2: Create Resources via MCP
Create the service (web or static) and any required databases or key-value stores. See [references/direct-creation.md](references/direct-creation.md).
If MCP returns an error about missing Git credentials or repo access, stop and guide the user to connect their Git provider in the Render Dashboard, then retry.
### Step 3: Configure Environment Variables
Add required env vars via MCP after creation. See [references/direct-creation.md](references/direct-creation.md).
Remind the user that secrets can be set in the Dashboard if they prefer not to pass them via MCP.
### Step 4: Verify Deployment
Check deploy status, logs, and metrics. See [references/direct-creation.md](references/direct-creation.md).
---
For service discovery, configuration details, quick commands, and common issues, see [references/deployment-details.md](references/deployment-details.md).
---
# Post-deploy verification and basic triage (All Methods)
Keep this short and repeatable. If any check fails, fix it before redeploying.
1. Confirm the latest deploy is `live` and serving traffic
2. Hit the health endpoint (or root) and verify a 200 response
3. Scan recent error logs for a clear failure signature
4. Verify required env vars and port binding (`0.0.0.0:$PORT`)
Detailed checklist and commands: [references/post-deploy-checks.md](references/post-deploy-checks.md)
If the service fails to start or health checks time out, use the basic triage guide:
[references/troubleshooting-basics.md](references/troubleshooting-basics.md)
Optional: If you need deeper diagnostics (metrics/DB checks/error catalog), suggest installing the
`render-debug` skill. It is not required for the core deploy flow.
Data & Analytics

render-deploy

Deploy applications to Render by analyzing codebases, generating render.yaml Blueprints, and providing Dashboard deeplinks. Use when the user wants to deploy, host, publish, or set up their application on Render's cloud platform.

# Churn Prevention
You are an expert in SaaS retention and churn prevention. Your goal is to help reduce both voluntary churn (customers choosing to cancel) and involuntary churn (failed payments) through well-designed cancel flows, dynamic save offers, proactive retention, and dunning strategies.
## Before Starting
**Check for product marketing context first:**
If `.agents/product-marketing-context.md` exists (or `.claude/product-marketing-context.md` in older setups), read it before asking questions. Use that context and only ask for information not already covered or specific to this task.
Gather this context (ask if not provided):
### 1. Current Churn Situation
- What's your monthly churn rate? (Voluntary vs. involuntary if known)
- How many active subscribers?
- What's the average MRR per customer?
- Do you have a cancel flow today, or does cancel happen instantly?
### 2. Billing & Platform
- What billing provider? (Stripe, Chargebee, Paddle, Recurly, Braintree)
- Monthly, annual, or both billing intervals?
- Do you support plan pausing or downgrades?
- Any existing retention tooling? (Churnkey, ProsperStack, Raaft)
### 3. Product & Usage Data
- Do you track feature usage per user?
- Can you identify engagement drop-offs?
- Do you have cancellation reason data from past churns?
- What's your activation metric? (What do retained users do that churned users don't?)
### 4. Constraints
- B2B or B2C? (Affects flow design)
- Self-serve cancellation required? (Some regulations mandate easy cancel)
- Brand tone for offboarding? (Empathetic, direct, playful)
---
## How This Skill Works
Churn has two types requiring different strategies:
| Type | Cause | Solution |
|------|-------|----------|
| **Voluntary** | Customer chooses to cancel | Cancel flows, save offers, exit surveys |
| **Involuntary** | Payment fails | Dunning emails, smart retries, card updaters |
Voluntary churn is typically 50-70% of total churn. Involuntary churn is 30-50% but is often easier to fix.
This skill supports three modes:
1. **Build a cancel flow** — Design from scratch with survey, save offers, and confirmation
2. **Optimize an existing flow** — Analyze cancel data and improve save rates
3. **Set up dunning** — Failed payment recovery with retries and email sequences
---
## Cancel Flow Design
### The Cancel Flow Structure
Every cancel flow follows this sequence:
```
Trigger → Survey → Dynamic Offer → Confirmation → Post-Cancel
```
**Step 1: Trigger**
Customer clicks "Cancel subscription" in account settings.
**Step 2: Exit Survey**
Ask why they're cancelling. This determines which save offer to show.
**Step 3: Dynamic Save Offer**
Present a targeted offer based on their reason (discount, pause, downgrade, etc.)
**Step 4: Confirmation**
If they still want to cancel, confirm clearly with end-of-billing-period messaging.
**Step 5: Post-Cancel**
Set expectations, offer easy reactivation path, trigger win-back sequence.
### Exit Survey Design
The exit survey is the foundation. Good reason categories:
| Reason | What It Tells You |
|--------|-------------------|
| Too expensive | Price sensitivity, may respond to discount or downgrade |
| Not using it enough | Low engagement, may respond to pause or onboarding help |
| Missing a feature | Product gap, show roadmap or workaround |
| Switching to competitor | Competitive pressure, understand what they offer |
| Technical issues / bugs | Product quality, escalate to support |
| Temporary / seasonal need | Usage pattern, offer pause |
| Business closed / changed | Unavoidable, learn and let go gracefully |
| Other | Catch-all, include free text field |
**Survey best practices:**
- 1 question, single-select with optional free text
- 5-8 reason options max (avoid decision fatigue)
- Put most common reasons first (review data quarterly)
- Don't make it feel like a guilt trip
- "Help us improve" framing works better than "Why are you leaving?"
### Dynamic Save Offers
The key insight: **match the offer to the reason.** A discount won't save someone who isn't using the product. A feature roadmap won't save someone who can't afford it.
**Offer-to-reason mapping:**
| Cancel Reason | Primary Offer | Fallback Offer |
|---------------|---------------|----------------|
| Too expensive | Discount (20-30% for 2-3 months) | Downgrade to lower plan |
| Not using it enough | Pause (1-3 months) | Free onboarding session |
| Missing feature | Roadmap preview + timeline | Workaround guide |
| Switching to competitor | Competitive comparison + discount | Feedback session |
| Technical issues | Escalate to support immediately | Credit + priority fix |
| Temporary / seasonal | Pause subscription | Downgrade temporarily |
| Business closed | Skip offer (respect the situation) | (none) |
### Save Offer Types
**Discount**
- 20-30% off for 2-3 months is the sweet spot
- Avoid 50%+ discounts (trains customers to cancel for deals)
- Time-limit the offer ("This offer expires when you leave this page")
- Show the dollar amount saved, not just the percentage
**Pause subscription**
- 1-3 month pause maximum (longer pauses rarely reactivate)
- 60-80% of pausers eventually return to active
- Auto-reactivation with advance notice email
- Keep their data and settings intact
**Plan downgrade**
- Offer a lower tier instead of full cancellation
- Show what they keep vs. what they lose
- Position as "right-size your plan" not "downgrade"
- Easy path back up when ready
**Feature unlock / extension**
- Unlock a premium feature they haven't tried
- Extend trial of a higher tier
- Works best for "not getting enough value" reasons
**Personal outreach**
- For high-value accounts (top 10-20% by MRR)
- Route to customer success for a call
- Personal email from founder for smaller companies
### Cancel Flow UI Patterns
```
┌─────────────────────────────────────┐
│  We're sorry to see you go          │
│                                     │
│  What's the main reason you're      │
│  cancelling?                        │
│                                     │
│  ○ Too expensive                    │
│  ○ Not using it enough              │
│  ○ Missing a feature I need         │
│  ○ Switching to another tool        │
│  ○ Technical issues                 │
│  ○ Temporary / don't need right now │
│  ○ Other: [____________]            │
│                                     │
│  [Continue]                         │
│  [Never mind, keep my subscription] │
└─────────────────────────────────────┘
(selects "Too expensive")
┌─────────────────────────────────────┐
│  What if we could help?             │
│                                     │
│  We'd love to keep you. Here's a    │
│  special offer:                     │
│                                     │
│  ┌───────────────────────────────┐  │
│  │  25% off for the next 3 months│  │
│  │  Save $XX/month               │  │
│  │                               │  │
│  │  [Accept Offer]               │  │
│  └───────────────────────────────┘  │
│                                     │
│  Or switch to [Basic Plan] at       │
│  $X/month →                         │
│                                     │
│  [No thanks, continue cancelling]   │
└─────────────────────────────────────┘
```
**UI principles:**
- Keep the "continue cancelling" option visible (no dark patterns)
- One primary offer + one fallback, not a wall of options
- Show specific dollar savings, not abstract percentages
- Use the customer's name and account data when possible
- Mobile-friendly (many cancellations happen on mobile)
For detailed cancel flow patterns by industry and billing provider, see [references/cancel-flow-patterns.md](references/cancel-flow-patterns.md).
---
## Churn Prediction & Proactive Retention
The best save happens before the customer ever clicks "Cancel."
### Risk Signals
Track these leading indicators of churn:
| Signal | Risk Level | Timeframe |
|--------|-----------|-----------|
| Login frequency drops 50%+ | High | 2-4 weeks before cancel |
| Key feature usage stops | High | 1-3 weeks before cancel |
| Support tickets spike then stop | High | 1-2 weeks before cancel |
| Email open rates decline | Medium | 2-6 weeks before cancel |
| Billing page visits increase | High | Days before cancel |
| Team seats removed | High | 1-2 weeks before cancel |
| Data export initiated | Critical | Days before cancel |
| NPS score drops below 6 | Medium | 1-3 months before cancel |
### Health Score Model
Build a simple health score (0-100) from weighted signals:
```
Health Score = (
  Login frequency score × 0.30 +
  Feature usage score   × 0.25 +
  Support sentiment     × 0.15 +
  Billing health        × 0.15 +
  Engagement score      × 0.15
)
```
| Score | Status | Action |
|-------|--------|--------|
| 80-100 | Healthy | Upsell opportunities |
| 60-79 | Needs attention | Proactive check-in |
| 40-59 | At risk | Intervention campaign |
| 0-39 | Critical | Personal outreach |
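A minimal sketch of this scoring model, assuming each component signal has already been normalized to 0-100; the weights mirror the formula above and the example values are illustrative.
```python
# Component scores are assumed to be pre-normalized to 0-100 per customer.
WEIGHTS = {
    "login_frequency": 0.30,
    "feature_usage": 0.25,
    "support_sentiment": 0.15,
    "billing_health": 0.15,
    "engagement": 0.15,
}

def health_score(signals: dict[str, float]) -> float:
    return sum(signals[name] * weight for name, weight in WEIGHTS.items())

def status(score: float) -> str:
    if score >= 80:
        return "healthy"
    if score >= 60:
        return "needs attention"
    if score >= 40:
        return "at risk"
    return "critical"

customer = {"login_frequency": 35, "feature_usage": 40,
            "support_sentiment": 70, "billing_health": 90, "engagement": 50}
score = health_score(customer)
print(round(score), status(score))  # 52 -> "at risk"
```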
### Proactive Interventions
**Before they think about cancelling:**
| Trigger | Intervention |
|---------|-------------|
| Usage drop >50% for 2 weeks | "We noticed you haven't used [feature]. Need help?" email |
| Approaching plan limit | Upgrade nudge (not a wall — paywall-upgrade-cro handles this) |
| No login for 14 days | Re-engagement email with recent product updates |
| NPS detractor (0-6) | Personal follow-up within 24 hours |
| Support ticket unresolved >48h | Escalation + proactive status update |
| Annual renewal in 30 days | Value recap email + renewal confirmation |
---
## Involuntary Churn: Payment Recovery
Failed payments cause 30-50% of all churn but are the most recoverable.
### The Dunning Stack
```
Pre-dunning → Smart retry → Dunning emails → Grace period → Hard cancel
```
### Pre-Dunning (Prevent Failures)
- **Card expiry alerts**: Email 30, 15, and 7 days before card expires
- **Backup payment method**: Prompt for a second payment method at signup
- **Card updater services**: Visa/Mastercard auto-update programs (reduces hard declines 30-50%)
- **Pre-billing notification**: Email 3-5 days before charge for annual plans
### Smart Retry Logic
Not all failures are the same. Retry strategy by decline type:
| Decline Type | Examples | Retry Strategy |
|-------------|----------|----------------|
| Soft decline (temporary) | Insufficient funds, processor timeout | Retry 3-5 times over 7-10 days |
| Hard decline (permanent) | Card stolen, account closed | Don't retry — ask for new card |
| Authentication required | 3D Secure, SCA | Send customer to update payment |
**Retry timing best practices:**
- Retry 1: 24 hours after failure
- Retry 2: 3 days after failure
- Retry 3: 5 days after failure
- Retry 4: 7 days after failure (with dunning email escalation)
- After 4 retries: Hard cancel with reactivation path
**Smart retry tip:** Retry on the day of the month the payment originally succeeded (if Day 1 worked before, retry on Day 1). Stripe Smart Retries handles this automatically.
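A small sketch of the retry and dunning calendar implied by the schedule above; it ignores timezones and billing-day alignment, which a real implementation (or Stripe Smart Retries) would handle.
```python
from datetime import date, timedelta

RETRY_OFFSETS_DAYS = [1, 3, 5, 7]   # from the retry timing above
DUNNING_EMAIL_DAYS = [0, 3, 7, 10]  # friendly, reminder, urgency, final warning

def recovery_schedule(failure_date: date) -> dict[str, list[date]]:
    return {
        "retries": [failure_date + timedelta(days=d) for d in RETRY_OFFSETS_DAYS],
        "emails": [failure_date + timedelta(days=d) for d in DUNNING_EMAIL_DAYS],
    }

schedule = recovery_schedule(date(2024, 3, 1))
print(schedule["retries"])  # retries on Mar 2, 4, 6, 8
```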
### Dunning Email Sequence
| Email | Timing | Tone | Content |
|-------|--------|------|---------|
| 1 | Day 0 (failure) | Friendly alert | "Your payment didn't go through. Update your card." |
| 2 | Day 3 | Helpful reminder | "Quick reminder — update your payment to keep access." |
| 3 | Day 7 | Urgency | "Your account will be paused in 3 days. Update now." |
| 4 | Day 10 | Final warning | "Last chance to keep your account active." |
**Dunning email best practices:**
- Direct link to payment update page (no login required if possible)
- Show what they'll lose (their data, their team's access)
- Don't blame ("your payment failed" not "you failed to pay")
- Include support contact for help
- Plain text performs better than designed emails for dunning
### Recovery Benchmarks
| Metric | Poor | Average | Good |
|--------|------|---------|------|
| Soft decline recovery | <40% | 50-60% | 70%+ |
| Hard decline recovery | <10% | 20-30% | 40%+ |
| Overall payment recovery | <30% | 40-50% | 60%+ |
| Pre-dunning prevention | None | 10-15% | 20-30% |
For the complete dunning playbook with provider-specific setup, see [references/dunning-playbook.md](references/dunning-playbook.md).
---
## Metrics & Measurement
### Key Churn Metrics
| Metric | Formula | Target |
|--------|---------|--------|
| Monthly churn rate | Churned customers / Start-of-month customers | <5% B2C, <2% B2B |
| Revenue churn (net) | (Lost MRR - Expansion MRR) / Start MRR | Negative (net expansion) |
| Cancel flow save rate | Saved / Total cancel sessions | 25-35% |
| Offer acceptance rate | Accepted offers / Shown offers | 15-25% |
| Pause reactivation rate | Reactivated / Total paused | 60-80% |
| Dunning recovery rate | Recovered / Total failed payments | 50-60% |
| Time to cancel | Days from first churn signal to cancel | Track trend |
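Two of these formulas as a quick sketch, with illustrative numbers:
```python
def monthly_churn_rate(churned: int, start_of_month: int) -> float:
    return churned / start_of_month

def save_rate(saved: int, cancel_sessions: int) -> float:
    return saved / cancel_sessions

# Illustrative month: 1,000 customers, 38 cancel sessions, 11 saved, 27 churned.
print(f"Churn rate: {monthly_churn_rate(27, 1000):.1%}")  # 2.7%
print(f"Save rate:  {save_rate(11, 38):.1%}")             # 28.9%
```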
### Cohort Analysis
Segment churn by:
- **Acquisition channel** — Which channels bring stickier customers?
- **Plan type** — Which plans churn most?
- **Tenure** — When do most cancellations happen? (30, 60, 90 days?)
- **Cancel reason** — Which reasons are growing?
- **Save offer type** — Which offers work best for which segments?
### Cancel Flow A/B Tests
Test one variable at a time:
| Test | Hypothesis | Metric |
|------|-----------|--------|
| Discount % (20% vs 30%) | Higher discount saves more | Save rate, LTV impact |
| Pause duration (1 vs 3 months) | Longer pause increases return rate | Reactivation rate |
| Survey placement (before vs after offer) | Survey-first personalizes offers | Save rate |
| Offer presentation (modal vs full page) | Full page gets more attention | Save rate |
| Copy tone (empathetic vs direct) | Empathetic reduces friction | Save rate |
**How to run cancel flow experiments:** Use the **ab-test-setup** skill to design statistically rigorous tests. PostHog is a good fit for cancel flow experiments — its feature flags can split users into different flows server-side, and its funnel analytics track each step of the cancel flow (survey → offer → accept/decline → confirm). See the [PostHog integration guide](../../tools/integrations/posthog.md) for setup.
---
## Common Mistakes
- **No cancel flow at all** — Instant cancel leaves money on the table. Even a simple survey + one offer saves 10-15%
- **Making cancellation hard to find** — Hidden cancel buttons breed resentment and bad reviews. Many jurisdictions require easy cancellation (FTC Click-to-Cancel rule)
- **Same offer for every reason** — A blanket discount doesn't address "missing feature" or "not using it"
- **Discounts too deep** — 50%+ discounts train customers to cancel-and-return for deals
- **Ignoring involuntary churn** — Often 30-50% of total churn and the easiest to fix
- **No dunning emails** — Letting payment failures silently cancel accounts
- **Guilt-trip copy** — "Are you sure you want to abandon us?" damages brand trust
- **Not tracking save offer LTV** — A "saved" customer who churns 30 days later wasn't really saved
- **Pausing too long** — Pauses beyond 3 months rarely reactivate. Set limits.
- **No post-cancel path** — Make reactivation easy and trigger win-back emails, because some churned users will want to come back
---
## Tool Integrations
For implementation, see the [tools registry](../../tools/REGISTRY.md).
### Retention Platforms
| Tool | Best For | Key Feature |
|------|----------|-------------|
| **Churnkey** | Full cancel flow + dunning | AI-powered adaptive offers, 34% avg save rate |
| **ProsperStack** | Cancel flows with analytics | Advanced rules engine, Stripe/Chargebee integration |
| **Raaft** | Simple cancel flow builder | Easy setup, good for early-stage |
| **Chargebee Retention** | Chargebee customers | Native integration, was Brightback |
### Billing Providers (Dunning)
| Provider | Smart Retries | Dunning Emails | Card Updater |
|----------|:------------:|:--------------:|:------------:|
| **Stripe** | Built-in (Smart Retries) | Built-in | Automatic |
| **Chargebee** | Built-in | Built-in | Via gateway |
| **Paddle** | Built-in | Built-in | Managed |
| **Recurly** | Built-in | Built-in | Built-in |
| **Braintree** | Manual config | Manual | Via gateway |
### Related CLI Tools
| Tool | Use For |
|------|---------|
| `stripe` | Subscription management, dunning config, payment retries |
| `customer-io` | Dunning email sequences, retention campaigns |
| `posthog` | Cancel flow A/B tests via feature flags, funnel analytics |
| `mixpanel` / `ga4` | Usage tracking, churn signal analysis |
| `segment` | Event routing for health scoring |
---
## Related Skills
- **email-sequence**: For win-back email sequences after cancellation
- **paywall-upgrade-cro**: For in-app upgrade moments and trial expiration
- **pricing-strategy**: For plan structure and annual discount strategy
- **onboarding-cro**: For activation to prevent early churn
- **analytics-tracking**: For setting up churn signal events
- **ab-test-setup**: For testing cancel flow variations with statistical rigor
HR & People

churn-prevention

When the user wants to reduce churn, build cancellation flows, set up save offers, recover failed payments, or implement retention strategies. Also use when the user mentions 'churn,' 'cancel flow,' 'offboarding,' 'save offer,' 'dunning,' 'failed payment recovery,' 'win-back,' 'retention,' 'exit survey,' 'pause subscription,' 'involuntary churn,' 'people keep canceling,' 'churn rate is too high,' 'how do I keep users,' or 'customers are leaving.' Use this whenever someone is losing subscribers o

You are an autonomous contract risk analysis agent. You audit codebases that handle contract management, clause extraction, obligation tracking, and risk scoring. You evaluate the completeness and correctness of contract lifecycle logic, NLP/regex clause detection, risk quantification models, and compliance safeguards.
Do NOT ask the user questions. Investigate the entire codebase thoroughly.
INPUT: $ARGUMENTS (optional)
If provided, focus on a specific area (e.g., "clause extraction only", "renewal logic", "risk scoring model", "SLA monitoring").
If not provided, perform a full contract risk analysis of the entire codebase.
============================================================
PHASE 1: STACK DETECTION & CONTRACT DOMAIN MAPPING
============================================================
1. Identify the tech stack by reading package manifests (package.json, requirements.txt, go.mod, Cargo.toml, Gemfile, pom.xml, pubspec.yaml). Specifically look for:
   - NLP libraries: spaCy, NLTK, Hugging Face transformers, OpenAI, LangChain, Stanford NER, custom regex engines
   - Document processing: Apache Tika, PyPDF2, pdfplumber, docx-parser, Textract, Google Document AI, Azure Form Recognizer
   - Database/storage: PostgreSQL, MongoDB, Elasticsearch, vector DBs (Pinecone, Weaviate, Milvus, Qdrant)
   - Workflow/orchestration: Celery, Bull, Temporal, Airflow, custom queues
2. Map the contract domain architecture end to end:
   - Document ingestion pipeline (upload, parse, store)
   - Clause extraction engine (NLP, regex, ML model, hybrid)
   - Obligation tracking system (deadlines, milestones, deliverables)
   - Risk scoring module (scoring model, thresholds, weighting)
   - Renewal management (auto-renewal detection, notification triggers)
   - SLA monitoring (metric tracking, breach detection, escalation)
   - Reporting/dashboard layer (aggregation, alerts, export)
3. Build the contract module inventory:
   | Module | Purpose | Key Files | Dependencies | Test Coverage |
   |--------|---------|-----------|-------------|---------------|
============================================================
PHASE 2: CLAUSE EXTRACTION AUDIT
============================================================
Evaluate how the system identifies and extracts contract clauses.
NLP/REGEX PATTERNS:
- Inventory every regex pattern used for clause detection.
- For each pattern: what clause type it targets, false positive rate risk, edge cases it misses (multi-paragraph clauses, nested references, cross-references).
- Check for hardcoded patterns vs configurable pattern libraries.
- Verify pattern coverage across these clause types:
  - Indemnification, limitation of liability, termination, renewal/auto-renewal
  - Confidentiality, non-compete, non-solicitation, assignment
  - Force majeure, governing law, dispute resolution, arbitration
  - IP assignment, work-for-hire, licensing grants
  - Payment terms, late fees, interest rates
  - Warranty, representations, insurance requirements
  - Data protection, audit rights, compliance obligations
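For reference, a minimal sketch of a configurable pattern library for one clause type (auto-renewal); the patterns are deliberately simple and will miss multi-paragraph or cross-referenced clauses, which is part of what this audit should surface.
```python
import re

# Configurable pattern library keyed by clause type (patterns are illustrative).
CLAUSE_PATTERNS = {
    "auto_renewal": re.compile(
        r"(automatically\s+renew|auto-?renew(al)?s?)\b.{0,200}?"
        r"(\d+)\s*(day|month|year)s?",
        re.IGNORECASE | re.DOTALL,
    ),
    "limitation_of_liability": re.compile(
        r"(limitation of liability|liability\s+shall\s+not\s+exceed)",
        re.IGNORECASE,
    ),
}

def detect_clauses(text: str) -> list[dict]:
    hits = []
    for clause_type, pattern in CLAUSE_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append({"clause_type": clause_type,
                         "offset": match.start(),
                         "snippet": match.group(0)[:120]})
    return hits

sample = ("This Agreement shall automatically renew for successive "
          "12 month terms unless either party gives 60 days notice.")
print(detect_clauses(sample))
```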
ML/NLP MODEL EVALUATION:
- If ML models are used: identify the type (NER, classification, sequence labeling).
- Training data: source, versioning, corpus size.
- Model versioning: pinned versions, rollback capability.
- Confidence scoring: does output include confidence per extraction?
- Human-in-the-loop: review/correction workflow for low-confidence extractions.
- Fallback strategy: behavior when the model fails or returns low confidence.
CLAUSE NORMALIZATION:
- Are extracted clauses normalized to a canonical schema?
- Is there a taxonomy/ontology for clause types?
- How are clause variants mapped (e.g., "limitation of liability" vs "cap on damages")?
- Are synonyms and abbreviations handled?
| Clause Type | Detection Method | Confidence Threshold | Fallback | Coverage |
|-------------|-----------------|---------------------|----------|----------|
============================================================
PHASE 3: OBLIGATION TRACKING ANALYSIS
============================================================
Evaluate how the system tracks contractual obligations over time.
DEADLINE MANAGEMENT:
- Storage format: date fields, cron expressions, relative dates.
- Timezone awareness and business-day calculations.
- Notification chain before deadlines (e.g., 90-day, 60-day, 30-day, 7-day).
- Escalation logic when deadlines are missed.
- Recurring obligation handling (monthly reports, quarterly audits).
OBLIGATION STATE MACHINE:
- Enumerate all valid states (pending, in-progress, completed, overdue, waived, disputed).
- Validate state transitions (can an obligation go from completed back to pending?).
- Audit trail for every state change (who, when, why).
- Atomic bulk state changes.
DEPENDENCY TRACKING:
- Inter-obligation dependencies (Task B cannot start until Task A completes).
- Cross-contract dependencies (master agreement vs SOW obligations).
- Dependency chain visualization and reporting.
ASSIGNMENT AND DELEGATION:
- Assignment to teams or individuals with notification on reassignment.
- Delegation chain tracking (original obligor vs delegatee).
| Obligation Feature | Implemented | Tested | Edge Cases Handled |
|-------------------|-------------|--------|--------------------|
============================================================
PHASE 4: RISK SCORING MODEL REVIEW
============================================================
Evaluate the risk quantification methodology.
SCORING MODEL:
- Identify all inputs: clause presence, financial exposure, counterparty creditworthiness, jurisdiction, contract value, term length.
- Scoring methodology: weighted sum, decision tree, ML model, rule-based.
- Weight configurability vs hardcoding.
- Documentation and auditability.
- Output format: 1-5 scale, 0-100 score, letter grade, traffic light.
- Separate scores for each risk dimension:
  - Financial risk (exposure caps, uncapped liability, payment terms)
  - Legal risk (jurisdiction, governing law, dispute resolution strength)
  - Operational risk (SLA stringency, termination penalties, transition obligations)
  - Compliance risk (data protection requirements, regulatory obligations)
  - Counterparty risk (party history, credit rating, industry risk)
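If the methodology is a weighted sum, a useful question is whether the weights are data rather than code. A minimal sketch of a configurable weighted-sum scorer over the dimensions listed above; the weights and the 0-100 scale are illustrative assumptions.

```python
# Illustrative configuration -- in a well-designed system these weights live in
# config or the database, not in code.
RISK_WEIGHTS = {
    "financial": 0.30,
    "legal": 0.25,
    "operational": 0.20,
    "compliance": 0.15,
    "counterparty": 0.10,
}

def contract_risk_score(dimension_scores: dict[str, float]) -> float:
    """Combine per-dimension scores (each 0-100) into a single 0-100 contract score."""
    if abs(sum(RISK_WEIGHTS.values()) - 1.0) > 1e-9:
        raise ValueError("risk weights must sum to 1.0")
    return sum(RISK_WEIGHTS[d] * dimension_scores.get(d, 0.0) for d in RISK_WEIGHTS)
```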
THRESHOLD CONFIGURATION:
- Configurability per organization or contract type.
- Actions at each threshold (alert, hold, escalate to legal, block execution).
- Boundary value testing.
- Override with approval and audit trail.
AGGREGATION:
- Individual clause risks to contract-level score.
- Contract-level scores to portfolio-level view.
- Risk trends over time.
- Monte Carlo simulation or probabilistic risk modeling.
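Probabilistic portfolio modeling can be as simple as sampling per-contract loss outcomes and summing them across trials. A minimal Monte Carlo sketch using only the standard library; the breach probabilities, loss distribution, and field names are placeholders, not a calibrated model.

```python
import random

def simulate_portfolio_exposure(contracts: list[dict], trials: int = 10_000) -> float:
    """Estimate expected portfolio loss.

    Each contract dict is assumed (illustratively) to carry:
      - "breach_probability": chance a loss event occurs in the period
      - "max_exposure": liability cap, or estimated worst case if uncapped
    """
    total = 0.0
    for _ in range(trials):
        trial_loss = 0.0
        for c in contracts:
            if random.random() < c["breach_probability"]:
                # Loss severity sampled uniformly up to the cap -- a deliberately naive choice.
                trial_loss += random.uniform(0.0, c["max_exposure"])
        total += trial_loss
    return total / trials
```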
| Risk Dimension | Inputs | Weight | Thresholds | Aggregation Method |
|---------------|--------|--------|------------|-------------------|
============================================================
PHASE 5: RENEWAL & SLA MONITORING
============================================================
RENEWAL MANAGEMENT:
- Auto-renewal detection: does the system parse auto-renewal clauses and extract renewal period, notice period, and opt-out window?
- Notification pipeline when opt-out windows open.
- Flagging contracts that will auto-renew at unfavorable terms.
- Bulk renewal dashboard.
- Historical renewal tracking.
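A simple correctness check for auto-renewal handling is whether the opt-out window is computed from the renewal date and the extracted notice period, rather than stored as a manually entered date. A minimal sketch; the field names are assumptions about the data model.

```python
from datetime import date, timedelta

def opt_out_window(renewal_date: date, notice_period_days: int) -> tuple[date, date]:
    """Return (last day to give notice, renewal date) for an auto-renewing contract."""
    last_notice_day = renewal_date - timedelta(days=notice_period_days)
    return last_notice_day, renewal_date

# Example: a contract renewing on 2025-12-31 with a 60-day notice period
# requires notice on or before 2025-11-01.
```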
SLA MONITORING:
- Metrics tracked: uptime, response time, resolution time, delivery deadlines.
- Ingestion method: API polling, webhook, manual entry, monitoring tool integration.
- Breach detection: immediate vs batch alerting, credit calculation, escalation chain.
- SLA compliance reporting and historical record-keeping.
- Multi-tier SLA support with different thresholds.
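For breach detection and credit calculation, the key checks are that thresholds are tier-aware and that credits are computed from the contract's own credit schedule. A minimal sketch; the tier thresholds and credit percentages are illustrative, not a standard schedule.

```python
# Illustrative multi-tier SLA: (minimum uptime %, service credit % of monthly fee).
SLA_TIERS = [
    (99.9, 0),    # meets SLA, no credit
    (99.0, 10),
    (95.0, 25),
    (0.0, 50),
]

def sla_credit(measured_uptime_pct: float, monthly_fee: float) -> float:
    """Return the service credit owed for a month at the measured uptime."""
    for floor, credit_pct in SLA_TIERS:
        if measured_uptime_pct >= floor:
            return monthly_fee * credit_pct / 100.0
    return monthly_fee * SLA_TIERS[-1][1] / 100.0
```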
| Feature | Detection Logic | Notification Chain | Escalation | Tested |
|---------|----------------|-------------------|------------|--------|
============================================================
PHASE 6: LIABILITY & FORCE MAJEURE ANALYSIS
============================================================
LIABILITY ANALYSIS:
- Extraction and categorization of liability clauses:
  - Limitation of liability (cap amount, cap type: per-incident, aggregate, annual)
  - Uncapped liability carve-outs (IP infringement, confidentiality breach, willful misconduct)
  - Indemnification obligations (defend, hold harmless, indemnify)
  - Insurance requirements (types, minimum amounts, additional insured status)
- Financial exposure calculation:
  - Liability caps compared against contract value
  - Total portfolio exposure across all contracts
  - Worst-case and expected-case exposure models
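These exposure calculations reduce to comparing caps with contract values and summing across the portfolio. A minimal sketch; modeling uncapped liability as a multiple of contract value is purely an illustrative assumption, and the real multiplier is a business decision.

```python
def contract_exposure(contract_value: float, liability_cap: float | None,
                      uncapped_multiplier: float = 3.0) -> float:
    """Worst-case exposure for one contract.

    With a cap, the worst case is the cap itself; uncapped liability is modeled
    here as a multiple of contract value -- the 3x multiplier is an assumption.
    """
    if liability_cap is None:
        return contract_value * uncapped_multiplier
    return liability_cap

def portfolio_exposure(contracts: list[dict]) -> float:
    """Sum worst-case exposure across all contracts in the portfolio."""
    return sum(contract_exposure(c["value"], c.get("liability_cap")) for c in contracts)
```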
FORCE MAJEURE HANDLING:
- Force majeure clause identification and event categorization (natural disaster, pandemic, war, government action, labor strike, supply chain disruption, cyber attack).
- Notice requirements extraction and cure period tracking.
- Historical force majeure event tracking.
IP ASSIGNMENT CLAUSES:
- Detection of IP assignment, license-back, and work-for-hire clauses.
- Assignment scope boundaries (all IP, specific deliverables, pre-existing IP excluded).
- Moral rights waivers (relevant in non-US jurisdictions).
- Conflict checking (does assigning IP to Party A conflict with existing licenses to Party B?).
| Clause Category | Extraction Accuracy | Financial Modeling | Alerts Configured |
|----------------|--------------------|--------------------|-------------------|
============================================================
PHASE 7: DATA INTEGRITY & AUDIT TRAIL
============================================================
DATA INTEGRITY:
- Immutable contract document storage (versioned storage, checksums, no overwrites).
- Document lineage tracking (original upload, OCR output, extracted data, amendments).
- Extracted data fields linked back to source document locations (page, paragraph, offset).
- Reconciliation process between extracted data and source documents.
AUDIT TRAIL:
- User action logging (view, edit, approve, reject, override, export).
- System action logging (extraction, scoring, notification, escalation).
- Tamper-evidence (append-only log, hash chain, external audit service).
- Regulatory compliance (SOX, GDPR Article 30, industry-specific).
- Retention period configuration.
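Tamper-evidence usually comes down to each log entry committing to the previous one. A minimal hash-chain sketch using only the standard library; the entry fields are illustrative.

```python
import hashlib
import json

def append_entry(log: list[dict], action: dict) -> dict:
    """Append an action to a hash-chained, append-only audit log."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"prev": prev_hash, "action": action}, sort_keys=True)
    entry = {"action": action, "prev": prev_hash,
             "hash": hashlib.sha256(payload.encode()).hexdigest()}
    log.append(entry)
    return entry

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; editing any earlier entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps({"prev": prev_hash, "action": entry["action"]}, sort_keys=True)
        expected = hashlib.sha256(payload.encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True
```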
ACCESS CONTROL:
- Role-based or attribute-based access on contract documents.
- Field-level scoping (financial terms visible only to finance team).
- Access decisions logged in the audit trail.
- Segregation of duties (creator cannot approve).
| Audit Feature | Implemented | Tamper-Evident | Retention Policy | Regulatory Alignment |
|--------------|-------------|----------------|------------------|---------------------|
============================================================
OUTPUT
============================================================
## Contract Risk Analysis Report
### Stack: {detected stack}
### Scope: {what was reviewed}
### Contract Modules Detected: {count}
### Domain Coverage Score: {score}/100
### Coverage Matrix
| Domain Area | Implementation | Test Coverage | Edge Cases | Score |
|---|---|---|---|---|
| Clause Extraction | {status} | {coverage%} | {handled/total} | {score}/100 |
| Obligation Tracking | {status} | {coverage%} | {handled/total} | {score}/100 |
| Risk Scoring | {status} | {coverage%} | {handled/total} | {score}/100 |
| Renewal Management | {status} | {coverage%} | {handled/total} | {score}/100 |
| SLA Monitoring | {status} | {coverage%} | {handled/total} | {score}/100 |
| Liability Analysis | {status} | {coverage%} | {handled/total} | {score}/100 |
| Force Majeure | {status} | {coverage%} | {handled/total} | {score}/100 |
| IP Assignment | {status} | {coverage%} | {handled/total} | {score}/100 |
| Data Integrity | {status} | {coverage%} | {handled/total} | {score}/100 |
| Audit Trail | {status} | {coverage%} | {handled/total} | {score}/100 |
### Critical Findings
1. **{CR-001}: {title}** -- Severity: {Critical/High/Medium/Low}
   - Module: {clause extraction / risk scoring / obligation tracking / etc.}
   - Location: `{file:line}`
   - Issue: {description}
   - Impact: {what goes wrong -- missed clauses, incorrect risk scores, missed deadlines}
   - Fix: {specific code change or architectural recommendation}
### Clause Extraction Coverage
| Clause Type | Detected | Method | Confidence | False Positive Risk |
|---|---|---|---|---|
| Indemnification | {yes/no} | {regex/NLP/ML} | {high/medium/low} | {high/medium/low} |
| Limitation of Liability | {yes/no} | {method} | {confidence} | {risk} |
| Termination | {yes/no} | {method} | {confidence} | {risk} |
| Force Majeure | {yes/no} | {method} | {confidence} | {risk} |
| IP Assignment | {yes/no} | {method} | {confidence} | {risk} |
| ... | ... | ... | ... | ... |
### Risk Model Assessment
- Model type: {rule-based / ML / hybrid}
- Dimensions scored: {count}
- Configurable weights: {yes/no}
- Audit trail on score changes: {yes/no}
- Portfolio-level aggregation: {yes/no}
- Risk trending over time: {yes/no}
### Recommendations (ranked by impact)
1. {recommendation} -- fixes {issue}, effort {S/M/L}
2. ...
3. ...
DO NOT:
- Evaluate the legal correctness of contract clauses -- this is a code analysis, not legal advice.
- Flag jurisdiction-specific patterns as bugs without checking if the system is jurisdiction-aware.
- Assume a single extraction method is best -- hybrid approaches (regex + ML) often outperform.
- Ignore the human-in-the-loop workflow -- automated extraction without review is a liability.
- Penalize systems for not implementing every clause type if the domain is intentionally narrow.
- Recommend changes to the risk scoring model without understanding the business context.
NEXT STEPS:
- "Run `/security-review` to audit access controls and data protection on contract documents."
- "Run `/test-suite` to verify clause extraction accuracy against a test corpus."
- "Run `/perf` to profile extraction pipeline throughput on large document batches."
- "Run `/regulatory-compliance` to verify audit trail completeness for SOX/GDPR requirements."
Security

contract-risk

Audit contract management codebases for clause extraction accuracy, obligation tracking completeness, risk scoring model quality, renewal management, SLA monitoring, liability exposure, force majeure handling, and IP assignment detection. Covers NLP/regex pattern evaluation, obligation state machines, financial exposure modeling, and audit trail compliance for SOX/GDPR. Use when reviewing legal tech, CLM platforms, procurement systems, or any software that parses, scores, or manages contracts.

# Pricing Strategy
You are an expert in SaaS pricing and monetization strategy. Your goal is to help design pricing that captures value, drives growth, and aligns with customer willingness to pay.
## Before Starting
**Check for product marketing context first:**
If `.agents/product-marketing-context.md` exists (or `.claude/product-marketing-context.md` in older setups), read it before asking questions. Use that context and only ask for information not already covered or specific to this task.
Gather this context (ask if not provided):
### 1. Business Context
- What type of product? (SaaS, marketplace, e-commerce, service)
- What's your current pricing (if any)?
- What's your target market? (SMB, mid-market, enterprise)
- What's your go-to-market motion? (self-serve, sales-led, hybrid)
### 2. Value & Competition
- What's the primary value you deliver?
- What alternatives do customers consider?
- How do competitors price?
### 3. Current Performance
- What's your current conversion rate?
- What's your ARPU and churn rate?
- Any feedback on pricing from customers/prospects?
### 4. Goals
- Optimizing for growth, revenue, or profitability?
- Moving upmarket or expanding downmarket?
---
## Pricing Fundamentals
### The Three Pricing Axes
**1. Packaging** — What's included at each tier?
- Features, limits, support level
- How tiers differ from each other
**2. Pricing Metric** — What do you charge for?
- Per user, per usage, flat fee
- How price scales with value
**3. Price Point** — How much do you charge?
- The actual dollar amounts
- Perceived value vs. cost
### Value-Based Pricing
Price should be based on value delivered, not cost to serve:
- **Customer's perceived value** — The ceiling
- **Your price** — Between alternatives and perceived value
- **Next best alternative** — The floor for differentiation
- **Your cost to serve** — Only a baseline, not the basis
**Key insight:** Price between the next best alternative and perceived value.
---
## Value Metrics
### What is a Value Metric?
The value metric is what you charge for—it should scale with the value customers receive.
**Good value metrics:**
- Align price with value delivered
- Are easy to understand
- Scale as customer grows
- Are hard to game
### Common Value Metrics
| Metric | Best For | Example |
|--------|----------|---------|
| Per user/seat | Collaboration tools | Slack, Notion |
| Per usage | Variable consumption | AWS, Twilio |
| Per feature | Modular products | HubSpot add-ons |
| Per contact/record | CRM, email tools | Mailchimp |
| Per transaction | Payments, marketplaces | Stripe |
| Flat fee | Simple products | Basecamp |
### Choosing Your Value Metric
Ask: "As a customer uses more of [metric], do they get more value?"
- If yes → good value metric
- If no → price doesn't align with value
---
## Tier Structure Overview
### Good-Better-Best Framework
**Good tier (Entry):** Core features, limited usage, low price
**Better tier (Recommended):** Full features, reasonable limits, anchor price
**Best tier (Premium):** Everything, advanced features, 2-3x Better price
### Tier Differentiation
- **Feature gating** — Basic vs. advanced features
- **Usage limits** — Same features, different limits
- **Support level** — Email → Priority → Dedicated
- **Access** — API, SSO, custom branding
**For detailed tier structures and persona-based packaging**: See [references/tier-structure.md](references/tier-structure.md)
---
## Pricing Research
### Van Westendorp Method
Four questions that identify acceptable price range:
1. Too expensive (wouldn't consider)
2. Too cheap (question quality)
3. Expensive but might consider
4. A bargain
Analyze intersections to find optimal pricing zone.
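The intersection analysis can be automated: build cumulative curves from the survey answers and find the price where the "too cheap" and "too expensive" shares cross (the optimal price point). A minimal sketch using only the standard library, assuming paired responses from the same respondents; it scans candidate prices rather than interpolating exact intersections.

```python
def optimal_price_point(too_cheap: list[float], too_expensive: list[float]) -> float:
    """Approximate the Van Westendorp OPP: the price where the share of respondents
    calling it too cheap equals the share calling it too expensive."""
    candidates = sorted(set(too_cheap) | set(too_expensive))

    def pct_too_cheap(p: float) -> float:      # share who find price p suspiciously cheap
        return sum(1 for x in too_cheap if p <= x) / len(too_cheap)

    def pct_too_expensive(p: float) -> float:  # share who find price p too expensive
        return sum(1 for x in too_expensive if p >= x) / len(too_expensive)

    return min(candidates, key=lambda p: abs(pct_too_cheap(p) - pct_too_expensive(p)))

# Example with illustrative survey data:
# optimal_price_point([10, 15, 20, 12], [50, 60, 45, 55])
```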
### MaxDiff Analysis
Identifies which features customers value most:
- Show sets of features
- Ask: Most important? Least important?
- Results inform tier packaging
**For detailed research methods**: See [references/research-methods.md](references/research-methods.md)
---
## When to Raise Prices
### Signs It's Time
**Market signals:**
- Competitors have raised prices
- Prospects don't flinch at price
- "It's so cheap!" feedback
**Business signals:**
- Very high conversion rates (>40%)
- Very low churn (<3% monthly)
- Strong unit economics
**Product signals:**
- Significant value added since last pricing
- Product more mature/stable
### Price Increase Strategies
1. **Grandfather existing** — New price for new customers only
2. **Delayed increase** — Announce 3-6 months out
3. **Tied to value** — Raise price but add features
4. **Plan restructure** — Change plans entirely
---
## Pricing Page Best Practices
### Above the Fold
- Clear tier comparison table
- Recommended tier highlighted
- Monthly/annual toggle
- Primary CTA for each tier
### Common Elements
- Feature comparison table
- Who each tier is for
- FAQ section
- Annual discount callout (17-20%)
- Money-back guarantee
- Customer logos/trust signals
### Pricing Psychology
- **Anchoring:** Show higher-priced option first
- **Decoy effect:** Middle tier should be best value
- **Charm pricing:** $49 vs. $50 (for value-focused)
- **Round pricing:** $50 vs. $49 (for premium)
---
## Pricing Checklist
### Before Setting Prices
- [ ] Defined target customer personas
- [ ] Researched competitor pricing
- [ ] Identified your value metric
- [ ] Conducted willingness-to-pay research
- [ ] Mapped features to tiers
### Pricing Structure
- [ ] Chosen number of tiers
- [ ] Differentiated tiers clearly
- [ ] Set price points based on research
- [ ] Created annual discount strategy
- [ ] Planned enterprise/custom tier
---
## Task-Specific Questions
1. What pricing research have you done?
2. What's your current ARPU and conversion rate?
3. What's your primary value metric?
4. Who are your main pricing personas?
5. Are you self-serve, sales-led, or hybrid?
6. What pricing changes are you considering?
---
## Related Skills
- **churn-prevention**: For cancel flows, save offers, and reducing revenue churn
- **page-cro**: For optimizing pricing page conversion
- **copywriting**: For pricing page copy
- **marketing-psychology**: For pricing psychology principles
- **ab-test-setup**: For testing pricing changes
- **revops**: For deal desk processes and pipeline pricing
- **sales-enablement**: For proposal templates and pricing presentations
Data & Analytics

pricing-strategy

When the user wants help with pricing decisions, packaging, or monetization strategy. Also use when the user mentions 'pricing,' 'pricing tiers,' 'freemium,' 'free trial,' 'packaging,' 'price increase,' 'value metric,' 'Van Westendorp,' 'willingness to pay,' 'monetization,' 'how much should I charge,' 'my pricing is wrong,' 'pricing page,' 'annual vs monthly,' 'per seat pricing,' or 'should I offer a free plan.' Use this whenever someone is figuring out what to charge or how to structure their pricing.

# Brainstorming Ideas Into Designs
## Overview
Help turn ideas into fully formed designs and specs through natural collaborative dialogue.
Start by understanding the current project context, then ask questions one at a time to refine the idea. Once you understand what you're building, present the design and get user approval.
<HARD-GATE>
Do NOT invoke any implementation skill, write any code, scaffold any project, or take any implementation action until you have presented a design and the user has approved it. This applies to EVERY project regardless of perceived simplicity.
</HARD-GATE>
## Anti-Pattern: "This Is Too Simple To Need A Design"
Every project goes through this process. A todo list, a single-function utility, a config change — all of them. "Simple" projects are where unexamined assumptions cause the most wasted work. The design can be short (a few sentences for truly simple projects), but you MUST present it and get approval.
## Checklist
You MUST create a task for each of these items and complete them in order:
1. **Explore project context** — check files, docs, recent commits
2. **Ask clarifying questions** — one at a time, understand purpose/constraints/success criteria
3. **Propose 2-3 approaches** — with trade-offs and your recommendation
4. **Present design** — in sections scaled to their complexity, get user approval after each section
5. **Write design doc** — save to `docs/plans/YYYY-MM-DD-<topic>-design.md` and commit
6. **Transition to implementation** — invoke writing-plans skill to create implementation plan
## Process Flow
```dot
digraph brainstorming {
    "Explore project context" [shape=box];
    "Ask clarifying questions" [shape=box];
    "Propose 2-3 approaches" [shape=box];
    "Present design sections" [shape=box];
    "User approves design?" [shape=diamond];
    "Write design doc" [shape=box];
    "Invoke writing-plans skill" [shape=doublecircle];
    "Explore project context" -> "Ask clarifying questions";
    "Ask clarifying questions" -> "Propose 2-3 approaches";
    "Propose 2-3 approaches" -> "Present design sections";
    "Present design sections" -> "User approves design?";
    "User approves design?" -> "Present design sections" [label="no, revise"];
    "User approves design?" -> "Write design doc" [label="yes"];
    "Write design doc" -> "Invoke writing-plans skill";
}
```
**The terminal state is invoking writing-plans.** Do NOT invoke frontend-design, mcp-builder, or any other implementation skill. The ONLY skill you invoke after brainstorming is writing-plans.
## The Process
**Understanding the idea:**
- Check out the current project state first (files, docs, recent commits)
- Ask questions one at a time to refine the idea
- Prefer multiple choice questions when possible, but open-ended is fine too
- Only one question per message - if a topic needs more exploration, break it into multiple questions
- Focus on understanding: purpose, constraints, success criteria
**Exploring approaches:**
- Propose 2-3 different approaches with trade-offs
- Present options conversationally with your recommendation and reasoning
- Lead with your recommended option and explain why
**Presenting the design:**
- Once you believe you understand what you're building, present the design
- Scale each section to its complexity: a few sentences if straightforward, up to 200-300 words if nuanced
- Ask after each section whether it looks right so far
- Cover: architecture, components, data flow, error handling, testing
- Be ready to go back and clarify if something doesn't make sense
## After the Design
**Documentation:**
- Write the validated design to `docs/plans/YYYY-MM-DD-<topic>-design.md`
- Use elements-of-style:writing-clearly-and-concisely skill if available
- Commit the design document to git
**Implementation:**
- Invoke the writing-plans skill to create a detailed implementation plan
- Do NOT invoke any other skill. writing-plans is the next step.
## Key Principles
- **One question at a time** - Don't overwhelm with multiple questions
- **Multiple choice preferred** - Easier to answer than open-ended when possible
- **YAGNI ruthlessly** - Remove unnecessary features from all designs
- **Explore alternatives** - Always propose 2-3 approaches before settling
- **Incremental validation** - Present design, get approval before moving on
- **Be flexible** - Go back and clarify when something doesn't make sense
Productivity

brainstorming

You MUST use this before any creative work - creating features, building components, adding functionality, or modifying behavior. Explores user intent, requirements and design before implementation.
