Design — Read-Only SaaS Rationalization Dashboard
Design a read-only SaaS rationalization dashboard in Claude with Torii MCP
Purpose
This guide shows how to turn a working Torii MCP connection — on any Claude surface — into a focused, read-only dashboard experience for software usage, license consumption, spend, and renewal analysis. The design model is the same regardless of whether you are using Claude.ai, Claude Desktop, Cowork, Claude Code, or the API. Surface-specific implementation details are called out where they differ.
What this dashboard is for
This dashboard pattern is designed to help teams answer questions like:
- Which apps are heavily used by a specific department or business unit?
- Where is license concentration highest?
- Where is spend concentrated?
- Which organizational areas have overlapping applications?
- Which contracts or renewals deserve review?
- Which apps belong in a human review queue for rationalization?
This is not a general-purpose "do anything in Torii" assistant. It is a read-only dashboard assistant.
Why Torii MCP fits this use case
Torii's MCP server lets Claude securely query Torii data including apps, users, contracts, workflows, and more. Claude can search and read apps, users, and contracts, list workflow and audit-style data, and call tools such as list_apps, list_contracts, and match_apps.
This makes Torii MCP a strong fit for dashboards because it gives Claude direct access to the data needed for natural-language drilldowns and live visual artifacts.
Dashboard output formats by surface
The dashboard can be delivered in different formats depending on your deployment surface:
| Surface | Dashboard output format |
|---|---|
| Claude.ai web chat | Conversational text — summaries, tables, findings in chat |
| Claude.ai Projects | Conversational text with persistent context |
| Claude Desktop | Conversational text, optionally saved to local files |
| Cowork | Conversational text or live interactive sidebar artifact |
| Claude Code | Saved files — markdown reports, HTML dashboards, JSON snapshots |
| Claude API / Agent SDK | Any format — pushed to email, Slack, databases, or BI tools |
The Cowork live artifact
When deploying on Cowork, the dashboard can be rendered as a live interactive artifact that uses window.cowork.callMcpTool to fetch live Torii data at render time. This means:
- The artifact always reflects current Torii data when opened — no manual refresh step.
- It persists in the Cowork sidebar across sessions.
- Spend, license, and renewal data reflect the tenant's live state.
This capability is Cowork-specific. On other surfaces, the dashboard is delivered as conversational output or a saved file. The design model described in this article applies to all surfaces — only the rendering layer differs.
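As a rough sketch of what that render-time fetch can look like inside the artifact, assuming a React component and assuming window.cowork.callMcpTool accepts a server UUID, a tool name, and an arguments object (the exact signature may differ in your Cowork version):

```js
import { useEffect, useState } from "react";

// Placeholder: replace with the tenant's Torii MCP server UUID.
const TORII_MCP_SERVER_UUID = "00000000-0000-0000-0000-000000000000";

function useToriiApps() {
  const [apps, setApps] = useState(null);

  useEffect(() => {
    // Fetch live Torii data each time the artifact is opened/rendered,
    // so the dashboard always reflects the tenant's current state.
    window.cowork
      .callMcpTool(TORII_MCP_SERVER_UUID, "list_apps", {})
      .then(setApps)
      .catch(() => setApps([])); // degrade gracefully if the call fails
  }, []);

  return apps;
}
```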
Read-only operating model
Torii's MCP is explicit that connected assistants can take any action granted by the user, including updating records and running workflows. For this dashboard design, those write capabilities are intentionally out of scope.
Allowed behavior
Use the dashboard to:
- Read apps, users, contracts, and related metadata
- Summarize software usage and coverage patterns
- Group and compare apps across organizational slices
- Identify overlap and rationalization candidates
- Summarize contract and renewal exposure
- Generate decision queues for human review
- Explain why an app or grouping was flagged
- Render live interactive artifacts in the Cowork sidebar
Not allowed
Do not use the dashboard to:
- Update users
- Create contracts
- Modify application ownership
- Trigger workflows
- Remove or assign licenses
- Write back to Torii records in any form
The normalized dashboard model
Use the same vocabulary consistently across prompts, outputs, artifacts, and packaging.
Functional Domain
A broad software category.
Examples: Collaboration, Project Management, CRM, Security, Design
Functional Capability
The more specific business capability an app supports.
Examples: Team chat, Video conferencing, Contract lifecycle, Password management, Ticketing
Functional Capability Path
A hierarchy that combines domain and capability.
Examples:
- Collaboration > Team Chat
- Security > Password Management
- IT Operations > Ticketing
This helps users compare apps that may have similar business functions even if their names or vendors differ.
Organizational View
The business dimension used for dashboard drilldowns.
Examples: Department, Business Unit, Legal Entity, Region, Cost Center, or a customer-defined grouping field
Never hardcode Department as the only supported slice. Treat Organizational View as a configurable concept that stays portable across tenants with different operating models.
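One lightweight way to keep this vocabulary consistent across prompts, outputs, and artifacts is to carry each app as a small normalized record. The field names below are the dashboard's internal vocabulary, not Torii API fields:

```js
// Illustrative normalized record for one app. None of these field names
// come from the Torii API; they encode the dashboard's shared vocabulary.
const exampleAppRecord = {
  appName: "Example Chat Tool",
  functionalDomain: "Collaboration",
  functionalCapability: "Team chat",
  functionalCapabilityPath: "Collaboration > Team Chat",
  // Organizational View is configurable, never hardcoded to Department.
  organizationalView: { dimension: "Business Unit", value: "EMEA Sales" },
};
```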
Dashboard sections
A strong read-only dashboard covers six core sections.
1. Portfolio overview
Business question: What does the software portfolio look like at a glance?
Typical outputs: total apps under review, key functional domains, top concentration areas, high-level overlap signals, top review candidates.
2. Usage by Organizational View
Business question: How is software usage distributed across the selected business slice?
Typical outputs: apps with the highest usage in each Department or Business Unit, concentration by team or entity, areas where usage is broad versus narrow.
3. License distribution by Organizational View
Business question: Where are licenses concentrated, and how unevenly are they distributed?
Typical outputs: license counts by Organizational View, apps with high concentration in a small number of groups, apps with wide distribution but low engagement where usage context is available.
4. Spend concentration by Organizational View
Business question: Which organizational slices account for the most software spend?
Typical outputs: spend by Department, Business Unit, Legal Entity, or Cost Center, apps driving the largest commercial footprint, concentration of spend across overlapping capabilities.
5. Renewal and contract exposure
Business question: Which contracts deserve near-term review, and where is the organizational impact highest?
Typical outputs: contracts renewing in the next 30, 60, or 90 days, renewal exposure by Organizational View, high-cost or high-overlap renewals that should be reviewed manually.
6. Functional overlap candidates
Business question: Which apps appear to serve similar business functions and may deserve consolidation review?
Typical outputs: app groups sharing the same Functional Capability Path, slices where multiple comparable tools are used, candidates with concentrated spend or limited distribution.
Live artifact design
When generating the Cowork interactive artifact, the dashboard should render all six sections using live Torii data fetched via window.cowork.callMcpTool.
The primary MCP calls for any dashboard implementation are:
- list_apps: portfolio overview, usage, spend, and license signals. The expenses and expensesLast30Days fields are returned in cents; divide by 100 for USD display.
- list_contracts: contract status, renewal dates, and annual value.
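A sketch of shaping these two calls into dashboard inputs. The callTool wrapper stands in for the surface-specific MCP invocation (window.cowork.callMcpTool on Cowork, an MCP client elsewhere), and the response shapes beyond the cents-denominated spend fields are assumptions:

```js
// Convert the cents-denominated spend fields to USD for display.
function toUsd(cents) {
  return (cents ?? 0) / 100;
}

async function fetchDashboardData(callTool) {
  // Read-only calls only: list apps and contracts, never write back.
  const apps = await callTool("list_apps", {});
  const contracts = await callTool("list_contracts", {});

  const appsWithUsd = apps.map((app) => ({
    ...app,
    annualSpendUsd: toUsd(app.expenses),               // cents, per Torii MCP
    last30DaysSpendUsd: toUsd(app.expensesLast30Days),  // cents, per Torii MCP
  }));

  return { apps: appsWithUsd, contracts };
}
```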
A complete dashboard should cover:
- A summary stats row (total apps, total spend, active users, renewals due, review queue count).
- A top-10 spend breakdown.
- A license utilization distribution.
- Renewal exposure buckets (≤30 days, 31–60 days, 61–90 days; see the bucketing sketch after this list).
- A filterable app portfolio view with color-coded recommendation labels.
- A read-only decision queue.
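The renewal exposure buckets can be computed from the contract list with simple date math. A sketch, assuming each contract carries a renewal or end date (the field name below is a placeholder):

```js
// Bucket contracts by days until renewal: <=30, 31-60, 61-90.
function bucketRenewals(contracts, renewalDateField = "endDate") {
  const now = Date.now();
  const msPerDay = 24 * 60 * 60 * 1000;
  const buckets = { within30: [], days31to60: [], days61to90: [] };

  for (const contract of contracts) {
    const renewal = new Date(contract[renewalDateField]).getTime();
    if (Number.isNaN(renewal)) continue; // no usable renewal date: exclude and report
    const daysOut = Math.ceil((renewal - now) / msPerDay);
    if (daysOut < 0) continue; // already past renewal; surface separately if needed

    if (daysOut <= 30) buckets.within30.push(contract);
    else if (daysOut <= 60) buckets.days31to60.push(contract);
    else if (daysOut <= 90) buckets.days61to90.push(contract);
  }

  return buckets;
}
```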
For the Cowork live artifact: See assets/react-dashboard-template.jsx in the Tier 1 base skill package for the reference implementation. The UUID configuration block at the top of the file must be set to the customer's Torii MCP server UUID before the artifact loads live data.
For other surfaces: The same sections apply but are rendered as conversational output, markdown, or HTML files depending on your deployment. The data model and section structure are identical.
Decision queue design
The dashboard should end with a decision queue rather than a forced recommendation.
Each queue item should include:
- App name
- Functional capability
- Review signal (the recommendation label)
- Spend
- User count
- License utilization
- Owner
Queue entry labels
- Candidate for retirement
- Candidate for consolidation
- Do not expand
- Evaluate further
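A queue entry can be carried as a plain record combining the fields and labels above. The field names are illustrative, and spend is shown in USD after the cents conversion noted earlier:

```js
const REVIEW_SIGNALS = [
  "Candidate for retirement",
  "Candidate for consolidation",
  "Do not expand",
  "Evaluate further",
];

// Illustrative decision-queue entry; field names are the dashboard's own.
const exampleQueueItem = {
  appName: "Example Project Tracker",
  functionalCapability: "Ticketing",
  reviewSignal: "Candidate for consolidation", // one of REVIEW_SIGNALS
  annualSpendUsd: 42000,
  userCount: 35,
  licenseUtilization: 0.54, // licenses in use / licenses purchased
  owner: "IT Operations",
};
```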
Recommendation model
Recommendations should be explainable and advisory.
Use factors like: overlap strength, usage concentration, license utilization, spend significance, renewal timing, governance readiness, and data confidence.
Scoring language
- High review priority: strong overlap, meaningful spend, near-term renewal, or weak governance.
- Medium review priority: moderate overlap or concentration, but incomplete commercial context.
- Low review priority: weak overlap signal or limited supporting evidence.
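A sketch of mapping those factors to the three priority levels; the factor names and the or-logic are illustrative choices, not a defined Torii scoring model:

```js
// Advisory-only priority assignment. Each factor is a boolean signal
// derived from dashboard data; the combination logic is illustrative.
function reviewPriority(signals) {
  const {
    strongOverlap,
    meaningfulSpend,
    nearTermRenewal,
    weakGovernance,
    moderateOverlap,
    incompleteCommercialContext,
  } = signals;

  if (strongOverlap || meaningfulSpend || nearTermRenewal || weakGovernance) {
    return "High review priority";
  }
  if (moderateOverlap || incompleteCommercialContext) {
    return "Medium review priority";
  }
  return "Low review priority";
}
```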
Do not present recommendations as final actions. The dashboard surfaces signals — it does not take action.
Confidence and data quality handling
A strong dashboard admits uncertainty.
Common issues to handle
- Missing Organizational View values
- Renamed or inconsistent fields
- Partial usage coverage
- Incomplete spend or contract linkage
- App classification ambiguity
Required behavior
If a section depends on a field that is missing or no longer stable:
- Pause that section.
- Explain what is missing.
- Ask for a replacement mapping or manual review.
- Do not silently guess.
Good example
The Legal Entity drilldown is incomplete because the mapped field is missing for part of the population. The renewal section below excludes records without a mapped Legal Entity.
Bad example
Inventing a grouping or silently collapsing unknown values into a confident summary.
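A minimal sketch of the "pause and explain" behavior for a partially mapped Organizational View field; the coverage threshold and field-access pattern are illustrative:

```js
// Check coverage of the mapped Organizational View field before rendering
// a drilldown section; report what is excluded instead of guessing.
function checkOrgViewCoverage(apps, orgViewField, minCoverage = 0.9) {
  const mapped = apps.filter(
    (app) => app[orgViewField] != null && app[orgViewField] !== ""
  );
  const coverage = apps.length ? mapped.length / apps.length : 0;
  const excludedCount = apps.length - mapped.length;

  return {
    renderSection: coverage >= minCoverage,
    coverage,
    excludedCount,
    note:
      coverage >= minCoverage
        ? null
        : `The ${orgViewField} drilldown is incomplete: ${excludedCount} ` +
          `records have no mapped value and are excluded below.`,
  };
}
```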
When a simple surface is enough
Claude.ai web chat or a Project is enough when you need:
- A fast deployment path
- Interactive natural-language analysis
- Low setup overhead
- A human-in-the-loop dashboard workflow
When to consider more advanced packaging
Consider Cowork, Claude Code, or the API when you need:
- A live visual artifact that persists and refreshes (Cowork)
- Automated recurring refresh with file outputs (Claude Code)
- Integration with other systems or production-grade scheduling (API)
- Stricter behavioral consistency across a larger user base (Projects or Cowork)
Final checklist
Before rolling this out, confirm:
- Claude is connected to Torii through MCP
- The dashboard is explicitly read-only
- One Organizational View is approved
- Dashboard sections are defined clearly
- Recommendation language is advisory, not automated
- Field drift or missing data is handled transparently
- The chosen output format matches the deployment surface (conversational, file, or live artifact)
- The Torii MCP server UUID is configured if you are using the Cowork interactive artifact
- Business users have prompt examples they can reuse