How to evaluate a SAM vendor: questions to ask
Choosing a Software Asset Management (SAM) platform is a high-leverage decision. The right platform pays for itself many times over through licence optimisation, audit defence and procurement visibility; the wrong one consumes budget, effort, and credibility without moving the dial. Most SAM selection projects fail not because of obvious gaps but because the evaluation missed the awkward questions.
This guide is a buyer's framework: the categories that matter, the specific questions to put in front of each candidate, and the red flags that signal a platform will struggle to deliver in your environment.
Quick answer
A serious SAM evaluation covers nine question areas: discovery coverage (hardware, software, SaaS, cloud, AI, remote), publisher depth (Microsoft, Oracle, IBM, SAP, Adobe and the rest of the high-audit-risk top tier), licence-position maths (ELP across on-prem, SaaS and cloud / BYOL), audit defence (Oracle LMS, SAP, IBM ILMT, Microsoft), integration (ITSM / CMDB / IdP / HR / procurement / FinOps), governance and AI (shadow IT, shadow AI, embedded AI, EU AI Act readiness), operating model and TCO (deployment, admin overhead, total cost over 5 years), vendor credentials and references, and contractual terms. Skip any of these and you'll be surprised post-signature.
Before the RFP — set the evaluation criteria
Before shortlisting anything, write down:
In-scope asset classes — on-prem software, SaaS, cloud / BYOL, mobile, AI, datacenter / Oracle-SAP-IBM, open-source
Priority outcomes — audit defence, licence optimisation savings target, risk reduction, consolidation from multiple point tools, audit trail for governance
Environmental footprint — number of users, devices, data-centre servers, virtual estate (VMware / Hyper-V / Nutanix / KVM), SaaS applications, cloud footprint (AWS / Azure / GCP / OCI / Kubernetes), IaaS vs PaaS
Mandatory publishers — whose licensing must be exact, not approximate (Oracle Database, SAP, IBM, Microsoft EA)
Integration constraints — ITSM platform (ServiceNow / JSM / ManageEngine / TopDesk / BMC), CMDB, identity provider, HR, procurement, FinOps tooling
Timeline — when the first audit / renewal / compliance date drives the programme
Without these, every vendor looks equivalent in a demo.
Question category 1 — Discovery coverage
Discovery is the data foundation. A SAM platform that can't see your estate can't manage it. Ask every candidate:
How do you discover installed software on Windows, macOS and Linux servers and endpoints — agent, agentless, or both?
How do you discover virtual estate (VMware, Hyper-V, Nutanix, KVM, Proxmox)? Do you see VM-host relationships for soft-partitioning licensing?
How do you discover software on offline / air-gapped / disconnected endpoints?
Can you discover remote / home-working / field-based devices without requiring VPN?
How do you discover SaaS applications — IdP integration, browser / endpoint signal, deep API connector, or all three?
How do you discover cloud workloads and PaaS services (AWS RDS, Azure SQL, BigQuery, Redshift, ElastiCache, Kubernetes workloads)?
What's the catalogue size? How many applications does it recognise out of the box?
How often is the catalogue updated, and who maintains it?
Can you discover embedded AI (Microsoft Copilot, Salesforce Einstein, Notion AI, Now Assist, Atlassian Intelligence, Adobe Firefly) as distinct entitlements?
How do you handle unknown / unclassified software? What's the effort to add it?
Red flags: "just install an agent", single-method discovery (SSO-only for SaaS), static catalogues, no offline endpoint support, no virtualisation awareness.
Question category 2 — Publisher depth
Every SAM platform handles the easy publishers. The difference is the hard ones — the high-audit-risk vendors where a general-purpose engine breaks down:
Oracle
Do you have formal Oracle recognition? (Oracle Certified Partner status, Oracle LMS engagement history)
How do you calculate Processor and Named User Plus licensing?
How do you apply the Oracle core factor table correctly, including for processor families Oracle doesn't publish a factor for?
How do you handle soft-partitioning environments (VMware vSphere, Hyper-V, Nutanix) under Oracle's hard-partitioning policy — including the full physical cluster / vMotion rules?
How do you track Oracle Database option and management pack usage (Diagnostic Pack, Tuning Pack, Partitioning, Advanced Security, RAC, Active Data Guard)?
Do you support Oracle Cloud BYOL in Authorized Cloud Environments (OCI, AWS, Azure)?
How do you handle Oracle ULA management and certification?
What's your track record in audit defence with Oracle customers?
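To make the Processor / Named User Plus arithmetic concrete, here is a minimal sketch of the calculation a vendor should be able to show its working for. The core factors and the 25-NUP-per-processor Enterprise Edition minimum follow Oracle's published rules, but the processor-family keys are illustrative, and on soft-partitioned hosts (e.g. VMware) Oracle's policy requires counting the whole physical cluster's cores, not the VM's.

```python
import math

# Illustrative Oracle EE Processor / NUP maths. Core factors come from
# Oracle's core factor table (0.5 for modern Intel Xeon is typical);
# the family keys here are examples, not an exhaustive mapping.
CORE_FACTOR = {"intel_xeon": 0.5, "ibm_power9": 1.0, "sparc_m8": 0.5}
NUP_MIN_PER_PROC = 25  # EE minimum: 25 Named User Plus per processor

def processor_licences(physical_cores: int, family: str) -> int:
    # Licences = physical cores x core factor, rounded UP.
    # For soft-partitioned estates, physical_cores must be the
    # whole cluster's cores under Oracle's partitioning policy.
    return math.ceil(physical_cores * CORE_FACTOR[family])

def nup_required(physical_cores: int, family: str, actual_users: int) -> int:
    # NUP must cover the larger of actual users and the minimum.
    procs = processor_licences(physical_cores, family)
    return max(actual_users, procs * NUP_MIN_PER_PROC)

# A 2-socket, 32-core Intel host: 32 x 0.5 = 16 Processor licences.
print(processor_licences(32, "intel_xeon"))   # 16
print(nup_required(32, "intel_xeon", 120))    # max(120, 16 x 25) = 400
```

A platform that cannot reproduce this arithmetic per host, with the cluster-wide rule applied, is approximating exactly where Oracle audits are exact.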
SAP
Do you support the current SAP named-authorisation licensing categories (Professional, Functional, Productivity, Developer, etc.)?
How do you measure and manage SAP Digital Access / indirect use?
How do you handle SAP Engines where consumption is driven by business metrics (orders, documents, revenue)?
How do you support S/4HANA migration and the FUE (Full Use Equivalent) licensing model?
IBM
Do you handle IBM Processor Value Unit (PVU) licensing correctly, including the per-processor-family PVU table?
Do you support IBM sub-capacity licensing and the ILMT (IBM License Metric Tool) requirements?
How do you handle IBM bundling and flexibility within product bundles?
How do you manage IBM Cloud Paks (VPC metric)?
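The full-capacity vs sub-capacity distinction is worth seeing as arithmetic. A hedged sketch follows — PVU-per-core values vary by processor family under IBM's PVU table (70 per core is typical for x86), so treat the figures as examples, and remember that sub-capacity entitlement also depends on ILMT being deployed and reporting.

```python
# Illustrative IBM PVU maths. PVU-per-core values are examples drawn
# from IBM's PVU table; the real table is per processor family/model.
PVU_PER_CORE = {"x86": 70, "power9": 100}

def full_capacity_pvu(physical_cores: int, family: str) -> int:
    # Without ILMT, IBM can require licensing the full physical host.
    return physical_cores * PVU_PER_CORE[family]

def sub_capacity_pvu(vm_vcpus: int, physical_cores: int, family: str) -> int:
    # Sub-capacity: license the VM's vCPUs, capped at the host's full
    # physical capacity. Valid only with ILMT reporting in place.
    cores = min(vm_vcpus, physical_cores)
    return cores * PVU_PER_CORE[family]

# 8-vCPU VM on a 32-core x86 host: 8 x 70 = 560 PVUs, vs 2,240 at
# full capacity -- the gap ILMT compliance protects.
print(sub_capacity_pvu(8, 32, "x86"))   # 560
print(full_capacity_pvu(32, "x86"))     # 2240
```

The gap between the two numbers is why "vague on ILMT" is a red flag: lose sub-capacity eligibility and the exposure multiplies.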
Microsoft
How do you handle Windows Server Core / CAL / Datacenter licensing across physical and virtual estate?
How do you manage SQL Server Core vs Server + CAL? Licence Mobility? Failover Rights?
How do you handle Microsoft Enterprise Agreement true-up and Software Assurance benefits?
How do you track Microsoft 365 SKU mix (E3 vs E5 vs F3 vs Business Premium) and Copilot for M365 as a distinct entitlement?
Do you support Azure BYOL / Azure Hybrid Benefit?
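For a sense of what "handling core licensing correctly" means here, the following is an illustrative sketch of Windows Server Standard core-licence stacking — the 8-core-per-processor and 16-core-per-server minimums, and the two-VMs-per-full-set-of-licences rule. It simplifies real EA terms (Datacenter, SA, licence mobility are all out of scope).

```python
import math

# Illustrative Windows Server core licensing: minimum 8 cores per
# processor and 16 per server. Standard edition grants rights to 2 VMs
# per full set of core licences; Datacenter covers unlimited VMs.
MIN_PER_PROC, MIN_PER_SERVER = 8, 16

def billable_cores(sockets: int, cores_per_socket: int) -> int:
    per_proc = max(cores_per_socket, MIN_PER_PROC)
    return max(sockets * per_proc, MIN_PER_SERVER)

def standard_core_licences(sockets: int, cores_per_socket: int, vms: int) -> int:
    # Standard: stack one full set of core licences per 2 VMs hosted.
    base = billable_cores(sockets, cores_per_socket)
    sets = max(1, math.ceil(vms / 2))
    return base * sets

# 2 sockets x 10 cores running 6 VMs: 20 cores x 3 sets = 60 licences.
print(standard_core_licences(2, 10, 6))   # 60
```

A platform that only counts installed instances, without the VM-stacking and minimums logic, will understate the Standard position and miss the Standard-vs-Datacenter crossover point.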
Adobe, Autodesk, VMware (Broadcom), Citrix, Red Hat, Salesforce
Specific publisher support matters here — each has its own licensing model and its own discovery requirements.
Red flags: generic answers to Oracle / SAP / IBM questions, no named partner credentials, vague on ILMT, no sub-capacity support, no licence mobility handling, no Copilot-specific SKU tracking.
Question category 3 — Licence position and ELP
The Effective Licence Position (ELP) is the whole point. Ask:
How is the ELP calculated — per publisher, per SKU, per metric?
How do you handle compound licensing rules (cluster-wide licensing for Oracle, per-device vs per-user rules, Microsoft licence mobility)?
How often is the ELP refreshed? Continuous, scheduled, or manual?
Does the ELP cover on-prem, SaaS and cloud / BYOL together or in separate silos?
How do you handle negative ELPs (non-compliance) vs positive ELPs (over-licensed waste)?
Can you attribute licence consumption to business unit / cost centre?
Can you run scenarios: "what happens if we upgrade these users / migrate this workload / add this SaaS tier"?
Red flags: annual-only ELP, separate models for SaaS / cloud / on-prem, vendor-claimed ELP with no audit trail showing how it was derived.
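At its simplest, an ELP is entitlements minus consumption per (publisher, metric); every question above is really about how trustworthy each side of that subtraction is. A minimal sketch, with illustrative field names and figures:

```python
from collections import defaultdict

# Minimal Effective Licence Position: entitlements minus consumption
# per (publisher, metric). Negative = compliance gap, positive =
# over-licensed waste. All records below are illustrative.
entitlements = [
    {"publisher": "Oracle", "metric": "Processor", "qty": 40},
    {"publisher": "Microsoft", "metric": "M365 E5", "qty": 500},
]
consumption = [
    {"publisher": "Oracle", "metric": "Processor", "qty": 48},
    {"publisher": "Microsoft", "metric": "M365 E5", "qty": 430},
]

def effective_licence_position(entitlements, consumption):
    position = defaultdict(int)
    for e in entitlements:
        position[(e["publisher"], e["metric"])] += e["qty"]
    for c in consumption:
        position[(c["publisher"], c["metric"])] -= c["qty"]
    return dict(position)

elp = effective_licence_position(entitlements, consumption)
print(elp)  # {('Oracle', 'Processor'): -8, ('Microsoft', 'M365 E5'): 70}
```

The hard part is not the subtraction but the inputs: the consumption side must already encode the compound rules (cluster-wide counting, mobility, per-metric conversions), which is why an ELP with no audit trail of its derivation is a red flag.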
Question category 4 — Audit defence
Half the ROI of SAM is audit risk reduction. Evaluation must cover:
Where have you been engaged in active audit defence — which publishers, which customers, what outcomes?
How do you prepare an audit response — what artefacts does the platform generate?
How do you maintain historical data — can you show what the estate looked like 12 / 24 months ago for a retrospective audit?
How do you track and prove licence mobility / reallocation events over time?
How do you support Oracle LMS, IBM ILMT compliance, SAP measurement, Microsoft Enterprise Agreement true-up?
What's the audit-trail retention — for user actions in the platform, for discovery data, for licence changes?
Red flags: "our platform does discovery, audit's a separate service", no historical data retention, no publisher-specific audit workflow support.
Question category 5 — Integration
SAM is never an island. Ask:
Which ITSM platforms are supported out of the box (ServiceNow, Jira Service Management, BMC Helix, TopDesk, ManageEngine, Ivanti, Freshservice)? Bidirectional or one-way?
How do you integrate with CMDBs — does your data flow in as an authoritative source, and can you consume from an existing CMDB too?
Which identity providers integrate (Entra ID, Okta, Ping, OneLogin, Google Workspace, ADFS)?
HR integration (Workday, BambooHR, SAP SuccessFactors) for joiner / mover / leaver events?
Procurement integration (Coupa, Ariba, SAP, Oracle Procurement Cloud)?
FinOps / cloud cost platforms (Apptio Cloudability, Vantage, Finout, CloudHealth, native cloud cost)?
Discovery integration — can you consume from existing discovery tools (SCCM / ConfigMgr, BigFix, Jamf, Tanium, Lansweeper, ServiceNow Discovery) where already deployed?
API — REST, webhooks, event-driven? Rate limits?
Red flags: manual CSV import as primary integration, one-way-only ITSM, no IdP integration, proprietary API formats.
Question category 6 — Governance, SaaS, cloud, AI
Modern SAM extends well beyond on-prem software:
Shadow IT and Shadow SaaS discovery — beyond SSO, do you catch what's bought on a corporate card?
Shadow AI / embedded AI inventory — EU AI Act readiness, acceptable-use policy support
Policy framework — can you express "this app is sanctioned for these users / departments / regions" rules and report compliance?
SaaS offboarding automation — leaver-triggered reclaim across connected tenants
Cloud cost and FinOps integration — how does SAM data feed FinOps and vice versa?
AI governance — tracking AI tools in use, mapping to EU AI Act risk tiers, supporting NIST AI RMF / ISO/IEC 42001
Red flags: no SaaS coverage beyond M365 / Salesforce, no shadow-AI discovery, no policy engine, "roadmap" for AI governance with no shippable capability yet.
Question category 7 — Operating model and TCO
Real cost and real effort over a five-year horizon:
Deployment model — SaaS / managed / on-prem / private cloud? Is there parity across models?
Time-to-first-ELP from signature?
Admin overhead — how many FTEs run the platform day-to-day in a mid-size deployment?
Professional services required for deployment, for ongoing operations, for publisher recognition changes?
Training / certification — for admins, for analysts, for end users?
Licensing model of the SAM platform itself — per endpoint, per employee, tiered? Year-on-year uplift terms?
Total five-year TCO — licence cost + implementation + year-on-year PS + training + any publisher-specific add-on cost
Support tiers — who answers when things break? 24/7? SLA?
Red flags: heavy professional-services requirement disproportionate to platform size, opaque TCO, escalating year-on-year pricing, capped support.
Question category 8 — Vendor credentials and references
Ask what's actually independently verifiable:
Analyst coverage — Gartner Magic Quadrant and Peer Insights, Forrester Wave, IDC MarketScape — inclusion, positioning, recency
Customer recommendation rates (where published)
Named customer references at similar scale and similar publisher footprint
Publisher partnership / certification status (Oracle Certified Partner, FinOps Certified Platform, etc.)
Years in the specific market — SAM vendors that have survived multiple Oracle / SAP / IBM model changes are a very different proposition from vendors only in-market since cloud-SaaS-AI became the centre of gravity
Financial health, ownership structure, recent M&A
Red flags: references only from small or very recent customers, recent rebranding with limited market history, no publisher certifications in your high-risk vendors.
Question category 9 — Contracts, data and exit
The awkward but important ones:
Data ownership — you, always. Contract must be explicit.
Data portability — can you export your entire configuration, discovery history and ELP data in a machine-readable format at any time?
Data residency — where is the data stored? Regional options?
Contract term and renewal uplift caps — what's the cost in year 4?
Termination — mid-term rights, data return obligations, data deletion on exit?
Price protection on expansion — what if your endpoint / user count grows 30%?
Subcontractor disclosure — who else has access to your data?
Red flags: vendor-owned data clauses, no export commitment, no renewal uplift caps, sub-processor vagueness.
The 48-hour reality check
After the RFP responses are in, pick three vendors for deeper evaluation and run a 48-hour reality check:
Hands-on time on a representative environment — one Oracle cluster, one VMware vSphere cluster, one Microsoft domain with mixed server SKUs, a small SaaS inventory, a representative cloud account. See the data flow in.
Reference check — two customer calls, one similar size, one bigger. Ask specifically: what was hard, what was easier than expected, what would you do differently.
Publisher-specific walkthrough — ask the vendor to walk you through one complete ELP for your hardest publisher (usually Oracle or SAP) against a sample of your real data.
Vendors that can't or won't do these three things within 48 hours are telling you something.
Scoring framework
A simple 1–5 rubric across the nine question areas, weighted for your context:
| Category | Default weight |
|---|---|
| Discovery coverage | 15% |
| Publisher depth | 20% (higher if Oracle / SAP / IBM heavy) |
| Licence position / ELP | 15% |
| Audit defence | 10% |
| Integration | 10% |
| Governance / SaaS / cloud / AI | 10% |
| Operating model / TCO | 10% |
| Vendor credentials | 5% |
| Contracts / data / exit | 5% |
Adjust weights before scoring starts, not after. The weighting is the conversation; the score follows.
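The rubric reduces to a one-function weighted score — weights mirror the table above (and must sum to 1.0), scores are the team's 1–5 ratings. A sketch, with a hypothetical vendor's ratings as an example:

```python
# Weighted-score sketch for the nine-category rubric. Weights mirror
# the default table; adjust them BEFORE scoring starts.
WEIGHTS = {
    "discovery": 0.15, "publisher_depth": 0.20, "elp": 0.15,
    "audit_defence": 0.10, "integration": 0.10, "governance_ai": 0.10,
    "operating_tco": 0.10, "credentials": 0.05, "contracts": 0.05,
}

def weighted_score(scores: dict) -> float:
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 1"
    assert set(scores) == set(WEIGHTS), "score every category"
    return round(sum(WEIGHTS[c] * scores[c] for c in WEIGHTS), 2)

# Hypothetical vendor: strong on publishers and audit, weaker elsewhere.
vendor_a = {"discovery": 4, "publisher_depth": 5, "elp": 4,
            "audit_defence": 5, "integration": 3, "governance_ai": 3,
            "operating_tco": 4, "credentials": 4, "contracts": 3}
print(weighted_score(vendor_a))   # 4.05
```

Scoring each vendor with the same locked weights keeps the comparison honest; changing weights after the numbers are in is the quickest way to reverse-engineer a preferred outcome.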
Common evaluation pitfalls
| Pitfall | Why it bites |
|---|---|
| Demo-driven decision | Looks great on rehearsed data; unrelated to your estate |
| Feature-checklist evaluation | Vendors tick everything; depth differences never surface |
| No hands-on time | Can't see data quality on a real environment |
| No publisher-specific deep-dive | Oracle / SAP / IBM depth varies 10x across vendors |
| No reference call at similar scale | Every vendor is "great for enterprise"; verify at your scale |
| Ignoring ongoing operating cost | Licence cost is a fraction of 5-year cost |
| No data portability clause | Vendor lock-in becomes the constraint, not the capability |
| Skipping the awkward questions | They reveal the exact places the platform will break |
About Certero
Certero delivers an enterprise-grade product family covering IT asset, software, SaaS, cloud, datacenter and AI management through CerteroX ITAM, CerteroX SAM, CerteroX SaaS Management, CerteroX Cloud Management, CerteroX Datacenter Management and CerteroX AI Management.
For SAM evaluations specifically, CerteroX SAM (with CerteroX Datacenter Management covering Oracle, IBM and SAP Applications) offers:
Oracle Certified Partner status and formal Oracle LMS engagement experience
Direct support for the high-audit-risk publishers — Oracle (Processor / NUP / VMware / options / ULA / Cloud BYOL), SAP (named authorisations / digital access / engines / S/4HANA FUE), IBM (PVU / ILMT / RVU / Cloud Paks), Microsoft (EA / Server / SA / Copilot)
3-method discovery (agent + agentless + import) across Windows / macOS / Linux / virtualisation / SaaS / cloud / AI, with remote / offline support
Continuous Effective Licence Position across on-prem, SaaS and cloud / BYOL
Integration with ITSM (ServiceNow, Jira Service Management, ManageEngine, TopDesk, BMC Helix, Ivanti), identity providers (Entra ID / Okta / Ping / OneLogin / Google), HR systems and cloud / FinOps platforms
97% "would recommend" rating, 4x Gartner Customers' Choice, #1 Gartner Peer Insights recognition, and a verified 38% average saving on cloud deployments
FinOps Certified Platform credential
Certero is one of a very small number of vendors delivering SAM, ITAM, SaaS Management, Cloud Management and AI Management at enterprise depth on the same product family — reducing the number of point tools that have to be integrated and operated.
Related reading:
What is Software Asset Management (SAM)? (pageId 181436474)
What is an Effective License Position (ELP)? (pageId 185041482)
What is Software Audit Defense? (pageId 184844453)
Why are Oracle, SAP, and IBM licensing so complex? (pageId 246808675)
What is the difference between SAM and ITAM? (pageId 246808616)
Software Asset Management FAQ (pageId 184844362)
FAQs
What are the most important questions to ask a SAM vendor?
Discovery coverage, publisher depth for the vendors that actually trigger audits (Oracle, SAP, IBM, Microsoft), how the Effective Licence Position is calculated, integration with your ITSM / CMDB / IdP, and five-year TCO. Everything else is secondary until those five are clearly answered.
How long should a SAM evaluation take?
Three to four months end-to-end is typical for a serious enterprise evaluation: one month to set criteria and shortlist, one month for RFP responses, one month for demos and hands-on, one month for references, negotiations and sign-off. Rushing it shows up two years later in lock-in and unmet expectations.
Should I use an analyst report (Gartner, Forrester, IDC) as the shortlist?
As one input, yes. As the only input, no. Analyst quadrants surface the established vendors but can miss newer or more specialised players. Peer Insights recommendations at similar scale and vertical are often more useful than the headline position.
How many vendors should I shortlist?
Three for the hands-on and reference phase is typical. More than five and the team burns out; fewer than three and you don't have a real comparison.
What's the single biggest red flag in a SAM vendor evaluation?
Inability or reluctance to show hands-on data from your own estate within 48 hours of asking. Every serious SAM platform can ingest a representative sample of your real inventory quickly; inability to do so points at either limited discovery capability or heavy services dependency.
How do I know a vendor really supports Oracle / SAP / IBM properly?
Ask for written detail on their sub-capacity / VMware / ULA / digital-access / PVU handling, plus a customer reference at similar scale who has been through an audit with that vendor's support. "We support Oracle" and "we defended an Oracle LMS audit as an Oracle Certified Partner" are different claims.
How much weight should I give to SaaS, cloud and AI coverage in a SAM evaluation?
Significant and growing. Even if the primary driver is on-prem audit defence today, enterprises are consolidating point tools — a SAM platform that can't extend to SaaS, cloud BYOL, and AI governance will become the constraint. Weight SaaS/cloud/AI coverage at 10–20% of the scoring, higher if the business SaaS footprint is already major.
How do I evaluate TCO fairly across vendors?
Build a 5-year model covering: initial licence cost + implementation + year-on-year subscription uplift + ongoing professional services + training + any publisher-specific add-ons + integration work. Ask each vendor for written pricing at 100%, 130% and 150% of your initial estimate to see uplift behaviour. Then add estimated internal FTE cost — which is often the largest line for heavy-services platforms.
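That model is mechanical enough to sketch. All figures below are placeholders, and the uplift is assumed to compound annually on the subscription only:

```python
# Five-year TCO sketch following the model described above.
# All inputs are placeholders; uplift compounds annually on the
# subscription, while PS and internal FTE cost recur flat.
def five_year_tco(annual_licence, uplift_pct, implementation,
                  annual_ps, annual_fte_cost, training, addons=0.0):
    licence_total = sum(
        annual_licence * (1 + uplift_pct / 100) ** year
        for year in range(5)
    )
    return round(
        licence_total + implementation + training + addons
        + 5 * (annual_ps + annual_fte_cost), 2
    )

# 100k/yr licence at 7% uplift, 80k implementation, 30k/yr PS,
# 0.5 FTE at 90k fully loaded (45k/yr), 15k training.
print(five_year_tco(100_000, 7, 80_000, 30_000, 45_000, 15_000))  # 1045073.9
```

Running the same function at 100%, 130% and 150% of the initial estate estimate makes each vendor's uplift and expansion behaviour directly comparable in one number.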
Is on-prem deployment still relevant for SAM?
For some customers yes — regulated industries, government, enterprises with strict data-residency or sovereignty rules. For most customers, SaaS deployment is now the default. Ask each vendor about deployment-model parity: do SaaS and on-prem customers get the same features, same release cadence, same integrations?
How do I evaluate discovery quality?
Point the vendor at a representative slice of your real environment — one VMware cluster, one Oracle RAC, a handful of Windows/Linux servers, a sample SaaS tenant, one cloud account, some remote endpoints — and inspect the resulting inventory. Discovery quality is obvious in the data; it's rarely obvious in the demo.
What's the right weight for vendor credentials vs technical fit?
Technical fit dominates — 70–80% of the evaluation weight. Vendor credentials matter (publisher partnerships, analyst recognition, customer references) but they're necessary, not sufficient. Platforms that look strong on paper but can't handle your estate end up shelfware.
Should I run a proof of concept / pilot?
If the shortlisted vendors cannot demonstrate on your real environment in the hands-on phase, yes. A time-boxed 4–6 week PoC on one publisher (usually Oracle, SAP or Microsoft) against real data is the clearest signal. If vendors can demonstrate quickly during the hands-on phase, a full PoC may not be necessary — the trade-off is time to first outcome.
How do I get internal stakeholders aligned on the decision?
Write the evaluation criteria down before you look at any vendors, weight them, get sign-off from procurement / IT / security / finance / the business owners of the biggest publisher risk, and only then start shortlisting. Most evaluations fail because different stakeholders are optimising for different outcomes — the criteria document forces the disagreement to surface early rather than after signature.
How do I compare a full SAM platform against best-of-breed point tools?
Work out the real cost of integration, data reconciliation, and duplicated discovery across the point-tool stack over 5 years. The headline licence cost often favours point tools; the true operating cost usually favours a single platform once you include the integration tax, the inconsistency tax (different tools, different answers), and the audit-preparation tax (pulling data together under pressure). A good evaluation models both and presents the comparison explicitly.