Comparison · 9 min read

15 Questions to Ask When Selecting a Reconciliation Vendor in India

Most vendor evaluation questionnaires for reconciliation software are generic SaaS checklists that miss the India-specific questions entirely. For Indian enterprises, 5 of the 15 questions on any evaluation scorecard must address TDS matching, GSTR-2B ITC reconciliation, NACH return code handling, data residency, and whether the vendor scopes the client's use case before configuring the engine.

Terra Insight Reconciliation Infrastructure

Content authored by practitioners with experience at Amazon India, Intuit QuickBooks, and the Tata Group.

Published 24 March 2026
Domain expertise: TDS Reconciliation, GST Input Credit, Platform Settlements, NACH Batch Matching, Bank Reconciliation, Form 26AS Matching, ERP Integrations, Enterprise Finance Ops

India-specific vendor selection questions should form the core of any serious evaluation of reconciliation software, but most procurement checklists that finance teams use are adapted from generic SaaS evaluation templates written for US or European contexts. They cover security, uptime, and integration, and miss the five India-specific questions that determine whether the platform will actually work for TDS net-of-gross matching, GSTR-2B ITC reconciliation, and NACH return code handling. This article is for CFOs and procurement heads in the final stages of vendor selection. It covers all 15 questions, explains why each matters, and identifies the 5 India-specific questions that most generic scorecards omit.

What Vendor Selection for Reconciliation Software Involves

Selecting reconciliation software for an Indian enterprise is a different evaluation process from selecting a general-purpose finance application. The reason is specificity: reconciliation software must match the exact transaction patterns, statutory deduction structures, and data source formats that your organisation produces. A platform that matches transactions correctly for a US enterprise may fail on Indian TDS net-of-gross receipts, GSTR-2B ITC eligibility conditions, and NACH return code classification, because these are India-specific requirements that a generic matching engine was not designed to handle.

The evaluation scorecard should therefore include both universal questions (matching engine architecture, security, deployment model) and India-specific questions (TDS section handling, GSTR-2B automation, NACH support, data residency, use case scoping). The 15 questions below cover both categories, with the India-specific questions identified explicitly.

Why Vendor Evaluation Is Harder Than It Looks

The Generic Scorecard Problem

Finance teams sometimes adapt vendor scorecards from procurement templates or analyst reports written for North American or European contexts. These templates produce rigorous answers to questions that do not matter much (cloud vendor certification, uptime SLA, Salesforce integration) while missing the questions that do: does the platform handle TDS under multiple sections simultaneously? Does GSTR-2B matching update automatically when ITC eligibility rules change?

An evaluation that scores well on a generic template but fails to cover India compliance handling will select a vendor that works in a demo but underperforms on live Indian transaction data.

The Match Rate Claim Problem

Vendors frequently cite match rate improvements in marketing materials without specifying the data set, the transaction type, or the starting baseline. A claim of “90% match rate improvement” is meaningless without knowing whether it was measured on bank reconciliation (where exact matching is high), TDS (where net-of-gross creates structural mismatches), or GSTR-2B (where ITC eligibility rules add a classification layer). The evaluation process must require vendors to demonstrate match rates on a sample of the client’s actual transaction data — not a curated demo data set.

The Discovery vs. Self-Serve Configuration Problem

Organisations on generic platforms find that TDS net-of-gross matching requires manual override for multi-section deductions, because the platform’s generic tolerance configuration does not account for section-level deduction rates. This failure typically does not appear during the sales demonstration — it appears during the parallel run, after contract execution. The evaluation must assess whether the vendor scopes the client’s use case before configuring the engine, or whether they deliver a generic setup and expect the client to customise.

The 15 Questions

Question 1 — India Compliance: TDS Net-of-Gross Matching

“How does your platform handle TDS net-of-gross receipt matching across multiple deduction sections?”

This question tests whether the vendor has India-specific TDS matching logic. The correct answer describes how section-level deduction rates (194C at 1–2%, 194J at 10%, 194H at 5%) are applied separately to each invoice line before the net receipt is matched to the bank credit. A vendor who describes a generic tolerance match without mentioning section-level deduction is signalling that their TDS handling is not purpose-built.
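The section-level arithmetic the correct answer describes can be sketched in a few lines. This is a hypothetical worked example, not any vendor's implementation; the section rates follow the article (194C at 2%, 194J at 10%, 194H at 5%) and the invoice amounts are illustrative:

```python
# Section-level TDS rates as given in the article (illustrative; 194C uses
# the 2% end of its 1-2% range here).
TDS_RATES = {"194C": 0.02, "194J": 0.10, "194H": 0.05}

def expected_net_receipt(invoice_lines):
    """Sum each invoice line net of its own section's TDS rate."""
    return sum(amt * (1 - TDS_RATES[section]) for amt, section in invoice_lines)

def matches_bank_credit(invoice_lines, bank_credit, tolerance=1.0):
    """Match only when the credit equals the section-aware net, within a rupee."""
    return abs(expected_net_receipt(invoice_lines) - bank_credit) <= tolerance

# Same counterparty, two sections in one month:
# 1,00,000 under 194C (net 98,000) and 50,000 under 194J (net 45,000).
lines = [(100_000, "194C"), (50_000, "194J")]
print(matches_bank_credit(lines, 143_000))  # True: 98,000 + 45,000 = 1,43,000
```

A generic tolerance match anchored on the gross 1,50,000 would flag this same credit as an 7,000-rupee mismatch, which is exactly the failure mode the question is designed to surface.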

Question 2 — India Compliance: GSTR-2B ITC Reconciliation

“Does GSTR-2B ITC reconciliation run automatically against your purchase register, or does it require manual export and upload?”

The answer should describe automated ingestion of GSTR-2B data (JSON format from the GST portal) and matching against the purchase register with ITC eligibility classification. A vendor who describes a manual export-and-upload workflow is describing a partially automated process that still requires significant finance team involvement — and more importantly, one that will break when the Rule 36(4) ITC eligibility threshold changes, because manual processes do not update themselves.
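A minimal sketch of what automated matching against the purchase register could look like. The field names ("ctin", "inum", "txval") loosely echo the GSTR-2B JSON but are assumptions for illustration, as are the GSTIN values, not a specification of the portal format:

```python
def index_purchase_register(rows):
    """Key purchase-register rows by (supplier GSTIN, invoice number)."""
    return {(r["gstin"], r["invoice_no"]): r for r in rows}

def reconcile_2b(gstr2b_docs, register_rows, tolerance=1.0):
    """Classify each 2B document as matched, value mismatch, or missing."""
    index = index_purchase_register(register_rows)
    results = []
    for doc in gstr2b_docs:
        key = (doc["ctin"], doc["inum"])
        row = index.get(key)
        if row is None:
            results.append((key, "MISSING_IN_REGISTER"))
        elif abs(row["taxable_value"] - doc["txval"]) <= tolerance:
            results.append((key, "MATCHED"))
        else:
            results.append((key, "VALUE_MISMATCH"))
    return results

# Illustrative GSTINs and invoice numbers.
register = [{"gstin": "27AAAPL1234C1Z5", "invoice_no": "INV-101",
             "taxable_value": 50_000.0}]
docs = [{"ctin": "27AAAPL1234C1Z5", "inum": "INV-101", "txval": 50_000.0},
        {"ctin": "29BBBPL9999K1Z2", "inum": "INV-777", "txval": 12_000.0}]
print(reconcile_2b(docs, register))
```

The point of the sketch is the shape of the workflow: ingestion and classification run without a finance-team export step, so a rule change only requires updating the eligibility logic in one place.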

Question 3 — India Compliance: NACH Return Code Classification

“What NACH return codes does your platform classify natively, and how are unmatched returns routed to the collections team?”

NACH return codes (such as NACH00001 through NACH00030, covering reasons from insufficient funds to account closure) carry operational significance for lending and subscription businesses. A vendor with native NACH support should be able to name the return codes they classify and describe the routing workflow. A vendor who handles NACH as a generic debit reconciliation without return code classification is not suitable for NBFC or subscription billing use cases.
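The routing workflow a native-support vendor should describe can be sketched as a code-to-queue table. The return code values and reason names below are placeholders, not NPCI's published list; a real platform would carry the full official code set:

```python
# Illustrative routing table: return code -> (reason, destination queue).
# Codes and reasons here are placeholders, not the official NPCI list.
RETURN_ROUTING = {
    "01": ("INSUFFICIENT_FUNDS", "collections_retry_queue"),
    "02": ("ACCOUNT_CLOSED", "mandate_remediation_queue"),
    "03": ("MANDATE_CANCELLED", "mandate_remediation_queue"),
}

def route_return(return_code):
    """Map a return code to its queue; unknown codes go to manual review."""
    return RETURN_ROUTING.get(return_code, ("UNCLASSIFIED", "manual_review_queue"))

print(route_return("01"))  # insufficient funds -> collections retry
```

A vendor handling NACH as generic debit reconciliation effectively routes every return to the equivalent of the manual review queue, which is why return code coverage matters for NBFC and subscription use cases.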

Question 4 — Matching Engine Architecture

“What is your matching engine architecture — rule-based, signal-weighted, or both?”

A pure rule-based engine matches only when defined conditions are exactly met. For Indian transaction data where UTR references are partially present, narrations are truncated, and TDS creates net-amount differences, rule-based matching produces match rates in the 50–60% range. A signal-weighted engine assigns confidence scores to multiple signals — UTR, partial reference, counterparty, date — and matches transactions above a threshold. The combination of both (exact match first, then signal-weighted for residuals) is the architecture required for match rates above 80% on Indian enterprise data.
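A signal-weighted score can be illustrated in a few lines. The weights below follow the ones cited later in this article (UTR 0.40, partial reference 0.25, counterparty 0.15, date 0.10); the amount signal and the 0.60 threshold are assumptions added to round out the sketch:

```python
# Assumed signal weights; amount weight and threshold are illustrative.
WEIGHTS = {"utr": 0.40, "partial_ref": 0.25,
           "counterparty": 0.15, "date": 0.10, "amount": 0.10}

def match_confidence(signals):
    """signals maps signal name -> bool: did that signal agree?"""
    return sum(WEIGHTS[name] for name, hit in signals.items() if hit)

def is_match(signals, threshold=0.60):
    return match_confidence(signals) >= threshold

# UTR present but narration truncated and amount off (TDS net-of-gross):
# 0.40 + 0.15 + 0.10 = 0.65, which still clears the threshold.
print(is_match({"utr": True, "partial_ref": False,
                "counterparty": True, "date": True, "amount": False}))  # True
```

A pure rule-based engine would reject this transaction outright because the reference and amount rules fail, which is where the 50-60% ceiling on Indian data comes from.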

Question 5 — Match Rate Performance

“What is your baseline-to-projected match rate improvement on a representative sample of our data?”

The vendor should be willing to run a proof-of-concept matching exercise on a sample of your actual transaction data — not a demo data set — and report the match rate before and after the full pipeline. A platform with patented matching algorithms running a 4-pass pipeline (exact match, signal-weighted, tolerance, many-to-many aggregation) should produce demonstrable improvement from a ~51% baseline to 85%+ on representative Indian transaction data. A vendor who declines to run on actual data, or who only demonstrates on curated examples, is not demonstrating the performance that matters.
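The residual-consuming structure of a multi-pass pipeline, where each pass only sees what earlier passes failed to match, can be sketched as follows. This is an orchestration skeleton under simplifying assumptions, with only two toy passes (exact and tolerance) standing in for the full four:

```python
def run_pipeline(ledger, bank, passes):
    """Each pass consumes the previous pass's residuals."""
    matched, residual = [], list(ledger)
    for name, pass_fn in passes:
        hits, residual = pass_fn(residual, bank)
        matched.extend((name, h) for h in hits)
    return matched, residual

def exact_pass(residual, bank):
    """Pass 1 stand-in: exact (amount, reference) equality."""
    keys = {(b["amount"], b["ref"]) for b in bank}
    hits = [r for r in residual if (r["amount"], r["ref"]) in keys]
    return hits, [r for r in residual if (r["amount"], r["ref"]) not in keys]

def tolerance_pass(residual, bank):
    """Pass 3 stand-in: amount within a 1-rupee tolerance of any bank entry."""
    def near(r):
        return any(abs(r["amount"] - b["amount"]) <= 1 for b in bank)
    return [r for r in residual if near(r)], [r for r in residual if not near(r)]

ledger = [{"amount": 100.0, "ref": "UTR100"},
          {"amount": 200.5, "ref": "UTR200"},
          {"amount": 999.0, "ref": "UTR999"}]
bank = [{"amount": 100.0, "ref": "UTR100"}, {"amount": 200.0, "ref": "TRUNC"}]
matched, residual = run_pipeline(ledger, bank,
                                 [("exact", exact_pass), ("tolerance", tolerance_pass)])
print(len(matched), len(residual))  # 2 matched, 1 residual
```

A proof-of-concept run reports exactly this before-and-after: the match count after pass 1 alone is the baseline, and the count after the full pipeline is the projected rate.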

Question 6 — Deployment Model

“Is your deployment configuration-only, or does it require custom development for each client?”

A configuration-only deployment adjusts matching rules, signal weights, tolerance thresholds, and workflow routing within the platform’s existing architecture. This is the model that produces 2–4 week deployment timelines and maintains India format update agility. A deployment that requires custom development for each client’s ERP format or matching rules introduces development cycles, code review, and release management — which extends the timeline and creates ongoing maintenance obligations.

Question 7 — Deployment Timeline

“What is your typical deployment timeline from contract to go-live?”

The answer should be specific: a single-stream deployment (bank or TDS) in 2–4 weeks; a multi-stream deployment in 6–8 weeks. Vague answers (“depends on complexity”) without a reference range suggest the vendor’s deployment process is not systematised. Ask for a reference client whose deployment timeline and stream count is closest to your scope.

Question 8 — India Compliance: Data Residency

“Where is our data hosted — India-only, or can it be routed outside India?”

For RBI-regulated entities, India data residency is a hard requirement. For all Indian enterprises under the DPDP Act 2023, cross-border financial data transfer requires documented compliance measures. The vendor should confirm India-only hosting for the reconciliation application and its data storage layer. Ask specifically about backup storage and disaster recovery replication — these are sometimes hosted in different regions than the primary environment.

Question 9 — Security Certifications

“What security certifications do you hold, and do they cover the reconciliation application specifically?”

ISO 27001:2022 certification is the minimum standard for enterprise financial data processing. Verify that the scope covers the reconciliation application and its hosting infrastructure — not just the vendor’s corporate environment. The certification document and scope statement should be available for review under NDA.

Question 10 — Audit Trail

“How is every match decision and manual override logged, and can a statutory auditor query the trail directly?”

The audit trail must be user-attributed (who matched or overrode each entry), timestamped, tamper-evident, and queryable without vendor assistance. This is the evidentiary standard for income tax proceedings and statutory company audit. A vendor who describes application-level logging without a queryable interface for external auditors does not meet this standard.

Question 11 — ERP Connectors

“What ERP connectors do you support natively — SAP, Oracle, Tally, Busy?”

Native connectors for SAP Business One/S/4HANA, Oracle NetSuite, Tally Prime, and Busy Accounting cover the four highest-penetration ERP systems among mid-size Indian enterprises. Native connectors avoid the custom transformation step that most commonly delays implementations. Also verify native support for Indian bank statement formats (MT940 from HDFC, ICICI, SBI, Axis, Kotak) and payment gateway settlement files (Razorpay, Cashfree, PayU).

Question 12 — Variance Classification

“How are variances classified — do you produce variance codes or just flag exceptions?”

A platform that flags exceptions without classifying them transfers the classification work to the finance team. A platform that produces variance codes — FEE_DEDUCTION, TAX_DEDUCTION, ROUNDING, PARTIAL_PAYMENT, PENALTY_OR_INTEREST, UNEXPLAINED — routes each exception to the correct resolution workflow automatically. Variance code coverage is a direct productivity multiplier: the exception queue processed by a team using a coded system is resolved faster than one using an unclassified list.
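The classification step that separates a coded system from an exception flag can be sketched as an ordered rule check. The thresholds and check ordering below are illustrative assumptions, using the article's own variance codes:

```python
def classify_variance(invoice_amount, received_amount, known_fee=None):
    """Map an invoice-vs-receipt difference to a variance code.

    Thresholds are illustrative; check order matters (a 10% TDS-pattern
    shortfall is coded TAX_DEDUCTION before the generic partial-payment rule).
    """
    diff = invoice_amount - received_amount
    if diff == 0:
        return "NONE"
    if known_fee is not None and abs(diff - known_fee) < 0.01:
        return "FEE_DEDUCTION"
    if abs(diff) <= 1:
        return "ROUNDING"
    if abs(diff - invoice_amount * 0.10) < 1:  # 10% pattern, e.g. 194J TDS
        return "TAX_DEDUCTION"
    if 0 < received_amount < invoice_amount:
        return "PARTIAL_PAYMENT"
    if received_amount > invoice_amount:
        return "PENALTY_OR_INTEREST"
    return "UNEXPLAINED"

print(classify_variance(100_000, 90_000))  # TAX_DEDUCTION
```

Each code then keys into a routing table (as in the NACH example above a human would build), so the finance team receives pre-sorted queues rather than a flat exception list.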

Question 13 — Exception Resolution SLA

“What is your SLA for exception resolution workflow response times?”

This question distinguishes between the matching engine’s performance (which the vendor controls) and the exception workflow (which the client team operates, with vendor tooling). The SLA should cover: time to present an exception in the queue after the matching run, time to escalate an unresolved exception to a supervisor queue, and availability of the dashboard during the monthly close window.

Question 14 — India Compliance: Use Case Scoping

“Do you scope the client’s use case before configuring the engine, or is it a self-serve setup?”

A vendor who conducts a discovery conversation to map your transaction types, matching rule requirements, and India compliance scope before configuring the engine will achieve higher match rates than one who delivers a generic setup. When a vendor conducts this scoping correctly, the configuration reflects the actual signal weights and tolerance thresholds that match your specific transaction patterns — not default values that approximate an average client. This question is the best single proxy for implementation quality.

Question 15 — Match Rate Commitment

“What is your match rate commitment, and is it contractual?”

The answer should be a range (typically 70–85% for Indian enterprise transaction data), based on a sample data review, expressed as a service level in the contract. A contractual match rate commitment is a signal of confidence in the engine’s performance. A vendor who declines to commit, or who frames the match rate as a marketing claim rather than a contractual obligation, is not backing their product with accountable performance terms.

Question Category Summary

Category | Question Numbers | Why It Matters for Indian Enterprises
India Compliance | 1, 2, 3, 8, 14 | TDS net-of-gross, GSTR-2B ITC eligibility, NACH return codes, data residency, and use case scoping are structurally absent from generic vendor scorecards; these 5 questions determine whether the platform works on Indian transaction data or merely appears to
Matching Engine | 4, 5, 12 | Signal-weighted architecture, performance on actual data, and variance code coverage determine the post-go-live match rate and the daily exception workload
Deployment | 6, 7, 11, 13 | Configuration-only model, timeline commitment, ERP connector coverage, and exception SLA determine whether the implementation completes on schedule and the team can operate independently
Security | 9, 10 | Certification scope and audit trail queryability are the two security dimensions most commonly under-evaluated in reconciliation software procurement
Commercial | 15 | A contractual match rate commitment is the single most direct indicator of vendor confidence; it converts a marketing claim into an accountable service level

The 5 India-Specific Questions Most Scorecards Omit

Questions 1, 2, 3, 8, and 14 are the five questions that generic vendor evaluation templates miss. Each one tests a dimension of India compliance or India-specific deployment quality that a global SaaS platform may not have built:

  • Question 1 tests whether TDS net-of-gross matching handles multiple sections — a requirement that does not exist outside India.
  • Question 2 tests whether GSTR-2B reconciliation is automated against the purchase register — specific to India’s GST return filing architecture.
  • Question 3 tests whether NACH return codes are natively classified — specific to India’s NPCI-operated direct debit infrastructure.
  • Question 8 tests India data residency — required by RBI for regulated entities and increasingly relevant for all enterprises under DPDP Act 2023.
  • Question 14 tests whether the vendor scopes your use case before configuring — the single best proxy for implementation quality and post-go-live match rate performance.

A vendor who answers all five of these questions specifically and credibly, backed by reference clients in India, is operating a platform built for Indian enterprise requirements. A vendor who deflects or gives generic answers to any of the five is offering a platform that will require workarounds for the transactions that matter most.

TDS reconciliation software and multi-stream reconciliation platforms for Indian enterprises are evaluated most effectively when the scorecard reflects the actual compliance and data complexity of the Indian finance environment — not a procurement template written for a different market.

Primary reference: Institute of Chartered Accountants of India — where audit and financial reporting standards for Indian enterprises are published.

Frequently Asked Questions

What is the most important question to ask a reconciliation vendor in India about TDS handling?
The most diagnostic question is: 'How does your platform handle TDS net-of-gross receipt matching when the same counterparty deducts under multiple sections in the same month?' A vendor with genuine TDS matching capability will explain how section-level deduction rates (194C at 1–2%, 194J at 10%, 194H at 5%) are applied separately before the net receipt is matched to the bank credit. A vendor without India-specific TDS logic will describe a generic tolerance match that treats TDS as a rounding difference — which produces mismatch on multi-section deductions.
Should a reconciliation vendor be able to commit to a contractual match rate?
Yes. A vendor with a patented matching engine and a defined pipeline architecture should be able to commit to a match rate target range — typically 70–85% — based on a review of a sample of the client's actual transaction data. This commitment should appear in the contract as a measurable service level. A vendor who declines to commit to a match rate, or who only expresses the commitment as a best-efforts statement, is signalling uncertainty about the engine's performance on Indian transaction data.
How should a CFO evaluate a vendor's answer to the question about use case scoping?
A vendor who conducts a structured discovery conversation to map the client's transaction types, matching rule requirements, and India compliance scope before configuring the engine will achieve higher match rates than one who delivers a generic setup. When evaluating the answer, listen for whether the vendor distinguishes between different matching streams (bank vs. TDS vs. GSTR-2B), asks about counterparty volume and PAN/GSTIN completeness, and sets a match rate expectation based on the specific data — not a generic claim. Generic answers to the scoping question are a proxy for generic configuration.
What ERP connectors should a reconciliation vendor support natively for Indian enterprises?
The four ERP systems with the highest penetration among mid-size Indian enterprises are SAP Business One / S/4HANA, Oracle NetSuite, Tally Prime, and Busy Accounting. A vendor with native connectors for all four avoids the custom transformation step that most commonly delays implementations. Beyond ERP connectors, Indian enterprises should verify native support for bank statement ingestion (MT940 from HDFC, ICICI, SBI, Axis, Kotak), GST portal export (GSTR-2B JSON), TRACES challan CSV, and payment gateway settlement files from Razorpay, Cashfree, and PayU.
What is the difference between a rule-based and a signal-weighted matching engine, and why does it matter for Indian transaction data?
A rule-based matching engine matches transactions when they satisfy a defined rule exactly — same amount, same reference, same date. It fails when Indian transaction data introduces partial references, truncated UTRs, or TDS net-of-gross differences that prevent exact matches. A signal-weighted engine assigns confidence scores to multiple matching signals (UTR = 0.40, partial reference = 0.25, counterparty = 0.15, date = 0.10) and matches transactions that exceed a confidence threshold even when no single signal is an exact match. For Indian enterprises where UTR-present-but-narration-inconsistent is a common pattern, a signal-weighted engine is required to achieve match rates above 70%.
