Part 104: Authentication Flows, Integration Patterns, and ESBs
Welcome back to the Salesforce series. In our ongoing deep dive into Salesforce architecture, we have already covered multi-org strategies, data modeling at scale, and the building blocks of well-designed systems. This post shifts focus to three critical pillars of any enterprise Salesforce implementation: authentication flows, integration patterns, and Enterprise Service Buses.
If you are building anything beyond a single self-contained org, you need to understand how systems prove their identity, how data moves between platforms, and how to orchestrate that movement at scale. These topics come up constantly in architect-level conversations, technical design reviews, and every real-world integration project.
Let’s break each one down.
What are Authentication Flows?
Before two systems can exchange data, they need to trust each other. Authentication is the process of proving identity — confirming that the system or user making a request is who they claim to be. In the Salesforce ecosystem, this almost always involves OAuth 2.0.
OAuth 2.0 is an authorization framework that allows a third-party application to obtain limited access to an HTTP service. Instead of sharing usernames and passwords directly, OAuth issues access tokens — short-lived credentials that grant specific permissions. Salesforce acts as both an OAuth provider (issuing tokens to external apps) and an OAuth consumer (obtaining tokens from external identity providers).
The key concepts you need to understand are:
- Authorization Server — The system that authenticates the user and issues tokens. In Salesforce, this is the Salesforce login server.
- Resource Server — The system that hosts the protected resources. This is your Salesforce org’s APIs.
- Client — The application requesting access to the resource server.
- Access Token — A credential that grants access to specific resources. It is short-lived, typically expiring in a couple of hours.
- Refresh Token — A longer-lived credential used to obtain a new access token without requiring the user to re-authenticate.
- Scopes — Permissions that define what the access token allows the client to do.
Every Connected App you create in Salesforce Setup defines a client in the OAuth sense. The Connected App’s Consumer Key and Consumer Secret are used by external systems to identify themselves during the authentication handshake.
The Different Types of Authentication Flows
Salesforce supports several OAuth flows. The right choice depends on the type of application, whether a user is present, and the security requirements of the integration.
Web Server Flow (Authorization Code Grant)
This is the most common flow for web applications where a user interacts with a browser. The external application redirects the user to the Salesforce login page. After the user authenticates and grants consent, Salesforce redirects back to the application with an authorization code. The application then exchanges that code for an access token on the back end.
When to use it: Web applications that have a server-side component. This is the recommended flow for most user-facing integrations because the access token exchange happens server-to-server and never passes through the browser.
How it works:
- The app redirects the user to https://login.salesforce.com/services/oauth2/authorize.
- The user logs in and approves access.
- Salesforce redirects back with an authorization code.
- The app’s server exchanges the code for an access token by calling the token endpoint.
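To make that last step concrete, here is a minimal Python sketch of the server-side code-for-token exchange. The consumer key, consumer secret, authorization code, and redirect URI are all placeholders standing in for your Connected App’s real values.

```python
import requests

CONSUMER_KEY = "3MVG9...your_consumer_key"     # placeholder Connected App values
CONSUMER_SECRET = "your_consumer_secret"
auth_code = "aPrx...code_from_the_redirect"    # placeholder authorization code

resp = requests.post(
    "https://login.salesforce.com/services/oauth2/token",
    data={
        "grant_type": "authorization_code",
        "code": auth_code,
        "client_id": CONSUMER_KEY,
        "client_secret": CONSUMER_SECRET,
        "redirect_uri": "https://myapp.example.com/callback",  # must match the Connected App
    },
    timeout=30,
)
resp.raise_for_status()
tokens = resp.json()  # access_token, instance_url, and (if scoped) refresh_token
```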
User-Agent Flow (Implicit Grant)
This flow is designed for client-side applications that run entirely in the browser (or on a mobile device) and do not have a secure server-side component. Instead of returning an authorization code, Salesforce returns the access token directly in the URL fragment after the user authenticates.
When to use it: Single-page applications or mobile apps that cannot securely store a client secret. Note that this flow is considered less secure because the token is exposed in the browser. Many organizations are moving away from it in favor of the Web Server flow with PKCE (Proof Key for Code Exchange).
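PKCE itself is small enough to show inline. A minimal sketch of generating the code verifier and S256 challenge per RFC 7636 — the challenge goes on the authorize request (with code_challenge_method=S256), and the verifier goes on the later token request so Salesforce can confirm the pair match:

```python
import base64
import hashlib
import secrets

# High-entropy code verifier (RFC 7636 requires 43-128 characters).
code_verifier = secrets.token_urlsafe(64)

# S256 challenge: BASE64URL-encoded SHA-256 of the verifier, padding stripped.
digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
```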
JWT Bearer Token Flow
This is the workhorse flow for server-to-server integrations where no user interaction is required. The external system creates a signed JWT (JSON Web Token) using a private key, sends it to the Salesforce token endpoint, and receives an access token in return. There is no browser, no login page, and no user consent screen.
When to use it: Backend services, scheduled jobs, middleware connections, and any automation that runs without a human in the loop. This is extremely common in enterprise integrations — MuleSoft, Informatica, and custom middleware all frequently use this flow. You need to upload the corresponding X.509 certificate to the Connected App in Salesforce.
Why it matters: The JWT Bearer flow eliminates stored passwords and provides a clean, certificate-based trust model. It is widely considered the best practice for system-to-system integration with Salesforce.
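A minimal Python sketch of the flow using the PyJWT and requests libraries; the consumer key, username, and key file path are placeholders. The `aud` claim is the login server, and the assertion is deliberately short-lived:

```python
import time

import jwt       # PyJWT; RS256 signing also requires the 'cryptography' package
import requests

with open("server.key") as f:             # private key matching the uploaded certificate
    private_key = f.read()

claims = {
    "iss": "3MVG9...your_consumer_key",    # Connected App Consumer Key (placeholder)
    "sub": "integration.user@example.com", # Salesforce username to run as (placeholder)
    "aud": "https://login.salesforce.com", # token audience
    "exp": int(time.time()) + 180,         # keep the assertion short-lived
}
assertion = jwt.encode(claims, private_key, algorithm="RS256")

resp = requests.post(
    "https://login.salesforce.com/services/oauth2/token",
    data={
        "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
        "assertion": assertion,
    },
    timeout=30,
)
resp.raise_for_status()
access_token = resp.json()["access_token"]
```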
Device Flow
The Device flow is designed for devices with limited input capabilities — think smart TVs, IoT devices, or CLI tools. The device displays a code, the user goes to a separate browser to enter that code and authenticate, and the device polls Salesforce until the authentication is complete.
When to use it: Applications running on devices without a full browser or keyboard. It is relatively niche in the Salesforce world but useful for IoT and embedded scenarios.
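For illustration, a rough Python sketch of the two-step dance — request a device code, then poll the token endpoint until the user finishes authenticating in their own browser. The parameter names here follow my reading of the Salesforce device flow documentation; treat them as an assumption to verify against the current docs.

```python
import time

import requests

TOKEN_URL = "https://login.salesforce.com/services/oauth2/token"
CLIENT_ID = "3MVG9...your_consumer_key"   # placeholder Connected App consumer key

# Step 1: ask Salesforce for a device code and a human-readable user code.
start = requests.post(
    TOKEN_URL,
    data={"response_type": "device_code", "client_id": CLIENT_ID},
    timeout=30,
).json()
print(f"Visit {start['verification_uri']} and enter code {start['user_code']}")

# Step 2: poll (bounded, at the server-suggested interval) until login completes.
for _ in range(60):
    time.sleep(start.get("interval", 5))
    poll = requests.post(
        TOKEN_URL,
        data={"grant_type": "device", "client_id": CLIENT_ID, "code": start["device_code"]},
        timeout=30,
    )
    if poll.ok:
        access_token = poll.json()["access_token"]
        break
```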
Username-Password Flow
The external system sends a username, password, and security token directly to the Salesforce token endpoint and gets an access token back. There is no user interaction, no browser redirect, and no consent screen.
When to use it: Honestly, almost never in production. This flow is convenient for quick prototyping and developer testing, but it is the least secure option. It requires storing credentials in the external system, does not support MFA, and Salesforce has been gradually restricting its use. If you see this flow in a production integration, it is a sign that the architecture needs updating.
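For completeness — and strictly for developer testing — the entire flow is a single POST to the token endpoint. Every credential below is a placeholder:

```python
import requests

resp = requests.post(
    "https://login.salesforce.com/services/oauth2/token",
    data={
        "grant_type": "password",
        "client_id": "3MVG9...your_consumer_key",   # placeholders throughout
        "client_secret": "your_consumer_secret",
        "username": "dev.user@example.com",
        # Password with the security token appended (unless the IP is trusted).
        "password": "hunter2" + "SECURITYTOKEN",
    },
    timeout=30,
)
resp.raise_for_status()
access_token = resp.json()["access_token"]
```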
SAML Assertion Flow
This flow allows an application that already has a valid SAML assertion (from an identity provider like Okta, Azure AD, or ADFS) to exchange that assertion for a Salesforce OAuth access token. It bridges SAML-based SSO environments with Salesforce’s OAuth API access.
When to use it: Enterprises that have an existing SAML-based identity infrastructure and need to programmatically access Salesforce APIs after SSO authentication.
Asset Token Flow
Designed for IoT-connected devices, this flow allows a device registered as a Salesforce asset to obtain an access token using a signed JWT tied to that specific asset. Salesforce IoT and Connected Devices use this flow.
When to use it: IoT scenarios where physical devices need to authenticate directly with Salesforce.
Quick Reference Table
| Flow | User Present? | Server-Side? | Best For |
|---|---|---|---|
| Web Server | Yes | Yes | Web apps with a backend |
| User-Agent | Yes | No | SPAs, mobile (legacy) |
| JWT Bearer | No | Yes | Server-to-server automation |
| Device | Yes (separate) | Yes | IoT, CLI tools |
| Username-Password | No | Yes | Dev/testing only |
| SAML Assertion | Yes | Yes | SAML SSO environments |
| Asset Token | No | Yes | IoT connected devices |
What are Integration Patterns?
An integration pattern is a reusable architectural blueprint that defines how data moves between Salesforce and an external system. Salesforce’s Integration Patterns and Practices documentation catalogs several such patterns; the four below come up most often, and understanding when to apply each one is essential for any architect or senior developer.
Request-Reply (Synchronous)
The calling system sends a request and waits for a response before continuing. This is the most straightforward pattern — a classic HTTP callout from Apex, for example.
Use cases:
- Validating a customer’s address against a third-party service before saving a record.
- Retrieving real-time pricing or inventory data from an ERP during a quote creation.
- Verifying a credit card or payment method before processing an order.
Trade-offs: Simple to implement and reason about, but the calling system is blocked while waiting. If the external system is slow or down, the user experience suffers. Salesforce enforces callout time limits (120 seconds for synchronous calls), so this pattern does not work for long-running operations.
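The pattern itself is language-agnostic. Inside Salesforce it would be an Apex callout; to keep this post’s examples in one language, here is the same blocking shape in Python against a hypothetical address-validation endpoint, with the bounded timeout that keeps a slow service from hanging the caller indefinitely:

```python
import requests

def validate_address(street: str, city: str) -> bool:
    # The caller blocks on this line until the service answers or the timeout fires.
    resp = requests.post(
        "https://api.example.com/validate-address",  # hypothetical third-party service
        json={"street": street, "city": city},
        timeout=10,  # always bound the wait in a request-reply integration
    )
    resp.raise_for_status()
    return resp.json().get("valid", False)
```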
Fire and Forget (Asynchronous)
The calling system sends a message and moves on immediately without waiting for a response. The receiving system processes the message on its own schedule.
Implementation options in Salesforce:
- Platform Events
- Outbound Messages (workflow/flow-based)
- @future callouts
- Queueable Apex with callouts
Use cases:
- Sending order details to a fulfillment system after an Opportunity closes.
- Pushing lead data to a marketing automation platform after a form submission.
- Notifying an external logging or auditing system of record changes.
Trade-offs: The calling system does not know if the message was processed successfully. You need to build retry logic, dead-letter queues, or acknowledgment mechanisms if guaranteed delivery matters. Platform Events with a replay ID provide some built-in durability here.
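One concrete fire-and-forget path is publishing a Platform Event through the standard sObject REST endpoint. The sketch below assumes the Order_Created__e event described later in this post, with hypothetical fields, plus an access token from one of the flows above; note that a success response means "accepted for publish," not "processed by subscribers."

```python
import requests

INSTANCE_URL = "https://yourdomain.my.salesforce.com"  # placeholders
ACCESS_TOKEN = "00D...access_token"

resp = requests.post(
    f"{INSTANCE_URL}/services/data/v59.0/sobjects/Order_Created__e",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"Order_Number__c": "ORD-1042", "Amount__c": 250.00},  # hypothetical fields
    timeout=30,
)
resp.raise_for_status()  # success = event accepted for publish, nothing more
```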
Batch Data Synchronization
Large volumes of data are moved between systems on a scheduled basis — hourly, daily, or on-demand. This is not a real-time pattern. Data is collected, staged, and transferred in bulk.
Implementation options in Salesforce:
- Salesforce Data Loader or Data Import Wizard (manual or CLI-based)
- Batch Apex calling an external API
- Bulk API 2.0 (for external systems pushing data into Salesforce)
- ETL tools like Informatica, Talend, or MuleSoft
Use cases:
- Nightly sync of product catalog data from an ERP.
- Weekly load of customer accounts from a legacy system during a migration.
- Monthly import of financial reconciliation data.
Trade-offs: Efficient for large data volumes, but not suitable when real-time or near-real-time data is required. You need to handle conflict resolution (what happens when the same record was updated in both systems), error logging, and rollback strategies for partial failures.
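As a sketch of the push-into-Salesforce variant, here is the Bulk API 2.0 ingest sequence in Python — create a job, upload the CSV, then mark it complete so Salesforce processes it asynchronously. Instance URL, token, and data are placeholders.

```python
import requests

INSTANCE_URL = "https://yourdomain.my.salesforce.com"  # placeholders
ACCESS_TOKEN = "00D...access_token"
BASE = f"{INSTANCE_URL}/services/data/v59.0/jobs/ingest"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# 1. Create an ingest job for Account inserts.
job = requests.post(
    BASE,
    headers=HEADERS,
    json={"object": "Account", "operation": "insert"},
    timeout=30,
).json()

# 2. Upload the CSV payload to the job's batches endpoint.
csv_data = "Name,Industry\nAcme Corp,Manufacturing\n"
requests.put(
    f"{BASE}/{job['id']}/batches",
    headers={**HEADERS, "Content-Type": "text/csv"},
    data=csv_data,
    timeout=60,
).raise_for_status()

# 3. Close the job; Salesforce queues and processes it asynchronously.
requests.patch(
    f"{BASE}/{job['id']}",
    headers=HEADERS,
    json={"state": "UploadComplete"},
    timeout=30,
).raise_for_status()
```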
Remote Call-In
The external system initiates a call into Salesforce. Salesforce exposes an API endpoint, and the external system is the client. This is the opposite of the first three patterns, where Salesforce initiates the communication.
Implementation options in Salesforce:
- Salesforce REST API
- Salesforce SOAP API
- Composite API or GraphQL API
- Custom Apex REST or SOAP web services
Use cases:
- A warehouse management system updating inventory levels in Salesforce.
- An external billing system creating invoices as Salesforce records.
- A customer portal (non-Salesforce) querying account data through a custom REST endpoint.
Trade-offs: Salesforce API limits apply. The external system needs a valid authentication token (see the flows above). You have less control over when and how often the external system calls in, so rate limiting and governor limit awareness are important.
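The warehouse example from the list above reduces to a single authenticated REST call. A minimal sketch — the instance URL, record Id, and custom field are placeholders:

```python
import requests

INSTANCE_URL = "https://yourdomain.my.salesforce.com"  # placeholders
ACCESS_TOKEN = "00D...access_token"
record_id = "001XXXXXXXXXXXXXXX"                       # placeholder Account Id

resp = requests.patch(
    f"{INSTANCE_URL}/services/data/v59.0/sobjects/Account/{record_id}",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"Inventory_Level__c": 42},  # hypothetical custom field
    timeout=30,
)
resp.raise_for_status()  # 204 No Content on success
```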
What is an Enterprise Service Bus (ESB)?
An Enterprise Service Bus is middleware that acts as a central communication layer between multiple applications. Instead of every system connecting directly to every other system (point-to-point), all systems connect to the ESB, and the ESB handles routing, message transformation, protocol conversion, and orchestration.
Why Use an ESB?
As the number of integrations grows, point-to-point architectures become unmanageable. If you have five systems and each one needs to talk to every other, that is ten unique connections. With ten systems, it is forty-five connections. An ESB reduces this to one connection per system — each system just talks to the bus.
ESBs also provide:
- Message transformation — Converting data formats between systems (XML to JSON, different field names, date format conversion); see the sketch after this list.
- Routing — Directing messages to the correct destination based on content or rules.
- Protocol mediation — Translating between REST, SOAP, JMS, AMQP, FTP, and other protocols.
- Orchestration — Coordinating multi-step integration workflows that span several systems.
- Monitoring and logging — Centralized visibility into all data flowing between systems.
- Error handling and retry — Standardized approaches to failure across all integrations.
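Real ESBs are full platforms, but the first two responsibilities — transformation and routing — fit in a deliberately tiny, hypothetical in-process sketch. Everything here (message types, field names, handlers) is made up purely to show the shape: producers hand messages to the bus, and the bus maps fields onto a canonical model and picks the destination.

```python
from typing import Any, Callable

def transform(payload: dict[str, Any]) -> dict[str, Any]:
    # Canonical-model transformation: map source field names onto target ones.
    field_map = {"acct_nm": "AccountName", "ord_dt": "OrderDate"}
    return {field_map.get(key, key): value for key, value in payload.items()}

# Each destination system registers exactly one handler with the bus.
handlers: dict[str, Callable[[dict[str, Any]], None]] = {
    "order.created": lambda msg: print("route to fulfillment:", msg),
    "invoice.paid": lambda msg: print("route to ERP:", msg),
}

def route(message_type: str, payload: dict[str, Any]) -> None:
    # Content-based routing: producers talk only to the bus,
    # and the bus transforms and dispatches.
    handlers[message_type](transform(payload))

route("order.created", {"acct_nm": "Acme Corp", "ord_dt": "2024-05-01"})
```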
Common ESB / Integration Platforms
- MuleSoft Anypoint Platform — Salesforce’s own integration platform. Deeply integrated with the Salesforce ecosystem and the most common choice for Salesforce-centric enterprises.
- Dell Boomi — Cloud-native integration platform popular in mid-market companies.
- IBM App Connect (formerly IBM Integration Bus) — Enterprise-grade ESB commonly found in large organizations with legacy infrastructure.
- TIBCO, Microsoft Azure Integration Services, AWS EventBridge — Other widely-used options depending on the technology stack.
When to Use an ESB
An ESB makes sense when:
- You have more than three or four systems that need to exchange data.
- You need centralized governance, monitoring, and error handling for integrations.
- Message transformation is complex (different data formats, schemas, and protocols).
- You need to orchestrate workflows that span multiple systems.
- Compliance or audit requirements demand centralized logging of all data exchanges.
An ESB is overkill when you have one or two simple integrations. In those cases, a direct point-to-point connection with Named Credentials and Apex callouts is the right approach.
What is TLS and mTLS?
Every Salesforce integration you build runs over HTTPS, which means it uses TLS (Transport Layer Security) under the hood. Understanding TLS and its mutual variant is important for security-conscious architecture.
TLS (Transport Layer Security)
TLS is the protocol that encrypts data in transit between two systems. When your Apex code makes a callout to https://api.example.com, TLS ensures that the data cannot be read or tampered with by anyone intercepting the network traffic.
In standard TLS, the server presents a certificate to prove its identity. The client (Salesforce) verifies that certificate against a list of trusted certificate authorities. This is one-way authentication — the client trusts the server, but the server has no cryptographic proof of the client’s identity. The client typically proves itself through an API key, OAuth token, or similar credential at the application layer.
mTLS (Mutual TLS)
Mutual TLS adds a second layer: the client also presents a certificate to the server. Both sides verify each other’s identity at the transport layer before any application data is exchanged.
When to use mTLS:
- High-security integrations (financial services, healthcare, government).
- When the external system requires certificate-based client authentication.
- When you need defense-in-depth beyond OAuth tokens.
Salesforce supports mTLS through Mutual Authentication Certificates. You can upload a client certificate in Setup and configure your Named Credential or HTTP callout to present it during the TLS handshake.
mTLS is more complex to set up and maintain (certificates expire and need rotation), but it provides significantly stronger security for sensitive integrations.
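On the client side, presenting a certificate can be as small as one extra argument. A sketch using Python’s requests library, where the endpoint and PEM file paths are placeholders:

```python
import requests

resp = requests.get(
    "https://secure-api.example.com/data",       # hypothetical mTLS-protected endpoint
    cert=("client-cert.pem", "client-key.pem"),  # client certificate + private key
    timeout=30,
)
resp.raise_for_status()
```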
How to Choose Between Platform Events and Change Data Capture
Both Platform Events and Change Data Capture (CDC) are part of the Salesforce event-driven architecture, and they are commonly confused. They serve different purposes.
Platform Events
Platform Events are custom event messages that you define. You create a Platform Event object (like Order_Created__e), define its fields, and publish events from Apex, Flows, or external systems. Subscribers (Apex triggers, Flows, external apps via CometD/Pub-Sub API) consume those events.
Key characteristics:
- You define the schema — it can contain any data you want.
- Events are published explicitly by your code or configuration.
- Events are not tied to any specific sObject or record change.
- Supports replay for up to 72 hours (high-volume) or 24 hours (standard volume).
- Can be published from external systems into Salesforce.
When to use Platform Events:
- Custom event-driven workflows that do not map to a simple record change.
- Cross-system notifications (order placed, payment received, shipment dispatched).
- Decoupling processes within Salesforce (a trigger publishes an event, and a separate subscriber handles downstream logic asynchronously).
- When you need external systems to push events into Salesforce.
Change Data Capture (CDC)
CDC automatically publishes events whenever standard or custom object records are created, updated, deleted, or undeleted. You do not write publishing logic — you simply enable CDC for the objects you care about, and Salesforce emits change events automatically.
Key characteristics:
- Schema is automatically generated based on the sObject.
- Events are published automatically on record DML — no code required on the publishing side.
- Change events include only the fields that changed (for updates), plus header fields with change metadata.
- Supports replay for up to 3 days.
- Respects field-level security — subscribers receive only the fields they are allowed to view. Record-level sharing is not evaluated per event; the subscribing user needs View All permission on the object (or View All Data).
When to use CDC:
- Synchronizing Salesforce record changes to external systems in near-real-time.
- Building audit trails or change logs.
- Replacing polling-based integrations that periodically query for updated records.
- When you need external systems to react to any record change without modifying triggers or process automation.
Decision Guide
| Criterion | Platform Events | Change Data Capture |
|---|---|---|
| Trigger | Explicit publish | Automatic on DML |
| Schema | Custom-defined | Mirrors sObject fields |
| Direction | Bidirectional | Salesforce outbound only |
| Use case | Custom workflows, cross-system events | Record sync, change replication |
| Publishing effort | You write the publish logic | Zero — enable and go |
| External publish | Yes | No |
If you are syncing record changes out of Salesforce, start with CDC. If you need custom events with custom payloads that might not map to a single record change, use Platform Events. In many enterprise architectures, you use both.
Section Notes
Authentication, integration patterns, and middleware are the connective tissue of any multi-system Salesforce architecture. A few practical takeaways:
- Default to the JWT Bearer flow for server-to-server integrations. It is the most secure and maintainable option for automated processes.
- Never use the Username-Password flow in production. It was convenient in a simpler era, but modern security requirements (MFA, token rotation, certificate-based trust) make it a liability.
- Match the integration pattern to the use case. Request-Reply for real-time reads, Fire and Forget for event-driven writes, Batch for bulk data movement, and Remote Call-In when external systems need to push data into Salesforce.
- Introduce an ESB when complexity demands it. Two integrations can be point-to-point. Ten integrations need a bus.
- Use mTLS for high-security integrations where transport-layer identity verification is required on both sides.
- Prefer CDC for record synchronization and Platform Events for custom event-driven architectures.
These concepts form the foundation for designing Salesforce integrations that are secure, scalable, and maintainable. In the next post, we will continue exploring Salesforce architecture by looking at governor limits at scale and how they influence design decisions in large implementations.