Salesforce · 32 min read

Apex Triggers in Salesforce

Everything about Apex triggers — trigger context variables, trigger handlers, bulkification, the order of execution, and best practices for writing production-grade triggers.

Part 44: Apex Triggers in Salesforce

Welcome back to the Salesforce series. In the previous installments we covered the fundamentals of Apex — data types, collections, control flow, SOQL, SOSL, DML, and classes. Now it is time to put all of that knowledge to work. Triggers are where Apex meets the database. They are the mechanism that lets you run custom code automatically when records are created, updated, deleted, or restored. If Flows are the declarative way to automate record changes, triggers are the programmatic way.

This is Part 44 of the series, and it is one of the most important installments for anyone pursuing Salesforce development. We will cover what triggers are, how to write them, how to structure them with handler classes, how to bulkify them so they survive real-world data volumes, the full order of execution, and best practices that separate production-grade code from throwaway scripts. We will finish with a hands-on project.


What Is a Trigger and When to Use One?

A trigger is a piece of Apex code that executes automatically before or after a data manipulation language (DML) event occurs on a Salesforce object. When a record is inserted, updated, deleted, or undeleted, any trigger associated with that object fires and runs the logic you have defined.

Before vs After Triggers

Triggers come in two timing flavors:

  • Before triggers execute before the record is saved to the database. They are ideal for validating or modifying field values on the record being saved. Because the record has not been committed yet, you can change field values directly on the trigger records without performing an additional DML operation.
  • After triggers execute after the record has been saved to the database and has received an ID. They are ideal for operations that need the record’s ID (such as creating related records) or for making changes to other objects. You cannot modify the records that fired the trigger in an after context — they are read-only.

The Seven Trigger Events

Salesforce supports seven trigger events, each representing a specific combination of timing and operation:

  1. before insert — Fires before new records are saved. Use it to set default values, validate data, or transform fields before the record hits the database.
  2. before update — Fires before existing records are saved with new values. Use it to validate changes, enforce business rules, or modify fields before the update commits.
  3. before delete — Fires before records are deleted. Use it to prevent deletion based on business logic or to perform cleanup before the record is removed.
  4. after insert — Fires after new records are saved and have IDs. Use it to create related records, send notifications, or update other objects.
  5. after update — Fires after existing records are saved with new values. Use it to cascade changes to related records or trigger downstream processes.
  6. after delete — Fires after records are deleted. Use it to clean up related data, log the deletion, or update aggregate fields on parent records.
  7. after undelete — Fires after records are restored from the Recycle Bin. Use it to re-establish related data or recalculate summaries.
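
To make the timing rules concrete, here is a minimal after undelete sketch. The trigger name, object, and Status value are illustrative and depend on your org:

```apex
trigger LeadRestoreTrigger on Lead (after undelete) {
    // In after contexts the records in Trigger.new are read-only,
    // so build fresh instances carrying only the Id and the fields to change.
    List<Lead> leadsToUpdate = new List<Lead>();
    for (Lead restored : Trigger.new) {
        leadsToUpdate.add(new Lead(Id = restored.Id, Status = 'Open - Not Contacted'));
    }
    update leadsToUpdate;
}
```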

When to Use Triggers vs Flows vs Process Builder

Salesforce has evolved its automation story significantly over the years. Here is how to decide what to use:

Use a Flow when:

  • The logic is straightforward (field updates, record creation, email alerts, simple branching).
  • An admin needs to build and maintain the automation.
  • You want a visual, declarative tool that does not require deployment through change sets or metadata API.
  • The automation involves user interaction (Screen Flows).

Use a Trigger when:

  • The logic is too complex for a Flow (heavy computation, intricate recursion control, complex data transformations across multiple objects).
  • You need fine-grained control over the order of operations within the trigger context.
  • Performance is critical and you need to optimize SOQL queries, collections, and processing in ways that Flows cannot express.
  • You are integrating with external systems using callouts that require precise error handling.
  • The operation involves advanced patterns like custom rollup summaries across unrelated objects.

Avoid Process Builder entirely. Process Builder is retired functionality. Salesforce no longer recommends it for new automations, and existing Process Builders should be migrated to Flows. We covered this in Part 25 of this series.

The Evolution of Salesforce Automation

For historical context, Salesforce automation has gone through several generations:

  1. Workflow Rules — The original declarative automation. Limited to field updates, email alerts, outbound messages, and task creation on the same record or its parent. Now retired.
  2. Process Builder — A visual tool that could handle more complex logic than Workflow Rules, including creating records on any object and calling Apex. Now retired.
  3. Flows — The current standard for declarative automation. Flows can do everything Workflow Rules and Process Builder could do, plus screen interactions, loops, sub-flows, and much more.
  4. Apex Triggers — The programmatic option. Triggers have been available since the early days of the platform and remain essential for complex use cases that exceed declarative capabilities.

The general principle is: use declarative tools (Flows) first, and reach for triggers only when declarative tools cannot meet the requirement.


Creating a Trigger

Basic Syntax

A trigger is defined using the trigger keyword, followed by the trigger name, the on keyword, the object name, and a comma-separated list of events in parentheses.

trigger AccountTrigger on Account (before insert, before update, after insert, after update) {
    // Trigger logic goes here
}

This trigger fires on four events: before insert, before update, after insert, and after update on the Account object.

To create a trigger in Salesforce:

  1. Navigate to Setup and search for Apex Triggers in the Quick Find box, or go to the object’s detail page and scroll to the Triggers section.
  2. Click New and write your trigger code.
  3. Alternatively, use the Developer Console: go to File > New > Apex Trigger, select the object, and write your code.
  4. In a real project, you will create triggers in your local development environment (VS Code with the Salesforce Extension Pack) and deploy them through change sets, the Salesforce CLI, or a CI/CD pipeline.

Trigger Context Variables

Trigger context variables are the key to understanding what is happening inside a trigger. They tell you which records are being processed, what operation is occurring, and whether you are in the before or after phase. Every trigger has access to these variables through the Trigger class.

Record Collections

  • Trigger.new (List<sObject>): The new versions of the records being processed. Available in insert and update triggers. In before triggers, you can modify these records directly.
  • Trigger.old (List<sObject>): The old versions of the records (before the change). Available in update and delete triggers. Always read-only.
  • Trigger.newMap (Map<Id, sObject>): A map of IDs to the new versions of the records. Available in before update, after insert, after update, and after undelete triggers. Not available in before insert because the records do not have IDs yet.
  • Trigger.oldMap (Map<Id, sObject>): A map of IDs to the old versions of the records. Available in update and delete triggers.

Boolean Context Variables

  • Trigger.isInsert: Returns true if the trigger was fired by an insert operation.
  • Trigger.isUpdate: Returns true if the trigger was fired by an update operation.
  • Trigger.isDelete: Returns true if the trigger was fired by a delete operation.
  • Trigger.isUndelete: Returns true if the trigger was fired by an undelete operation.
  • Trigger.isBefore: Returns true if the trigger is executing in the before context.
  • Trigger.isAfter: Returns true if the trigger is executing in the after context.

Other Context Variables

  • Trigger.size (Integer): The total number of records in the trigger invocation (both old and new).
  • Trigger.operationType (System.TriggerOperation): An enum value representing the exact event. Possible values: BEFORE_INSERT, BEFORE_UPDATE, BEFORE_DELETE, AFTER_INSERT, AFTER_UPDATE, AFTER_DELETE, AFTER_UNDELETE.

Here is a trigger that uses context variables to route logic:

trigger ContactTrigger on Contact (before insert, before update, after insert, after update) {
    if (Trigger.isBefore) {
        if (Trigger.isInsert) {
            // Handle before insert logic
            for (Contact c : Trigger.new) {
                if (c.Email == null) {
                    c.addError('Email is required for all contacts.');
                }
            }
        }
        if (Trigger.isUpdate) {
            // Handle before update logic
            for (Contact c : Trigger.new) {
                Contact oldContact = Trigger.oldMap.get(c.Id);
                if (oldContact.Email != c.Email) {
                    c.Email_Changed__c = true;
                }
            }
        }
    }
    if (Trigger.isAfter) {
        if (Trigger.isInsert) {
            // Handle after insert logic
            List<Task> followUpTasks = new List<Task>();
            for (Contact c : Trigger.new) {
                followUpTasks.add(new Task(
                    Subject = 'Follow up with new contact',
                    WhoId = c.Id,
                    ActivityDate = Date.today().addDays(7)
                ));
            }
            insert followUpTasks;
        }
    }
}

The One Trigger Per Object Rule

A widely accepted best practice in the Salesforce development community is to have only one trigger per object. The reason is control. When you have multiple triggers on the same object, Salesforce does not guarantee the order in which they execute. This unpredictability makes debugging difficult and can lead to conflicts between triggers.

With a single trigger per object, you have a single entry point. All logic is routed from that trigger to handler classes, and you control the order of execution within your code.

// ONE trigger on Account — the single entry point
trigger AccountTrigger on Account (
    before insert, before update, before delete,
    after insert, after update, after delete, after undelete
) {
    AccountTriggerHandler handler = new AccountTriggerHandler();
    handler.run();
}

What Is a Trigger Handler?

A trigger handler is an Apex class that contains the actual business logic that a trigger needs to execute. Instead of writing logic directly inside the trigger body, you delegate all processing to the handler class. The trigger itself becomes a thin dispatcher — it simply calls the handler and passes along the context.

Why Keep Logic Out of the Trigger?

There are several compelling reasons:

  1. Separation of concerns. The trigger’s job is to detect the event and delegate. The handler’s job is to process the business logic. Mixing these responsibilities makes code harder to read and maintain.
  2. Testability. You cannot instantiate a trigger directly in a test class. Handler classes, on the other hand, are regular Apex classes that can be tested with standard unit tests. You can test individual methods in isolation.
  3. Reusability. Handler methods can be called from other contexts — batch jobs, REST APIs, Lightning components — not just from triggers.
  4. Maintainability. When multiple developers work on the same object’s automation, having a handler class with clearly named methods reduces merge conflicts and makes code reviews easier.
  5. Readability. A trigger that says “handler.run()” is immediately understandable. A trigger with 200 lines of inline logic is not.

The Handler Pattern

The handler pattern is simple:

  1. The trigger detects the event and calls the handler.
  2. The handler checks the trigger context (before/after, insert/update/delete/undelete) and routes execution to the appropriate method.
  3. Each method contains the business logic for that specific event.

Creating a Trigger Handler

A Basic Handler Class

Here is a complete example of a trigger and its handler for the Account object.

The Trigger (AccountTrigger.trigger):

trigger AccountTrigger on Account (
    before insert, before update, before delete,
    after insert, after update, after delete, after undelete
) {
    AccountTriggerHandler handler = new AccountTriggerHandler();
    handler.run();
}

The Handler (AccountTriggerHandler.cls):

public class AccountTriggerHandler {

    public void run() {
        if (Trigger.isBefore) {
            if (Trigger.isInsert) {
                beforeInsert(Trigger.new);
            } else if (Trigger.isUpdate) {
                beforeUpdate(Trigger.new, Trigger.oldMap);
            } else if (Trigger.isDelete) {
                beforeDelete(Trigger.old);
            }
        } else if (Trigger.isAfter) {
            if (Trigger.isInsert) {
                afterInsert(Trigger.new);
            } else if (Trigger.isUpdate) {
                afterUpdate(Trigger.new, Trigger.oldMap);
            } else if (Trigger.isDelete) {
                afterDelete(Trigger.old);
            } else if (Trigger.isUndelete) {
                afterUndelete(Trigger.new);
            }
        }
    }

    private void beforeInsert(List<Account> newAccounts) {
        // Set default values
        for (Account acc : newAccounts) {
            if (acc.Industry == null) {
                acc.Industry = 'Other';
            }
            if (acc.Rating == null) {
                acc.Rating = 'Warm';
            }
        }
    }

    private void beforeUpdate(List<Account> newAccounts, Map<Id, Account> oldAccountMap) {
        // Validate changes
        for (Account acc : newAccounts) {
            Account oldAcc = oldAccountMap.get(acc.Id);
            if (oldAcc.AnnualRevenue != null && acc.AnnualRevenue == null) {
                acc.addError('Annual Revenue cannot be cleared once it has been set.');
            }
        }
    }

    private void beforeDelete(List<Account> oldAccounts) {
        // Prevent deletion of high-value accounts
        for (Account acc : oldAccounts) {
            if (acc.AnnualRevenue != null && acc.AnnualRevenue > 1000000) {
                acc.addError('Cannot delete accounts with annual revenue over $1,000,000.');
            }
        }
    }

    private void afterInsert(List<Account> newAccounts) {
        // Create default contacts for new accounts
        List<Contact> defaultContacts = new List<Contact>();
        for (Account acc : newAccounts) {
            defaultContacts.add(new Contact(
                FirstName = 'Primary',
                LastName = 'Contact',
                AccountId = acc.Id
            ));
        }
        if (!defaultContacts.isEmpty()) {
            insert defaultContacts;
        }
    }

    private void afterUpdate(List<Account> newAccounts, Map<Id, Account> oldAccountMap) {
        // Cascade name changes to a custom snapshot field on child contacts
        Map<Id, String> newNamesByAccountId = new Map<Id, String>();
        for (Account acc : newAccounts) {
            Account oldAcc = oldAccountMap.get(acc.Id);
            if (acc.Name != oldAcc.Name) {
                newNamesByAccountId.put(acc.Id, acc.Name);
            }
        }
        if (!newNamesByAccountId.isEmpty()) {
            List<Contact> contactsToUpdate = [
                SELECT Id, AccountId, Account_Name_Snapshot__c
                FROM Contact
                WHERE AccountId IN :newNamesByAccountId.keySet()
            ];
            for (Contact c : contactsToUpdate) {
                c.Account_Name_Snapshot__c = newNamesByAccountId.get(c.AccountId);
            }
            if (!contactsToUpdate.isEmpty()) {
                update contactsToUpdate;
            }
        }
    }

    private void afterDelete(List<Account> oldAccounts) {
        // Log deleted accounts to a custom object
        List<Deletion_Log__c> logs = new List<Deletion_Log__c>();
        for (Account acc : oldAccounts) {
            logs.add(new Deletion_Log__c(
                Object_Name__c = 'Account',
                Record_Name__c = acc.Name,
                Deleted_Date__c = Date.today()
            ));
        }
        if (!logs.isEmpty()) {
            insert logs;
        }
    }

    private void afterUndelete(List<Account> restoredAccounts) {
        // Handle any post-restore logic
    }
}
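
Because the logic lives in a plain Apex class invoked by a thin trigger, it can be verified with an ordinary unit test. A minimal sketch, assuming the trigger and handler above are deployed (the asserted values mirror the handler code):

```apex
@isTest
private class AccountTriggerHandlerTest {

    @isTest
    static void beforeInsertSetsDefaults() {
        Account acc = new Account(Name = 'Test Account');

        Test.startTest();
        insert acc; // Fires AccountTrigger, which routes to beforeInsert()
        Test.stopTest();

        Account saved = [SELECT Industry, Rating FROM Account WHERE Id = :acc.Id];
        System.assertEquals('Other', saved.Industry, 'Industry should default to Other');
        System.assertEquals('Warm', saved.Rating, 'Rating should default to Warm');
    }

    @isTest
    static void beforeDeleteBlocksHighValueAccounts() {
        Account acc = new Account(Name = 'Big Account', AnnualRevenue = 2000000);
        insert acc;

        try {
            delete acc;
            System.assert(false, 'Expected the delete to be blocked');
        } catch (DmlException e) {
            System.assert(e.getMessage().contains('Cannot delete accounts'));
        }
    }
}
```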

The Virtual Base Handler Pattern

Many teams take the handler pattern a step further by creating a base handler class that all object-specific handlers extend. This enforces a consistent structure and reduces boilerplate.

public virtual class TriggerHandler {

    public void run() {
        switch on Trigger.operationType {
            when BEFORE_INSERT  { beforeInsert(Trigger.new); }
            when BEFORE_UPDATE  { beforeUpdate(Trigger.new, Trigger.oldMap); }
            when BEFORE_DELETE  { beforeDelete(Trigger.old); }
            when AFTER_INSERT   { afterInsert(Trigger.new); }
            when AFTER_UPDATE   { afterUpdate(Trigger.new, Trigger.oldMap); }
            when AFTER_DELETE   { afterDelete(Trigger.old); }
            when AFTER_UNDELETE { afterUndelete(Trigger.new); }
        }
    }

    // Virtual methods — override in child classes as needed
    protected virtual void beforeInsert(List<sObject> newRecords) {}
    protected virtual void beforeUpdate(List<sObject> newRecords, Map<Id, sObject> oldRecordMap) {}
    protected virtual void beforeDelete(List<sObject> oldRecords) {}
    protected virtual void afterInsert(List<sObject> newRecords) {}
    protected virtual void afterUpdate(List<sObject> newRecords, Map<Id, sObject> oldRecordMap) {}
    protected virtual void afterDelete(List<sObject> oldRecords) {}
    protected virtual void afterUndelete(List<sObject> newRecords) {}
}

Now each object handler extends this base class and overrides only the methods it needs:

public class OpportunityTriggerHandler extends TriggerHandler {

    protected override void beforeInsert(List<sObject> newRecords) {
        List<Opportunity> newOpps = (List<Opportunity>) newRecords;
        for (Opportunity opp : newOpps) {
            if (opp.CloseDate == null) {
                opp.CloseDate = Date.today().addDays(30);
            }
        }
    }

    protected override void afterUpdate(List<sObject> newRecords, Map<Id, sObject> oldRecordMap) {
        List<Opportunity> newOpps = (List<Opportunity>) newRecords;
        Map<Id, Opportunity> oldOppMap = (Map<Id, Opportunity>) oldRecordMap;
        // Handle stage changes, create tasks, send notifications, etc.
    }
}

And the trigger stays one line:

trigger OpportunityTrigger on Opportunity (before insert, before update, after insert, after update) {
    new OpportunityTriggerHandler().run();
}

Common Handler Frameworks

Several open-source trigger handler frameworks are popular in the Salesforce community:

  • Kevin O’Hara’s Trigger Handler — A lightweight virtual class-based framework that adds features like recursion prevention, max loop count, and trigger bypass.
  • Dan Appleman’s Advanced Apex — Introduces a pattern using a static dispatcher and map of handler methods.
  • FFLIB (FinancialForce) — Part of the larger Enterprise Architecture framework, includes a domain layer pattern that serves as a sophisticated trigger handler.

For most projects, the virtual base handler pattern shown above is sufficient. The important thing is to pick a pattern and use it consistently across all objects in your org.


What Is Bulkification?

Bulkification is the practice of writing trigger code that can efficiently handle not just one record at a time, but hundreds or thousands of records in a single transaction. It is one of the most important concepts in Apex development.

Why Single-Record Thinking Fails

When developers new to Salesforce write triggers, they often think in terms of a single record: “When an Account is created, query its contacts and update them.” This works perfectly in manual testing, where you create one record at a time through the UI. But it fails catastrophically in production.

How Salesforce Batches Trigger Execution

When a DML operation involves multiple records, Salesforce groups them into batches of up to 200 records and fires the trigger once per batch. If you load 10,000 records through Data Loader, the trigger fires 50 times (10,000 / 200 = 50 batches), with each invocation processing 200 records.

Within each invocation, Trigger.new contains up to 200 records. Your code must handle all of them in a single execution without exceeding governor limits.
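
You can observe this batching with a throwaway trigger (the name here is hypothetical) that logs the size of each invocation. Loading 1,000 records through Data Loader at its default batch size would produce five debug lines, each reporting up to 200 records:

```apex
trigger AccountBatchLogger on Account (after insert) {
    // Fires once per batch; Trigger.new holds at most 200 records per invocation.
    System.debug('Records in this invocation: ' + Trigger.new.size());
}
```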


The Importance of Bulkification

Governor Limit Implications

Salesforce enforces strict governor limits per transaction:

  • 100 SOQL queries per synchronous transaction.
  • 150 DML statements per transaction.
  • 10,000 total DML rows per transaction.
  • 50,000 total SOQL query rows per transaction.
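
When you are close to these ceilings, the Limits class lets you inspect consumption at runtime. A quick debugging sketch you might drop at the end of a handler method:

```apex
// Log current governor limit consumption for this transaction
System.debug('SOQL queries: ' + Limits.getQueries() + ' of ' + Limits.getLimitQueries());
System.debug('DML statements: ' + Limits.getDmlStatements() + ' of ' + Limits.getLimitDmlStatements());
System.debug('DML rows: ' + Limits.getDmlRows() + ' of ' + Limits.getLimitDmlRows());
System.debug('Query rows: ' + Limits.getQueryRows() + ' of ' + Limits.getLimitQueryRows());
```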

If your trigger runs a SOQL query inside a loop that iterates over Trigger.new, and a batch contains 200 records, you will make 200 SOQL queries in a single transaction — exceeding the 100-query limit at record 101. The entire transaction fails, and none of the 200 records are saved.

A Real-World Scenario

Imagine you have a trigger on the Opportunity object that, for each new Opportunity, queries the Account to check a field and then creates a Task.

If an admin uses Data Loader to insert 10,000 Opportunities:

  • Salesforce processes them in 50 batches of 200.
  • In each batch, if your trigger queries inside a loop, you attempt 200 SOQL queries (limit: 100) and 200 DML inserts (limit: 150).
  • Result: The first batch of 200 records fails at record 101 with System.LimitException: Too many SOQL queries: 101. The data load halts. The admin sees 0 out of 10,000 records loaded and an error log full of governor limit exceptions.

This is why bulkification is not optional. It is a fundamental requirement for any production Apex code.


Bulkification Example

Let us walk through a concrete example. We will start with a non-bulkified trigger, show exactly why it fails, and then refactor it step by step.

The Scenario

When an Opportunity is created, we want to look up the Account’s Industry field and stamp it onto a custom field on the Opportunity called Account_Industry__c.

The Non-Bulkified Trigger (Do NOT Write Code Like This)

trigger OpportunityTrigger on Opportunity (before insert) {
    // BAD: This trigger will fail with bulk data
    for (Opportunity opp : Trigger.new) {
        // BAD: SOQL query inside a loop
        Account acc = [
            SELECT Industry
            FROM Account
            WHERE Id = :opp.AccountId
            LIMIT 1
        ];
        opp.Account_Industry__c = acc.Industry;
    }
}

What happens with 200 records: The loop runs 200 times. Each iteration executes a SOQL query. At iteration 101, the transaction hits the 100 SOQL query limit and throws a System.LimitException. All 200 records fail.

What happens with 1 record: It works perfectly, which is why the developer who wrote it during manual testing never noticed the problem.

Step 1: Collect the IDs

Instead of querying inside the loop, first collect all the Account IDs you need.

// STEP 1: Collect all Account IDs from the incoming Opportunities
Set<Id> accountIds = new Set<Id>();
for (Opportunity opp : Trigger.new) {
    if (opp.AccountId != null) {
        accountIds.add(opp.AccountId);
    }
}

Step 2: Query Once

Now issue a single SOQL query to get all the Accounts at once, and store them in a Map for fast lookup.

// STEP 2: Query all needed Accounts in a single SOQL query
Map<Id, Account> accountMap = new Map<Id, Account>(
    [SELECT Id, Industry FROM Account WHERE Id IN :accountIds]
);

This is one SOQL query regardless of whether Trigger.new contains 1 record or 200.

Step 3: Act on the Records

Loop through the trigger records again and use the Map to look up the Account data.

// STEP 3: Use the Map to update each Opportunity
for (Opportunity opp : Trigger.new) {
    if (opp.AccountId != null && accountMap.containsKey(opp.AccountId)) {
        opp.Account_Industry__c = accountMap.get(opp.AccountId).Industry;
    }
}

The Fully Bulkified Trigger

Putting it all together:

trigger OpportunityTrigger on Opportunity (before insert) {
    // Collect
    Set<Id> accountIds = new Set<Id>();
    for (Opportunity opp : Trigger.new) {
        if (opp.AccountId != null) {
            accountIds.add(opp.AccountId);
        }
    }

    // Query
    Map<Id, Account> accountMap = new Map<Id, Account>(
        [SELECT Id, Industry FROM Account WHERE Id IN :accountIds]
    );

    // Act
    for (Opportunity opp : Trigger.new) {
        if (opp.AccountId != null && accountMap.containsKey(opp.AccountId)) {
            opp.Account_Industry__c = accountMap.get(opp.AccountId).Industry;
        }
    }
}

What happens with 200 records: The trigger runs one loop to collect IDs, one SOQL query, and one loop to assign values. Total SOQL queries: 1. Total DML statements: 0 (we are in a before trigger, so modifying Trigger.new directly does not count as DML). It handles 200 records as easily as 1.

What happens with 10,000 records: Salesforce fires the trigger 50 times, once per batch of 200. Each invocation uses 1 SOQL query. Total across all batches: 50 SOQL queries (each in its own transaction). Everything works.

The Collect-Query-Act Pattern

The refactoring we just did follows a pattern called Collect-Query-Act:

  1. Collect — Gather the data you need from Trigger.new or Trigger.old (IDs, field values, criteria).
  2. Query — Execute SOQL queries outside of any loop, using collections (Sets, Lists) in your WHERE clauses.
  3. Act — Loop through the trigger records and apply your logic, using Maps for lookups instead of queries.

This pattern is the foundation of all well-written Apex triggers. Memorize it.


Understanding How Triggers Actually Work

The Full Order of Execution

When you save a record in Salesforce, the platform does not simply fire your trigger and commit the data. It runs a complex, multi-step process called the order of execution. Understanding this sequence is critical for debugging and for knowing when your code runs relative to other automations.

Here is the full order of execution when a record is saved:

  1. The original record is loaded from the database (for updates and deletes) or initialized (for inserts).
  2. The new field values are loaded from the request and overwrite the old values on the in-memory record.
  3. System validation rules execute. These are built-in validations that Salesforce enforces automatically, such as field format checks, maximum field length enforcement, and field-level required values.
  4. Before triggers execute. All before triggers on the object fire. (Record-triggered Flows configured to run before save execute just before this step.) This is where you can modify field values on Trigger.new without additional DML.
  5. Custom validation rules execute, along with a second pass of system validation (including fields required at the page layout level). If any fail, the record is rejected, and no further processing occurs.
  6. Duplicate rules execute. Salesforce checks duplicate rules. If a duplicate is found and the rule is set to block, the record is rejected.
  7. The record is saved to the database (but not yet committed). The record now has an ID if it is new.
  8. After triggers execute. All after triggers on the object fire. The records in Trigger.new are read-only.
  9. Assignment rules execute. Lead and Case assignment rules are evaluated.
  10. Auto-response rules execute. Auto-response rules for Leads and Cases are evaluated.
  11. Workflow rules execute. Any active workflow rules on the object are evaluated. If workflow field updates fire, the record is updated, and before and after triggers fire again (re-triggering).
  12. Escalation rules execute. Case escalation rules are evaluated.
  13. Flow automations execute. After-save record-triggered Flows and any remaining Process Builder processes execute.
  14. Entitlement rules execute. Entitlement rules for Cases are evaluated.
  15. If the record was updated by a workflow field update, the before and after triggers fire again on the updated record. This is the re-evaluation step. Validation rules, duplicate rules, and the rest of the order of execution do NOT re-execute during re-evaluation — only triggers and flows.
  16. Criteria-based sharing rules are evaluated. Salesforce recalculates sharing rules based on the record’s field values.
  17. The DML operation is committed to the database. The transaction is finalized.
  18. Post-commit logic executes. This includes sending emails, enqueuing asynchronous Apex (future methods, queueable jobs), and executing outbound messages.

Recursion

Recursion occurs when a trigger’s logic causes the same trigger to fire again. The most common scenario is when an after trigger updates records of the same object. For example, if an after update trigger on Account modifies and updates other Account records, those updates fire the Account trigger again. If the trigger does not guard against this, it enters an infinite loop until it hits the governor limit for maximum trigger depth (16 levels).

Static Variables to Prevent Infinite Loops

The standard approach to prevent recursion is to use a static variable as a flag. Static variables persist for the duration of the transaction but are reset between transactions.

public class TriggerRecursionControl {
    public static Boolean hasRun = false;
}

In the trigger handler:

public class AccountTriggerHandler extends TriggerHandler {

    protected override void afterUpdate(List<sObject> newRecords, Map<Id, sObject> oldRecordMap) {
        if (TriggerRecursionControl.hasRun) {
            return; // Exit to prevent recursion
        }
        TriggerRecursionControl.hasRun = true;

        // Logic that updates other Account records
        List<Account> accountsToUpdate = new List<Account>();
        // ... build the list ...
        if (!accountsToUpdate.isEmpty()) {
            update accountsToUpdate;
        }
    }
}

A more sophisticated approach uses a Set<Id> to track which specific records have already been processed, rather than a blanket boolean flag:

public class TriggerRecursionControl {
    private static Set<Id> processedIds = new Set<Id>();

    public static Boolean hasBeenProcessed(Id recordId) {
        return processedIds.contains(recordId);
    }

    public static void markProcessed(Id recordId) {
        processedIds.add(recordId);
    }

    public static void markProcessed(Set<Id> recordIds) {
        processedIds.addAll(recordIds);
    }
}

Usage in the handler:

protected override void afterUpdate(List<sObject> newRecords, Map<Id, sObject> oldRecordMap) {
    List<Account> accountsToProcess = new List<Account>();
    for (sObject rec : newRecords) {
        if (!TriggerRecursionControl.hasBeenProcessed(rec.Id)) {
            accountsToProcess.add((Account) rec);
            TriggerRecursionControl.markProcessed(rec.Id);
        }
    }

    if (accountsToProcess.isEmpty()) {
        return;
    }

    // Process only records that have not been handled yet
    List<Account> accountsToUpdate = new List<Account>();
    // ... build the list ...
    if (!accountsToUpdate.isEmpty()) {
        update accountsToUpdate;
    }
}

This approach is better because it allows the trigger to run for records that genuinely need processing while skipping records that have already been handled.


Best Practices for Triggers

Here is a consolidated list of best practices that every Salesforce developer should follow when writing triggers.

1. One Trigger Per Object

As discussed earlier, create exactly one trigger per object. This gives you a single entry point and full control over the order of execution. Route all logic through a handler class.
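To make the risk concrete, here is a sketch of the anti-pattern, using two hypothetical triggers (AccountTriggerA and AccountTriggerB are illustrative names, each living in its own file). Salesforce does not guarantee their relative execution order.

```apex
// ANTI-PATTERN: two triggers on the same object and event.
// Salesforce does not define which fires first, so the final
// Description value can differ between orgs or deployments.
trigger AccountTriggerA on Account (before update) {
    for (Account acc : Trigger.new) {
        acc.Description = 'Set by A';
    }
}

trigger AccountTriggerB on Account (before update) {
    for (Account acc : Trigger.new) {
        acc.Description = 'Set by B';
    }
}
```

With a single trigger routing everything through one handler, the order of the handler's method calls is explicit in code, so the outcome is deterministic.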

2. Keep All Logic in Handler Classes

The trigger body should contain nothing more than instantiation of the handler and a single method call. No SOQL, no DML, no loops, no conditional logic in the trigger itself.

// GOOD: Thin trigger
trigger CaseTrigger on Case (before insert, after insert, before update, after update) {
    new CaseTriggerHandler().run();
}

// BAD: Fat trigger with inline logic
trigger CaseTrigger on Case (before insert) {
    for (Case c : Trigger.new) {
        // SOQL inside a loop: one query per record, so a batch of
        // 101+ Cases exceeds the 100-SOQL-query governor limit
        Account a = [SELECT Name FROM Account WHERE Id = :c.AccountId];
        c.Description = 'Case for ' + a.Name;
    }
}

3. Bulkify Everything

Never put SOQL queries or DML statements inside loops. Always use the Collect-Query-Act pattern. Use Set<Id> to gather IDs, Map<Id, sObject> for lookups, and List<sObject> for batch DML.
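As a concrete sketch of the pattern (the Opportunity-to-Account example here is illustrative, not part of the project later in this installment):

```apex
// COLLECT: gather the parent IDs from the trigger batch
Set<Id> accountIds = new Set<Id>();
for (Opportunity opp : Trigger.new) {
    if (opp.AccountId != null) {
        accountIds.add(opp.AccountId);
    }
}

// QUERY: one SOQL statement for the entire batch
Map<Id, Account> accountMap = new Map<Id, Account>(
    [SELECT Id, Industry FROM Account WHERE Id IN :accountIds]
);

// ACT: modify in memory, then one DML statement.
// A Map keyed by Id prevents "duplicate id in list" errors when
// several Opportunities share the same parent Account.
Map<Id, Account> accountsToUpdate = new Map<Id, Account>();
for (Opportunity opp : Trigger.new) {
    Account acc = accountMap.get(opp.AccountId);
    if (acc != null && acc.Industry == null) {
        acc.Industry = 'Unknown';
        accountsToUpdate.put(acc.Id, acc);
    }
}
if (!accountsToUpdate.isEmpty()) {
    update accountsToUpdate.values();
}
```

Whether the trigger receives 1 record or 200, this code issues exactly one query and at most one DML statement.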

4. Use Static Variables for Recursion Control

Implement recursion prevention using static variables, either a simple boolean flag or a Set<Id> for per-record tracking. Without recursion control, your triggers can re-fire endlessly and consume governor limits.

5. Use Maps for Efficient Lookups

When you need to look up related records, query them into a Map and access them by ID. This avoids repeated queries and keeps your code O(n) instead of O(n^2).

// GOOD: Query into a Map, look up by ID
Map<Id, Account> accountMap = new Map<Id, Account>(
    [SELECT Id, Name, Industry FROM Account WHERE Id IN :accountIds]
);
Account acc = accountMap.get(someOpportunity.AccountId);

6. Avoid Hardcoded IDs

Never put Record Type IDs, Profile IDs, or any other Salesforce IDs directly in your trigger code. IDs are different between sandbox and production environments. Instead, query by name or DeveloperName, or use Custom Metadata Types.

// BAD: Hardcoded ID
if (acc.RecordTypeId == '012000000000ABC') { ... }

// GOOD: Query by DeveloperName
Id enterpriseRecordTypeId = Schema.SObjectType.Account
    .getRecordTypeInfosByDeveloperName()
    .get('Enterprise')
    .getRecordTypeId();
if (acc.RecordTypeId == enterpriseRecordTypeId) { ... }

7. Write Comprehensive Tests

Every trigger must have corresponding test classes with at least 75% code coverage (the minimum for deployment). In practice, aim for 90%+ coverage and test both positive and negative scenarios. Test bulk operations by inserting 200+ records in your test methods.

We will cover Apex testing in detail in Part 45 of this series.

8. Use Custom Metadata for Configuration

Instead of hardcoding field values, thresholds, or business rules in your trigger, store them in Custom Metadata Types. This allows admins to modify behavior without code changes or deployments.

// Query custom metadata for trigger configuration
Trigger_Setting__mdt setting = Trigger_Setting__mdt.getInstance('AccountTrigger');
if (setting != null && setting.Is_Active__c) {
    // Run the trigger logic
}

9. Use addError() for Validation

When you need to prevent a record from being saved, use the addError() method on the sObject in a before trigger. This displays the error message to the user and rolls back the operation for that record.

for (Account acc : Trigger.new) {
    if (acc.AnnualRevenue != null && acc.AnnualRevenue < 0) {
        acc.AnnualRevenue.addError('Annual Revenue cannot be negative.');
    }
}

10. Filter Records Before Processing

Not every record in Trigger.new needs to be processed. Check whether the relevant fields have actually changed (in update triggers) before doing any work.

protected override void afterUpdate(List<sObject> newRecords, Map<Id, sObject> oldRecordMap) {
    List<Opportunity> stageChangedOpps = new List<Opportunity>();
    for (sObject rec : newRecords) {
        Opportunity newOpp = (Opportunity) rec;
        Opportunity oldOpp = (Opportunity) oldRecordMap.get(newOpp.Id);
        // Only process Opportunities where the Stage actually changed
        if (newOpp.StageName != oldOpp.StageName) {
            stageChangedOpps.add(newOpp);
        }
    }
    if (!stageChangedOpps.isEmpty()) {
        // Process only the filtered list
    }
}

PROJECT: Auto-Generating Tasks for Records

Let us put everything together with a hands-on project. We will create a trigger and handler that automatically creates follow-up Tasks when Opportunities are created or updated to a specific stage.

Requirements

  1. When a new Opportunity is created with the Stage “Qualification,” create a Task assigned to the Opportunity Owner with the subject “Initial qualification follow-up” and a due date 3 days from today.
  2. When an existing Opportunity’s Stage changes to “Proposal/Price Quote,” create a Task with the subject “Prepare and send proposal” and a due date 5 days from today.
  3. The solution must be bulkified.
  4. The solution must use the trigger handler pattern.
  5. The solution must include recursion control.

Step 1: Create the Recursion Control Class

public class OpportunityRecursionControl {
    // One shared set covers both task-creation paths. Tradeoff: an
    // Opportunity marked here during afterInsert is also skipped by
    // afterUpdate within the same transaction; use separate sets if
    // both paths must fire for the same record in one transaction.
    private static Set<Id> processedForTaskCreation = new Set<Id>();

    public static Boolean hasBeenProcessed(Id oppId) {
        return processedForTaskCreation.contains(oppId);
    }

    public static void markProcessed(Id oppId) {
        processedForTaskCreation.add(oppId);
    }

    public static void markProcessed(Set<Id> oppIds) {
        processedForTaskCreation.addAll(oppIds);
    }
}

Step 2: Create the Trigger Handler

public class OpportunityTriggerHandler extends TriggerHandler {

    protected override void afterInsert(List<sObject> newRecords) {
        createQualificationTasks((List<Opportunity>) newRecords);
    }

    protected override void afterUpdate(List<sObject> newRecords, Map<Id, sObject> oldRecordMap) {
        createProposalTasks(
            (List<Opportunity>) newRecords,
            (Map<Id, Opportunity>) oldRecordMap
        );
    }

    private void createQualificationTasks(List<Opportunity> newOpps) {
        List<Task> tasksToInsert = new List<Task>();

        for (Opportunity opp : newOpps) {
            if (opp.StageName == 'Qualification'
                && !OpportunityRecursionControl.hasBeenProcessed(opp.Id)) {

                tasksToInsert.add(new Task(
                    Subject = 'Initial qualification follow-up',
                    WhatId = opp.Id,
                    OwnerId = opp.OwnerId,
                    ActivityDate = Date.today().addDays(3),
                    Status = 'Not Started',
                    Priority = 'Normal',
                    Description = 'Follow up on newly created Opportunity in Qualification stage.'
                ));

                OpportunityRecursionControl.markProcessed(opp.Id);
            }
        }

        if (!tasksToInsert.isEmpty()) {
            insert tasksToInsert;
        }
    }

    private void createProposalTasks(
        List<Opportunity> newOpps,
        Map<Id, Opportunity> oldOppMap
    ) {
        List<Task> tasksToInsert = new List<Task>();

        for (Opportunity opp : newOpps) {
            Opportunity oldOpp = oldOppMap.get(opp.Id);

            // Only create a task if the stage CHANGED to Proposal/Price Quote
            if (opp.StageName == 'Proposal/Price Quote'
                && oldOpp.StageName != 'Proposal/Price Quote'
                && !OpportunityRecursionControl.hasBeenProcessed(opp.Id)) {

                tasksToInsert.add(new Task(
                    Subject = 'Prepare and send proposal',
                    WhatId = opp.Id,
                    OwnerId = opp.OwnerId,
                    ActivityDate = Date.today().addDays(5),
                    Status = 'Not Started',
                    Priority = 'High',
                    Description = 'Opportunity has moved to Proposal/Price Quote. Prepare the proposal document.'
                ));

                OpportunityRecursionControl.markProcessed(opp.Id);
            }
        }

        if (!tasksToInsert.isEmpty()) {
            insert tasksToInsert;
        }
    }
}

Step 3: Create the Trigger

trigger OpportunityTrigger on Opportunity (
    before insert, before update, before delete,
    after insert, after update, after delete, after undelete
) {
    new OpportunityTriggerHandler().run();
}

How It Works

  1. A user creates 5 Opportunities, 3 of which are in the Qualification stage. The trigger fires once for the batch of 5. The handler loops through all 5, identifies the 3 that match the criteria, creates 3 Task records in a single list, and inserts them with one DML statement.

  2. An admin uses Data Loader to update 10,000 Opportunities, moving 2,000 of them to the Proposal/Price Quote stage. Salesforce processes them in 50 batches of 200. For each batch, the handler identifies the Opportunities that changed to the target stage, builds a list of Tasks, and inserts them in one DML call. The recursion control ensures that if the Task insertion triggers any downstream logic that touches the Opportunities again, the same Opportunities are not processed a second time.

  3. The trigger is registered for all seven events, but the handler only implements afterInsert and afterUpdate. The other event methods in the base TriggerHandler class are no-ops, so registering for extra events adds only a negligible dispatch cost. This future-proofs the trigger: when new requirements arise, you add logic to the handler without modifying the trigger itself.
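To illustrate that last point: if a new requirement arrived, say defaulting a field on creation, only the handler file would change. A hypothetical addition (the NextStep default is an invented requirement for illustration):

```apex
// Added to OpportunityTriggerHandler; OpportunityTrigger is untouched
// because it already registers for the before insert event.
protected override void beforeInsert(List<sObject> newRecords) {
    for (sObject rec : newRecords) {
        Opportunity opp = (Opportunity) rec;
        // Before-trigger context: assign directly, no extra DML needed
        if (String.isBlank(opp.NextStep)) {
            opp.NextStep = 'Schedule discovery call';
        }
    }
}
```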

Testing the Project

While we will cover testing in depth in Part 45, here is a preview of what the test class would look like:

@isTest
private class OpportunityTriggerHandlerTest {

    @isTest
    static void testQualificationTaskCreation() {
        Account testAccount = new Account(Name = 'Test Account');
        insert testAccount;

        List<Opportunity> opps = new List<Opportunity>();
        for (Integer i = 0; i < 200; i++) {
            opps.add(new Opportunity(
                Name = 'Test Opp ' + i,
                AccountId = testAccount.Id,
                StageName = 'Qualification',
                CloseDate = Date.today().addDays(30)
            ));
        }

        Test.startTest();
        insert opps;
        Test.stopTest();

        List<Task> tasks = [
            SELECT Id, Subject, WhatId
            FROM Task
            WHERE Subject = 'Initial qualification follow-up'
        ];
        System.assertEquals(200, tasks.size(),
            'A task should be created for each Opportunity in Qualification stage.');
    }

    @isTest
    static void testProposalTaskCreation() {
        Account testAccount = new Account(Name = 'Test Account');
        insert testAccount;

        List<Opportunity> opps = new List<Opportunity>();
        for (Integer i = 0; i < 200; i++) {
            opps.add(new Opportunity(
                Name = 'Test Opp ' + i,
                AccountId = testAccount.Id,
                StageName = 'Prospecting',
                CloseDate = Date.today().addDays(30)
            ));
        }
        insert opps;

        for (Opportunity opp : opps) {
            opp.StageName = 'Proposal/Price Quote';
        }

        Test.startTest();
        update opps;
        Test.stopTest();

        List<Task> tasks = [
            SELECT Id, Subject, WhatId
            FROM Task
            WHERE Subject = 'Prepare and send proposal'
        ];
        System.assertEquals(200, tasks.size(),
            'A task should be created for each Opportunity that moved to Proposal/Price Quote.');
    }

    @isTest
    static void testNoTaskForOtherStages() {
        Account testAccount = new Account(Name = 'Test Account');
        insert testAccount;

        Opportunity opp = new Opportunity(
            Name = 'Test Opp',
            AccountId = testAccount.Id,
            StageName = 'Prospecting',
            CloseDate = Date.today().addDays(30)
        );

        Test.startTest();
        insert opp;
        Test.stopTest();

        List<Task> tasks = [
            SELECT Id FROM Task WHERE WhatId = :opp.Id
        ];
        System.assertEquals(0, tasks.size(),
            'No task should be created for Opportunities not in a qualifying stage.');
    }
}

Notice how the bulk test methods insert 200 records to verify that the trigger is properly bulkified. This is a critical testing practice: a test that inserts only a single record will never catch bulkification issues.
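A complementary technique, shown here as a hedged sketch: assert on governor-limit consumption inside the test, so a query or DML statement sneaking into a loop fails the build even when the functional assertions still pass. Limits.getQueries() and Limits.getDmlStatements() are standard Apex system methods; the thresholds below are illustrative.

```apex
Test.startTest();
insert opps; // 200 records; this counts as one DML statement
Integer queriesUsed = Limits.getQueries();
Integer dmlUsed = Limits.getDmlStatements();
Test.stopTest();

// Bulkified code consumes a constant number of queries and DML
// statements regardless of record count. If either number scaled
// with the 200 records, these assertions would fail.
System.assert(queriesUsed <= 5,
    'Expected a constant number of SOQL queries, got ' + queriesUsed);
System.assert(dmlUsed <= 5,
    'Expected a constant number of DML statements, got ' + dmlUsed);
```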


Summary

Triggers are one of the most powerful tools in the Salesforce developer’s toolkit, and they come with corresponding responsibility. A well-written trigger follows the one-trigger-per-object rule, delegates all logic to handler classes, bulkifies every operation using the Collect-Query-Act pattern, prevents recursion with static variables, and is backed by comprehensive tests that verify bulk behavior.

The key takeaways from this installment:

  • Triggers fire automatically on DML events. Use before triggers to modify the current records and after triggers to work with related records.
  • Seven trigger events cover every possible DML scenario.
  • Trigger context variables (Trigger.new, Trigger.old, Trigger.newMap, Trigger.oldMap, and the boolean flags) give you everything you need to understand what is happening.
  • The handler pattern keeps your triggers thin and your logic testable, reusable, and maintainable.
  • Bulkification is mandatory. The Collect-Query-Act pattern prevents governor limit violations.
  • The order of execution is a long, fixed sequence of steps. Your triggers run at specific points in that sequence, and understanding the full flow is essential for debugging.
  • Best practices are not suggestions — they are requirements for code that survives in production.

In the next installment, Part 45: Apex Tests in Salesforce, we will take a deep dive into writing test classes for your Apex code. We will cover the @isTest annotation, test methods, Test.startTest() and Test.stopTest(), test data factories, mocking, the @TestSetup method, achieving code coverage, and testing triggers, handlers, and utility classes systematically. Testing is the gatekeeper for deployment — if your code does not have sufficient coverage, it does not reach production.

See you in Part 45.