Salesforce · 11 min read

Data Storage in Salesforce

How to calculate, monitor, and manage Salesforce data storage — per-record sizes, storage allocation, objects that don't count, and strategies for keeping storage under control.

Part 111: Data Storage in Salesforce

Welcome back to the Salesforce series. We have spent a lot of time talking about how to build things in Salesforce — objects, fields, automation, code, integrations. But at some point, every org runs into a question that has nothing to do with features and everything to do with infrastructure: how much data can we actually store, and what happens when we run out?

Data storage is one of those topics that nobody thinks about until it becomes a problem. And when it becomes a problem, it becomes an urgent one. You cannot create records if your org is over its storage limit. Integrations fail. Batch jobs break. Users start seeing errors. Understanding how Salesforce calculates storage, what counts, what does not count, and how to manage it proactively is essential knowledge for any admin or developer working on a production org.


How Salesforce Allocates Storage

Salesforce gives every org two separate storage pools: data storage and file storage. These are tracked independently. You can be well within your data storage limit and still run out of file storage, or vice versa.

Data Storage Allocation

Data storage is where your records live — Accounts, Contacts, Opportunities, Cases, custom objects, and so on. The amount of data storage your org gets depends on your edition and the number of user licenses.

Here is the general formula:

  • Enterprise Edition: 10 GB base + 20 MB per user license
  • Unlimited Edition: 10 GB base + 120 MB per user license
  • Performance Edition: 10 GB base + 120 MB per user license
  • Professional Edition: 10 GB base + 20 MB per user license
  • Developer Edition: 5 MB base (this is intentionally small)

So if you have an Enterprise Edition org with 500 users, your data storage allocation would be:

10 GB + (500 users × 20 MB) = 10 GB + 10 GB = 20 GB

That sounds like a lot, but orgs with heavy integration traffic, audit logging, or large transaction volumes can burn through 20 GB faster than you would expect.

File Storage Allocation

File storage covers attachments, documents, files uploaded to Salesforce Files (ContentDocument/ContentVersion), and static resources. The allocation formula is similar:

  • Enterprise Edition: 10 GB base + 2 GB per user license
  • Unlimited Edition: 10 GB base + 2 GB per user license
  • Performance Edition: 10 GB base + 2 GB per user license

File storage is almost always the less constrained pool. Most orgs hit data storage limits long before they hit file storage limits, unless they are storing a lot of PDFs, images, or attachments directly in Salesforce.


Per-Record Storage Sizes

This is where the math gets specific. Not every record uses the same amount of storage. Salesforce assigns a fixed storage size to each record based on the object type. These are not estimates — they are the exact values Salesforce uses for storage calculation, regardless of how many fields you have populated on the record.

  • Accounts, Contacts, Leads, Opportunities, Cases, Custom Objects: 2 KB
  • Activities (Tasks, Events): 1 KB
  • Campaigns: 8 KB
  • Campaign Members: 1 KB
  • Email Messages: 2 KB
  • Forecasting Items: 1 KB
  • Knowledge Articles: 4 KB
  • Tags (including tag relationships): 0.5 KB
  • Person Accounts: 4 KB (because they create both an Account and a Contact)

A few things to note here. Custom object records are always 2 KB, regardless of how many fields are on the object. You could have a custom object with 3 fields or 300 fields — each record still counts as 2 KB. The storage is allocated per record, not per field. This also means that records with mostly empty fields use just as much storage as fully populated records.

Doing the Math

Let us say your org has:

  • 2 million Account records (2 KB each = 4 GB)
  • 5 million Contact records (2 KB each = 10 GB)
  • 10 million Activity records (1 KB each = 10 GB)
  • 3 million Custom Object records (2 KB each = 6 GB)

That is 30 GB of data storage from those four objects alone. If you are on Enterprise Edition with 500 users and a 20 GB allocation, you are already 10 GB over your limit. This is not a hypothetical scenario. This is common in orgs that have been running for 5+ years with active integrations.
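
If you want to script this arithmetic rather than doing it by hand, here is a minimal sketch in anonymous Apex. The record counts and the 1 GB = 1,000,000 KB conversion mirror the example above; substitute your own numbers.

// Back-of-the-envelope storage math using the fixed per-record sizes.
// Each entry maps an object label to { record count, size in KB }.
Map<String, List<Decimal>> usage = new Map<String, List<Decimal>>{
    'Accounts'   => new List<Decimal>{ 2000000, 2 },
    'Contacts'   => new List<Decimal>{ 5000000, 2 },
    'Activities' => new List<Decimal>{ 10000000, 1 },
    'Custom'     => new List<Decimal>{ 3000000, 2 }
};
Decimal totalGb = 0;
for (String label : usage.keySet()) {
    Decimal recordCount = usage.get(label)[0];
    Decimal sizeKb      = usage.get(label)[1];
    totalGb += (recordCount * sizeKb) / 1000000; // 1 GB = 1,000,000 KB, as above
}
System.debug('Estimated data storage: ' + totalGb + ' GB'); // expect 30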


Objects That Do Not Count Toward Storage

Here is the good news. Not everything counts. Salesforce has several object types that are explicitly excluded from data storage calculations. Knowing about these can change how you architect your solutions.

Big Objects

Big Objects are designed for storing massive volumes of data — billions of records — without consuming standard data storage. They live on a separate storage infrastructure. There are two types:

  • Standard Big Objects: Salesforce provides these out of the box. FieldHistoryArchive is the most commonly used one. It stores archived field history tracking data.
  • Custom Big Objects: You can define your own Big Objects to store large datasets. They have a __b suffix instead of the usual __c.

Big Objects have limitations — SOQL queries must filter on the object's index fields in the order they are defined, not all field types are supported, and you cannot use them in standard reports. But for archival and analytics on large datasets, they are invaluable because they do not touch your storage quota.
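
For instance, a query against a custom Big Object needs equality filters on the leading index fields, with a range allowed only on the last filtered field. A minimal sketch, assuming a hypothetical Activity_Archive__b Big Object indexed on (Account__c, Activity_Date__c):

// Hypothetical Big Object Activity_Archive__b, indexed on
// (Account__c, Activity_Date__c): equality on the leading index field,
// a range on the last filtered one.
Id accountId = [SELECT Id FROM Account LIMIT 1].Id;
Date cutoff = Date.today().addYears(-5);
List<Activity_Archive__b> archived = [
    SELECT Account__c, Activity_Date__c, Subject__c
    FROM Activity_Archive__b
    WHERE Account__c = :accountId
      AND Activity_Date__c >= :cutoff
];
System.debug(archived.size() + ' archived activities found');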

External Objects

External Objects represent data that lives outside of Salesforce, typically in an external database or service. They use Salesforce Connect (OData or custom adapters) to access that data in real time. Since the actual data is not stored in Salesforce, External Objects consume zero data storage. They have a __x suffix.

This is a powerful pattern. If you have a legacy system with 50 million transaction records, you do not need to migrate them into Salesforce. You can expose them as an External Object, and users can view them alongside native Salesforce data without any storage impact.
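
Because External Objects behave like regular sObjects in SOQL, that legacy data stays queryable. A minimal sketch, assuming a hypothetical Transaction__x external object mapped through Salesforce Connect:

// Hypothetical external object Transaction__x (note the __x suffix).
// The query is federated to the external system at run time; none of
// these rows consume Salesforce data storage.
List<Transaction__x> recent = [
    SELECT ExternalId, Amount__c, Posted_Date__c
    FROM Transaction__x
    WHERE Posted_Date__c = LAST_N_DAYS:30
];
System.debug(recent.size() + ' external transactions retrieved');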

Platform Events

Platform Events are used for event-driven architecture — publishing and subscribing to messages. The event records are transient. They are stored temporarily in the event bus (72 hours for high-volume platform events, which is what all newly defined platform events are) and do not count toward standard data storage. Once the retention window passes, they are automatically removed.
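
Publishing one is a single call, and nothing you publish shows up in Setup > Storage Usage. A minimal sketch, assuming a hypothetical Storage_Alert__e platform event with a Message__c text field:

// Hypothetical platform event Storage_Alert__e. The published record
// lives only in the event bus for the retention window, then disappears;
// it never counts toward data storage.
Storage_Alert__e alertEvent = new Storage_Alert__e(
    Message__c = 'Data storage crossed the 80% threshold'
);
Database.SaveResult result = EventBus.publish(alertEvent);
System.debug('Event published: ' + result.isSuccess());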

Other Objects That May Surprise You

  • Field History Tracking records that have been archived to FieldHistoryArchive do not count.
  • Change Data Capture events do not count.
  • Data Cloud ingestion records live in Data Cloud’s own storage layer.


Monitoring Your Storage Usage

You should not wait until you get an alert to check your storage. Salesforce provides several tools for monitoring.

Setup Menu

Navigate to Setup > Storage Usage. This page gives you a breakdown of data storage and file storage, showing how much each object is consuming. It also shows your total allocation and current usage percentage. This is the first place to look when you want to understand where your storage is going.

The Storage Usage API

For programmatic monitoring, you can query storage information using the REST API. The endpoint /services/data/vXX.0/limits/ returns a JSON response that includes DataStorageMB with Max and Remaining values; from within Apex, the System.OrgLimits class exposes the same numbers without a callout. Either way, you can build a scheduled job that checks usage daily and sends an alert when it crosses a threshold, say 80%.

// Simple scheduled job to check storage
global class StorageMonitor implements Schedulable {
    global void execute(SchedulableContext ctx) {
        Map<String, System.OrgLimit> limitsMap = System.OrgLimits.getMap();
        System.OrgLimit dataStorage = limitsMap.get('DataStorageMB');

        Decimal percentUsed = (Decimal.valueOf(dataStorage.getValue()) /
                               Decimal.valueOf(dataStorage.getLimit())) * 100;

        if (percentUsed > 80) {
            // Send notification to admin
            Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
            mail.setToAddresses(new String[]{'admin@yourcompany.com'});
            mail.setSubject('Storage Alert: ' + percentUsed.setScale(1) + '% used');
            mail.setPlainTextBody('Data storage is at ' + percentUsed.setScale(1) +
                '%. Please review storage usage.');
            Messaging.sendEmail(new Messaging.SingleEmailMessage[]{mail});
        }
    }
}
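
To put the job on a schedule, run something like this once from anonymous Apex (the cron expression, daily at 6:00 AM, is just an example):

// Schedule StorageMonitor to run every day at 6:00 AM.
System.schedule('Daily Storage Monitor', '0 0 6 * * ?', new StorageMonitor());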

Reports and Dashboards

You can also create a report on the “Storage Usage” report type (if enabled in your org) to visualize storage trends over time. Pair it with a dashboard that your admin team checks regularly.


Strategies for Managing Storage

When you are approaching your limit — or already over it — here are the practical strategies that actually work.

1. Data Archival with Big Objects

Move old records that are rarely accessed into Big Objects. This is the most Salesforce-native approach. For example, you might archive Activity records older than two years into a custom Big Object. You keep the data accessible for compliance or reporting, but it no longer counts against your storage.

The process typically involves a batch job that reads old records, inserts them into the Big Object, and then deletes the originals.
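
Here is a minimal sketch of that pattern, assuming a hypothetical Task_Archive__b custom Big Object with fields mirroring the Task data being archived:

// Copies Tasks older than two years into a hypothetical Big Object
// Task_Archive__b. Deleting the originals is left to a follow-up batch,
// because Big Object and standard-object DML cannot share a transaction.
global class TaskArchiveBatch implements Database.Batchable<SObject> {
    global Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator(
            'SELECT Id, Subject, ActivityDate FROM Task ' +
            'WHERE ActivityDate < LAST_N_YEARS:2');
    }
    global void execute(Database.BatchableContext bc, List<Task> scope) {
        List<Task_Archive__b> archives = new List<Task_Archive__b>();
        for (Task t : scope) {
            archives.add(new Task_Archive__b(
                Task_Id__c       = t.Id,
                Subject__c       = t.Subject,
                Activity_Date__c = t.ActivityDate));
        }
        // insertImmediate is the DML call for Big Objects.
        Database.insertImmediate(archives);
    }
    global void finish(Database.BatchableContext bc) {
        // Chain a second batch here that deletes (and purges from the
        // Recycle Bin) the Tasks that were successfully archived.
    }
}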

2. External Storage Solutions

For truly large datasets, move data out of Salesforce entirely. Common destinations include:

  • AWS S3 or Azure Blob Storage for file-heavy data
  • Snowflake, BigQuery, or Redshift for analytical data
  • Heroku Postgres for transactional data that still needs to integrate with Salesforce

You can use Salesforce Connect with External Objects to keep the data queryable from within Salesforce even after moving it out.

3. Delete What You Do Not Need

This sounds obvious but is often overlooked. Common candidates for deletion:

  • Old email message records from years-old cases
  • Duplicate records that were never merged
  • Orphaned records from decommissioned integrations
  • Debug logs and API event logs that accumulate over time
  • Old campaign member records from campaigns that ended years ago

Run reports to identify which objects are consuming the most storage, then work with business stakeholders to define retention policies.

4. Recycle Bin Awareness

When you delete records, they go to the Recycle Bin and stay there for 15 days. During that time, they still count toward your storage. If you need to free up storage immediately, you need to empty the Recycle Bin. In Apex, use Database.emptyRecycleBin() to permanently remove records.
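
In practice that looks like this (a sketch; the age filter is illustrative, and large volumes belong in a batch job):

// Delete stale records, then purge them from the Recycle Bin so the
// storage is reclaimed now instead of 15 days from now.
List<Task> stale = [SELECT Id FROM Task
                    WHERE ActivityDate < LAST_N_YEARS:5 LIMIT 200];
delete stale;
Database.emptyRecycleBin(stale);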

5. Optimize Your Data Model

Sometimes storage bloat is a design problem. If you are creating child records for things that could be handled with a multi-select picklist, a long text area, or a JSON blob in a single field, you are creating unnecessary records. Every record is at least 1 KB. If you are creating millions of junction object records where a different design would work, reconsider the architecture.

6. Purchase Additional Storage

When all else fails, Salesforce sells additional data storage. As of recent pricing, additional storage blocks typically come in increments of 500 MB or 1 GB. This is the quickest fix but also the most expensive long-term solution. It should be a last resort after you have exhausted the other strategies.


Planning for Future Storage Needs

When scoping a new project or integration, calculate the storage impact before you build it. Here is a simple formula:

Expected records per year × record size (KB) = Annual storage consumption

If a new integration will sync 100,000 records per month of a custom object:

100,000 records/month × 12 months × 2 KB = 2.4 GB per year

Over three years, that is 7.2 GB. If your org currently has 2 GB of headroom, you have a problem before you even deploy. This kind of forward planning prevents storage emergencies and gives you time to implement archival strategies alongside the integration rather than after it.


Section Notes

Data storage in Salesforce is a finite, shared resource that directly impacts your org’s ability to operate. The key takeaways:

  • Storage is split into data and file pools — track them separately.
  • Per-record sizes are fixed — 2 KB for most standard and custom objects, 1 KB for activities, regardless of field count.
  • Big Objects, External Objects, and Platform Events do not count toward standard storage — use them strategically.
  • Monitor proactively — do not wait for the “storage full” error. Use the OrgLimits class, scheduled jobs, or the REST API to build alerts.
  • Archival is the best long-term strategy — Big Objects for Salesforce-native archival, external databases for large-scale offloading.
  • Always calculate storage impact for new features and integrations before building them.

Storage management is not glamorous work, but it is the kind of thing that separates a well-run org from one that is constantly firefighting. Build the monitoring, define the retention policies, and plan ahead. Your future self will thank you.