Performance and Scale Overview
The following information is designed to help organizations improve the efficiency and scalability of PSA in managing large volumes of data.
Depending on your organization's business needs, we recommend you or your administrator review the feature configuration options listed in the table below to optimize the speed, stability, and reliability of PSA.
General Recommendations
We recommend adjusting batch sizes and scheduling jobs during off-peak hours, if possible. These optimizations minimize the impact on system performance and reduce the risk of reaching or exceeding Salesforce platform limits.
You can set the schedule, day, time, and frequency for bulk jobs such as recalculating schedules, processing actuals, or updating project variance values. Adjusting batch sizes and carefully planning job schedules can significantly enhance performance.
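As an illustration, the following Apex sketch wraps a bulk job in a Schedulable class so it can run during off-peak hours. NightlyActualsJob and RecalculateActualsBatch are hypothetical names used only for illustration; substitute the batch class your org actually runs.

```apex
// Hypothetical wrapper for running a bulk job off-peak. The class names here
// are placeholders, not part of PSA.
global class NightlyActualsJob implements Schedulable {
    global void execute(SchedulableContext ctx) {
        // A scope size of 200 keeps each chunk comfortably within governor
        // limits; tune it to the volumes you process.
        Database.executeBatch(new RecalculateActualsBatch(), 200);
    }
}
```

You can then activate the job from Anonymous Apex with `System.schedule('Nightly actuals recalculation', '0 0 2 * * ?', new NightlyActualsJob());`, where the cron expression runs the job at 02:00 every day.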
Custom indexes can make queries more selective, especially when filtering on fields with high cardinality (fields containing many unique values), which improves query efficiency.
Implement custom indexes with Salesforce's indexing limits in mind, and consider the potential impact on overall system performance.
- We recommend working with Salesforce support to ensure that custom indexing is applied correctly and does not inadvertently lead to performance degradation in other areas.
- If your business processes involve large volumes of schedule data, such as resource management or project planning, we recommend applying a custom index to the Start Date and End Date fields on the Schedule object to improve responsiveness and performance in these areas. For example, in an organization with 1.7 million schedule records, this indexing made schedule queries up to 10 times faster. An example of the kind of query that benefits is sketched below.
- We recommend creating custom indexes on the Currency and Resource Role fields on the Rate Card object if your organization processes large volumes of rate card data.
For more information, see the Salesforce Help.
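To illustrate why these date fields are good index candidates, the following Anonymous Apex sketch runs the kind of date-bounded Schedule query that a custom index makes selective. The pse__ API names are assumptions based on the PSA managed package namespace; verify them in your org before running.

```apex
// A date-bounded Schedule query; with custom indexes on Start Date and End
// Date, this filter becomes selective and avoids a full object scan.
// pse__Schedule__c, pse__Start_Date__c, and pse__End_Date__c are assumed
// API names; confirm them in Setup first.
Date windowStart = Date.today();
Date windowEnd = windowStart.addMonths(1);
List<pse__Schedule__c> schedules = [
    SELECT Id, pse__Start_Date__c, pse__End_Date__c
    FROM pse__Schedule__c
    WHERE pse__Start_Date__c <= :windowEnd
      AND pse__End_Date__c >= :windowStart
    LIMIT 2000
];
System.debug(schedules.size() + ' schedules overlap the window.');
```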
- We recommend you do not create custom triggers on PSA system objects such as Schedule Exception, Transaction, Revenue Forecast Version Detail, Utilization Engine, and Project Actuals. These system objects are integral to core functionalities in PSA. Creating custom triggers to control their behavior can disrupt or conflict with the built-in processes that manage the relationships and data between these objects. This can lead to errors, data inconsistencies, or performance degradation.
- As custom triggers execute in real-time, they can create bottlenecks in multi-user environments. Multiple users or automated processes triggering actions on system objects simultaneously can lead to delays or timeout errors, further impacting scalability when the organization grows or the number of concurrent operations increases.
- System objects often have intricate dependencies managed by native PSA triggers. Introducing custom logic can conflict with these built-in processes, leading to errors or unexpected behavior, which can reduce system stability and impact scalability in larger organizations.
- Custom triggers require ongoing maintenance and testing, especially when upgrading PSA. As organizations scale and increase their customization footprint, ensuring that custom triggers work harmoniously with out-of-the-box functionality adds complexity, which can degrade system performance if not carefully managed.
- We recommend using Utilization Analytics instead of Utilization Calculation. Utilization Analytics is designed to handle larger data volumes and address performance limitations found in Utilization Calculation.
- Utilization Analytics uses Salesforce's Queueable Apex architecture for faster calculations and better scalability, making it suitable for environments with high data volumes.
- Adjust batch sizes to avoid exceeding platform limits when running utilization calculations in Utilization Analytics. You can achieve this by modifying values in Utilization Setup fields, such as Resource Batch Size, Assignment Batch Size, and Timecard Batch Size. These settings control how data is processed in chunks, helping to manage system load effectively, especially in environments with extensive data volumes. Lowering batch sizes can improve stability, while higher values might enhance performance in smaller datasets or optimized environments. A sketch of adjusting these fields follows below.
- Exclude any roles, assignments, resource requests, or timecards that are not relevant to the calculation, such as PTO projects, placeholder assignments, or non-billable resource requests. By focusing only on relevant data, you can significantly reduce the amount of information processed and improve both the speed and efficiency of the calculation.
- Schedule utilization runs during off-peak hours to minimize the impact on system performance. This can be done by setting specific days and times for the calculations, which helps manage large data volumes more effectively.
For more information, see the Utilization Analytics documentation.
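As a minimal sketch, the batch size fields listed above can be tuned programmatically as well as through the setup UI. The object and field API names below are assumptions about how Utilization Setup is exposed in the managed package; verify them in your org before adapting this.

```apex
// Hedged sketch: tune Utilization Analytics batch sizes. All API names here
// are assumptions; verify them in your org.
pse__Utilization_Setup__c setup = [
    SELECT Id, pse__Resource_Batch_Size__c, pse__Assignment_Batch_Size__c,
           pse__Timecard_Batch_Size__c
    FROM pse__Utilization_Setup__c
    LIMIT 1
];
// Lower values favor stability with very large datasets; higher values favor
// throughput in smaller or well-optimized environments.
setup.pse__Resource_Batch_Size__c = 50;
setup.pse__Assignment_Batch_Size__c = 200;
setup.pse__Timecard_Batch_Size__c = 200;
update setup;
```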
- Enable TC Transaction Platform Event on the Advanced Settings custom setting. This enables timecard transaction deltas to be created asynchronously, which improves the speed of the timecard triggers.
- Enable Timecard Async Submit on the Timecard Entry UI Global custom setting. This makes the timecard submission and approval process asynchronous, reducing wait times during submission.
For more information, see Managing Advanced Settings and Timecard Settings.
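As a sketch, both settings can also be enabled from Anonymous Apex. The custom setting and field API names below are assumptions; check the actual definitions under Setup > Custom Settings before running.

```apex
// Hedged sketch: enable asynchronous timecard processing. All API names are
// assumptions about the managed package's custom settings.
pse__Advanced_Settings__c adv = pse__Advanced_Settings__c.getOrgDefaults();
adv.pse__TC_Transaction_Platform_Event__c = true;  // async timecard transaction deltas
upsert adv;

pse__Timecard_Entry_UI_Global__c tcUi = pse__Timecard_Entry_UI_Global__c.getOrgDefaults();
tcUi.pse__Timecard_Async_Submit__c = true;         // async submission and approval
upsert tcUi;
```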
Select the Calculate EVA Actuals Incrementally field on the Est Vs Actuals Settings custom setting. This improves performance by focusing calculations only on modified records, thereby avoiding unnecessary recalculations of unchanged data.
For more information, see Actuals Settings.
- Use Mass Billing Event Generation when creating billing events for more than 1,800 records.
- Modify the generateBatchSize and deleteBatchSize fields in the Billing configuration group to manage the number of records processed during billing event generation and deletion. Adjusting these values can help prevent errors related to Salesforce governor limits, particularly when handling large volumes of billing data. A sketch of adjusting these options is shown below.
- Schedule billing event generation jobs for all active projects at least once a day. Spreading the generation of billing events across multiple smaller daily batches helps to avoid Salesforce governor limits that can occur when attempting to handle large volumes of records in one batch. In addition, when billing events are generated in smaller batches, it becomes easier to identify and address errors without dealing with a large backlog of unprocessed data.
For more information, see Billing Settings and Projects Awaiting Billing Tab.
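Because PSA configuration options are stored as data, the batch sizes above can be adjusted in bulk as well as through the administration UI. The sketch below assumes the options live in a custom object such as pse__Configuration_Option__c with a Value field; this object model and the example values are assumptions, so verify how configuration groups are stored in your org before adapting it.

```apex
// Hedged sketch: adjust billing event batch sizes. The object and field API
// names, and the example values, are assumptions; verify them in your org.
List<pse__Configuration_Option__c> opts = [
    SELECT Id, Name, pse__Value__c
    FROM pse__Configuration_Option__c
    WHERE Name IN ('generateBatchSize', 'deleteBatchSize')
];
for (pse__Configuration_Option__c opt : opts) {
    // Smaller values reduce the risk of hitting governor limits; larger
    // values reduce the total number of batches.
    opt.pse__Value__c = (opt.Name == 'generateBatchSize') ? '100' : '50';
}
update opts;
```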
When optimizing actuals processing, recalculate only modified data rather than processing all data. This approach improves performance by reducing the workload on large volumes of actuals data.
For more information, see Setting the Actuals Processing Mode.
Key Fields and Settings
Field / Setting | Action
---|---
UseActualsCalculateDeltaContinuous | Enables continuous processing of transaction deltas in the background, ensuring near real-time updates to actuals without waiting for scheduled batch jobs. Set this to "true" (the default value) to enable efficient scaling.
CalculationMode | Determines when actuals are calculated. Set it to Scheduled to enable continuous processing of transaction deltas. The alternative option, Immediate, is not suitable for large volumes of data because it calculates changes in real time, which is less efficient. With Scheduled mode and continuous processing enabled, you can achieve more scalable performance. If continuous processing is disabled, scheduled batch jobs still run; this is faster than Immediate mode but slower than continuous processing.
CalculationDeltaBatchSize | Controls the batch size for processing transaction deltas during actuals calculations. Adjusting this setting in the Actuals configuration group enables PSA to handle larger or smaller volumes of data per batch. The default value is 200, but when using continuous processing, you can increase this. We recommend setting it to 1,000; if any errors occur, reduce the batch size.
- Use continuous actuals processing to handle transaction deltas as they occur to improve scalability and performance.
- Increase the CalculationDeltaBatchSize value to 1,000 when dealing with large volumes of data to boost performance, but lower it if you encounter processing errors.
For more information, see Setting the Actuals Processing Mode and Actuals Settings.
The Revenue Forecast Setup object contains several batch setting fields that you can adjust to improve the performance of Revenue Forecasting when handling large volumes of data.
For more information, see Revenue Forecast Setup Fields and Running Project Revenue Forecasts.
- Use the Filters panel to reduce the number of records displayed at one time. Focusing on specific data subsets improves the speed and responsiveness of the Work Planner.
- Consider excluding unnecessary assignments by selecting the Exclude from Planners field on the Assignment object. Any excluded assignments are omitted from totals.
- Adjust the maximum number of assignments, held resource requests, and unheld resource requests displayed in the Work Planner using the Maximum Results Returned properties on the Work Planner Lightning page.
For more information, see Work Planner Lightning Component Properties.
Optimize Gantt for large projects by grouping tasks into nested parent hierarchies. This structure improves performance by breaking down the project into more manageable sections.
Save the project the first time you open Gantt to ensure the Work Breakdown Structure column accurately reflects the task hierarchy. This helps maintain the correct structure and improves usability.
For more information, see Creating Project Hierarchies, Gantt Overview, and Managing Project Tasks from Gantt on a Project Record.
When importing 1,000 or more rate cards from an external system, we recommend you first ensure there are no duplicates in the source data. Using tools such as the Salesforce Bulk API 2.0, you can upload large volumes of data in CSV format. Ensure that each rate card entry is unique based on its combination of key fields such as Resource Role, Account, Region, Practice, Group, Start Date, and End Date. While fields such as Resource Role or Account might appear multiple times, rate cards are only considered duplicates if all key field values are identical. For example, multiple rate cards can exist for the same role, but provided they are associated with different regions, they are not considered duplicates.
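Before inserting the mapped records, you can screen for duplicates on the composite key described above. The sketch below assumes the CSV rows have already been mapped into pse__Rate_Card__c records; the pse__ field names are assumed API names, so verify them against your org's schema.

```apex
// Hedged sketch: keep only the first rate card for each composite key (role,
// account, region, practice, group, start date, end date). Field API names
// are assumptions; verify them before running.
List<pse__Rate_Card__c> incoming = new List<pse__Rate_Card__c>(); // mapped from your CSV

Set<String> seenKeys = new Set<String>();
List<pse__Rate_Card__c> uniqueCards = new List<pse__Rate_Card__c>();
for (pse__Rate_Card__c rc : incoming) {
    // Null-safe concatenation: null key parts render as 'null' in the key.
    String key = rc.pse__Resource_Role__c + '|' + rc.pse__Account__c + '|' +
                 rc.pse__Region__c + '|' + rc.pse__Practice__c + '|' +
                 rc.pse__Group__c + '|' + rc.pse__Start_Date__c + '|' +
                 rc.pse__End_Date__c;
    if (seenKeys.add(key)) {
        uniqueCards.add(rc); // first occurrence of this key is kept
    }
}
insert uniqueCards;
```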
Disable validation for rate cards by setting the RateCardValidator option in the ASM Triggers configuration group to "false". This improves the performance of the import process by bypassing validation checks that are unnecessary for pre-deduplicated data. Once the import is complete, we recommend setting the RateCardValidator option back to "true" so that any rate cards created or edited later are checked for duplicates.
For more information, see the Salesforce Help.
To avoid performance issues when using PSA for PTO projects, where many users have assignments linked to the same project, we recommend excluding non-essential fields from updates or rollups. Since PTO projects are typically non-billable, fields related to financial metrics, such as revenue or billable amounts, might not be required.
Consider excluding:
- Fields related to revenue and billable metrics, such as Total Projected Revenue from Assignments or Total Billable Amount.
- Non-billable and incurred cost fields, such as Non-Billable Incurred Subtotal.
You can achieve this by selecting Exclude from Timecard Expense Rollups on assignments or milestones. A sketch of setting this flag in bulk follows below.
For more information, see Customizing Page Layouts and Rolling Up Timecards.
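As a sketch, the flag can be set in bulk for all assignments on an existing PTO project. The pse__Proj__c, pse__Assignment__c, pse__Project__c, and pse__Exclude_from_Timecard_Expense_Rollups__c API names are assumptions about the managed package; confirm them in your org before running.

```apex
// Hedged sketch: exclude a PTO project's assignments from timecard and
// expense rollups. API names are assumptions; verify them first.
// 'PTO' is an example project name.
Id ptoProjectId = [SELECT Id FROM pse__Proj__c WHERE Name = 'PTO' LIMIT 1].Id;

List<pse__Assignment__c> assignments = [
    SELECT Id, pse__Exclude_from_Timecard_Expense_Rollups__c
    FROM pse__Assignment__c
    WHERE pse__Project__c = :ptoProjectId
      AND pse__Exclude_from_Timecard_Expense_Rollups__c = false
];
for (pse__Assignment__c a : assignments) {
    a.pse__Exclude_from_Timecard_Expense_Rollups__c = true;
}
update assignments;
```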
To avoid performance and scale issues with backlog calculations in PSA, we recommend the following:
- Schedule calculations strategically. Run backlog calculations during off-peak hours or schedule them at intervals (for example, weekly or monthly) to distribute the processing load and prevent performance bottlenecks during peak usage times.
- Optimize batch sizes. Adjust settings such as "Assignment Batch Size" and "Post Process Project Batch Size" to match your organization's needs. Lowering batch sizes can reduce the risk of reaching limits, although it might increase processing time.
- Exclude inactive projects. Ensure that inactive projects are excluded from backlog calculations to avoid unnecessary processing, which can improve overall performance.
For more information, see Backlog Calculations Overview, Scheduling Backlog Calculations, and Scheduled Backlog Settings.
Split large projects into smaller, manageable projects and link them using a project hierarchy to avoid task limits when using the Jira - PSA integration.
Optimize the triggers that run during the creation of project tasks to reduce processing load and enhance performance.
For more information, see Project Hierarchies and Trigger Settings.
For recommendations on optimizing resource searching in orgs with large volumes of resources, see Best Practices for Resource Search at Enterprise Scale.