Apex code runs inside a transaction that is started by an interaction, which represents an execution context. Each context has its own behaviors and is more or less sensitive to governor limits.
Apex Class
An Apex class is defined with an access level:
Access Level | Description |
---|---|
global | Class is accessible by external applications (through a WSDL, for example) |
public | Class is accessible by any other class |
private | Class is only accessible where it is declared (only inner classes and test classes can be private) |
An Apex method is also defined with an access level:
Access Level | Description |
---|---|
global | Method is accessible by external applications (through a WSDL, for example) |
public | Method is accessible by any other class |
private | Method is only accessible by another method of the same class |
protected | Method is only accessible by another method of the same class or subclass |
Sharing
Every class should explicitly declare its sharing level. A class can be declared with the following keywords:
Keyword | Description |
---|---|
with sharing | class enforces sharing rules |
without sharing (default) | class does not enforce sharing rules; this is the default behavior when the keyword is omitted |
inherited sharing | class inherits the sharing model of the calling class; the default behavior is with sharing when there is no caller |
How sharing rules are applied depends on several conditions, which can be exclusive or cumulative:
// Class A calls class B ('xxx' stands for the sharing declaration under test)
public xxx class A {
    public void method1() {
        new B().method2();
    }
}
// Class B is called by class A
public xxx class B {
    public void method2() {
        // something
    }
}
method1 of class A is called from the following contexts:
- Visualforce page
- Lightning component
- Anonymous Apex (without the View All Data permission)
Class A | Class B | Results |
---|---|---|
with sharing | with sharing | with sharing |
with sharing | without sharing | without sharing |
with sharing | inherited sharing | with sharing |
with sharing | omitted | with sharing |
without sharing | with sharing | with sharing |
without sharing | without sharing | without sharing |
without sharing | inherited sharing | without sharing |
without sharing | omitted | without sharing |
omitted | with sharing | with sharing |
omitted | without sharing | without sharing |
omitted | inherited sharing | without sharing from Lightning (since Spring ’18); with sharing from anonymous Apex; without sharing from Visualforce |
omitted | omitted | without sharing from Lightning (since Spring ’18); with sharing from anonymous Apex; without sharing from Visualforce |
The following behaviors were noticed during the test:
- Sharing rules are applied according to the last class in the call chain: if A calls B, which calls C, then the sharing rules of C apply.
- When sharing is omitted, the sharing model of the caller class applies; when there is no caller class, it is without sharing, except for anonymous Apex, which is with sharing.
- When sharing is omitted or inherited, anonymous Apex applies with sharing, but Lightning components and Visualforce pages apply without sharing (see the sketch below).
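As an illustration, here is a minimal sketch of an inherited sharing service class (the class name and query are illustrative):
public inherited sharing class AccountService {
    public List<Account> fetchAccounts() {
        // Inherits the sharing model of the calling class; per the table
        // above, defaults to with sharing when there is no caller
        return [SELECT Id, Name FROM Account LIMIT 10];
    }
}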
Inheritance and polymorphism
Keyword | Description |
---|---|
abstract | Class is abstract and cannot be instantiated; it has to be extended by a concrete class. Method is abstract and must be implemented by a class that extends the super class. |
virtual | The class can be extended by a class. The method can be overridden by a method of the extending class. |
override | The method overrides a method of the super class. |
implements | The class implements an interface that defines method signatures. |
interface | The class is an interface that defines method signatures. |
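As a minimal sketch of these keywords in action (the class and method names are illustrative):
// Virtual class that can be extended, with a method that can be overridden
public virtual class Shape {
    public virtual Decimal area() {
        return 0;
    }
}
// Subclass overriding the virtual method
public class Square extends Shape {
    private Decimal side;
    public Square(Decimal side) {
        this.side = side;
    }
    public override Decimal area() {
        return side * side;
    }
}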
Regarding sharing and inheritance: if the sharing keyword is omitted on A, the sharing model of its superclass B applies; otherwise A's own sharing model applies:
public xxx class C {
    public void method1() {
        new A().method2();
    }
}
public xxx class A extends B {
    public void method2() {
        // something
    }
}
public virtual xxx class B {
}
Transaction
A transaction is a set of operations that must follow the ACID principles:
Principle | Description |
---|---|
Atomicity | The transaction completes entirely or not at all. |
Consistency | The transaction starts in a valid state and finishes in a valid state. |
Isolation | Transactions are independent of each other. |
Durability | Data from a committed transaction is stored permanently. |
To comply with these principles, error handling should be put in place:
Mechanism | Description |
---|---|
Try catch block | Allows errors to be caught and handled |
Database.setSavepoint | Allows defining a state to roll back to |
Database.rollback | Allows rolling back to a given savepoint |
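Here is a minimal sketch combining these mechanisms (the inserted records are illustrative):
Savepoint sp = Database.setSavepoint();
try {
    insert new Account(Name = 'Acme');
    insert new Contact(LastName = 'Doe');
} catch (Exception e) {
    // Revert all DML performed since the savepoint was set
    Database.rollback(sp);
    System.debug('Transaction rolled back: ' + e.getMessage());
}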
It’s also possible to lock records during processing, for example to keep a number generation sequence consistent. Be aware that the records stay locked until the end of the transaction, so another transaction trying to write to these records will get an exception:
List<Account> lockedAccounts = [SELECT Id FROM Account FOR UPDATE];
In Salesforce, a transaction starts when a request is made from an execution context:
- Apex controller
- Trigger
- Batch
- Webservice call
This transaction consumes resources that are subject to governor limits, which are reset for each new transaction. Those limits differ depending on whether the context is:
- Synchronous
- Asynchronous (less restricted limits)
The context of execution also defines the available actions:
- Asynchronous calls are not allowed within an asynchronous context
- Synchronous callouts are not allowed in a trigger context
- Callouts are not allowed after a DML operation
- All asynchronous calls are fired at the end of the transaction
Some annotations allow us to bypass some rules:
Keyword | Description |
---|---|
@future | Makes the method asynchronous |
@future(callout=true) | Makes the method asynchronous and allows callouts from a trigger context (see the sketch below) |
@ReadOnly | Less restrictive limits for read-only database operations from controllers, Schedulable classes, or web services |
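For example, here is a minimal sketch of a future method allowing a callout from a trigger context (the class name and endpoint are hypothetical):
public class CalloutService {
    @future(callout=true)
    public static void notifyExternalSystem(Set<Id> recordIds) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('https://example.com/notify'); // hypothetical endpoint
        req.setMethod('POST');
        req.setBody(JSON.serialize(recordIds));
        HttpResponse res = new Http().send(req);
        System.debug('Callout status: ' + res.getStatusCode());
    }
}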
Static variables are shared across one entire transaction and reset for any new transaction, but there are some use cases where the behavior differs.
Trigger Considerations
Triggers are fired by DML operations on the Salesforce database; they are part of a transaction that depends on the execution context. There can be many triggers in a single transaction, fired sequentially or in cascade, and resources are shared between all of them.
Static variables are also shared between all trigger invocations within one transaction. Now suppose you are doing a bulk operation to insert or update more than 200 records. The bulk operation generates one job that is split into chunks. With a bulk load of 10K records (Bulk API V1 and V2), there will be 50 trigger invocations; each invocation gets a separate set of limits, but the static variables are not reset between executions, as the sketch below illustrates. Try it!
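A minimal sketch of this behavior, assuming a hypothetical handler class called from an Account trigger:
public class AccountTriggerHandler {
    // Shared across all trigger invocations of the same transaction,
    // even when a bulk operation is split into 200-record chunks
    public static Integer invocationCount = 0;

    public static void handleAfterInsert(List<Account> newAccounts) {
        invocationCount++;
        System.debug('Invocation #' + invocationCount + ' for '
            + newAccounts.size() + ' records');
    }
}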
Triggers are split into 2 events, before and after, for each operation type (insert, update, delete, merge, upsert, undelete). Avoid DML operations on other objects in before events, as their purpose is mainly to validate and prepare the data of the current object, and avoid DML operations on the current SObject in after events, as that would fire the trigger flow again. A skeleton following these guidelines is shown below.
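A minimal trigger skeleton (the field default is illustrative):
trigger AccountTrigger on Account (before insert, after insert) {
    if (Trigger.isBefore) {
        // Validate and prepare the current records; changes made to
        // Trigger.new in before events are saved without any explicit DML
        for (Account acc : Trigger.new) {
            if (String.isBlank(acc.Description)) {
                acc.Description = 'Created by trigger'; // illustrative default
            }
        }
    } else if (Trigger.isAfter) {
        // Work on related objects here rather than updating the current
        // records, which would fire the trigger flow again
        System.debug(Trigger.new.size() + ' accounts inserted');
    }
}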
Batch Considerations
Batches are asynchronous processes that have less restrictive limits. A batch run is split into 3 kinds of transactions:
- start
- execute
- finish
When scheduling or executing the batch, we create one instance of the batch with a scope size that determines how many records each job processes, and therefore how many jobs will run. Each job gets a new set of limits. The start and finish methods are called only once, while the execute method is called many times. Instance variables are reset between executions unless you implement Database.Stateful at the class level, as in the sketch below.
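A minimal sketch of a stateful batch (the query and counter are illustrative); without Database.Stateful, the processed counter would restart at 0 for every execute call:
public class AccountBatch implements Database.Batchable<SObject>, Database.Stateful {
    private Integer processed = 0;

    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator('SELECT Id FROM Account');
    }

    public void execute(Database.BatchableContext bc, List<SObject> scope) {
        // Kept between executions thanks to Database.Stateful
        processed += scope.size();
    }

    public void finish(Database.BatchableContext bc) {
        System.debug('Total records processed: ' + processed);
    }
}
It can be launched with Database.executeBatch(new AccountBatch(), 200);, where 200 is the scope size.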
Platform Events Considerations
Platform Events are a publish/subscribe feature that lets you send messages through a dedicated event object. Events can be published from a transaction:
EventBus.publish(<collection of event>);
There are 2 ways to configure the publication:
- Immediate action
- Post commit action
Both are processed in a dedicated transaction that cannot be rolled back. Immediate events are published even if the initial transaction fails, while post-commit events are not.
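A minimal publishing sketch, assuming a hypothetical platform event object Order_Event__e with a Status__c field:
List<Order_Event__e> events = new List<Order_Event__e>{
    new Order_Event__e(Status__c = 'Shipped') // hypothetical event and field
};
List<Database.SaveResult> results = EventBus.publish(events);
for (Database.SaveResult sr : results) {
    if (!sr.isSuccess()) {
        System.debug('Event publication failed: ' + sr.getErrors());
    }
}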
Order Of Execution Considerations
Technically speaking, you can write many triggers and many Process Builders for one object, but as a best practice you should keep only one trigger per event and one Process Builder per object (better still, only one trigger per object). The reason is that you cannot control the order in which Salesforce executes your triggers or your Process Builders, which can lead to inconsistency and extra resource consumption in the process flow.
Also try not to mix too many triggers, Flows, and Process Builders, as doing so complicates the implementation and its maintenance. The choice depends on many factors, and it is not simple to say whether to go with one, the other, or a hybrid path. Keep in mind that it is generally recommended to implement with point-and-click solutions first and to consider triggers only when those solutions fall short. In practice, base your choice on the size of the project, the complexity of the implementation, performance (some Flows or Process Builders are not bulkified), maintainability, code sharing (simple or complex logic that has to be shared across many implementations)…
Conclusion
The objective of this article is to give you some insight into transactions and execution contexts. It is surely not exhaustive, but it should be enough to show the importance of this topic before starting any new design or implementation.
Hope you enjoyed reading this article, see you soon for the next one ...