
[ART-009] APEX Class, Transaction and Context of Execution


APEX code runs inside a transaction that is started by an interaction, which represents a context of execution. Each context has its own behaviors and is more or less sensitive to governor limits.


Apex Class

An Apex class is defined by an access level:

Access Level | Description
global | Class is accessible by external applications, through a WSDL for example
public | Class is accessible by any other class
private | Class is only accessible within the outer class that contains it (inner classes)

An Apex method is also defined by an access level:

Access Level | Description
global | Method is accessible by external applications, through a WSDL for example
public | Method is accessible by any other class
private | Method is only accessible by another method of the same class
protected | Method is only accessible by another method of the same class or a subclass
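
The access levels above can be combined within one class; a minimal sketch, with hypothetical names:

```apex
// Sketch combining class and method access levels (hypothetical names)
public virtual class InvoiceService {
    private Integer counter = 0;            // visible only inside this class

    public void process() {                 // callable by any other class
        increment();
    }

    private void increment() {              // callable only within this class
        counter++;
    }

    protected virtual void log() {          // callable here and in subclasses
        System.debug('counter=' + counter);
    }
}
```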

Sharing

Every class should declare its sharing level explicitly. A class can be annotated with the following keywords:

Keyword | Description
with sharing | The class enforces sharing rules
without sharing | The class does not enforce sharing rules; this is the default behavior for an entry point when the keyword is omitted
inherited sharing | The class inherits the sharing model of the calling class; when it is the entry point, the default behavior is with sharing

How sharing rules are applied depends on several conditions, which can be exclusive or cumulative:

//Use case 1: xxx is a placeholder for the sharing keyword under test
public xxx class A {
    public void method1() {
        new B().method2();
    }
}

//Use case 2
public xxx class B {
    public void method2() {
        //something
    }
}

The method1 of class A is called from the following contexts:

  • Visualforce page
  • Lightning component
  • Anonymous code (without view all data)
Class A | Class B | Result
with sharing | with sharing | with sharing
with sharing | without sharing | without sharing
with sharing | inherited sharing | with sharing
with sharing | omitted | with sharing
without sharing | with sharing | with sharing
without sharing | without sharing | without sharing
without sharing | inherited sharing | without sharing
without sharing | omitted | without sharing
omitted | with sharing | with sharing
omitted | without sharing | without sharing
omitted | inherited sharing | without sharing from Lightning (since Spring ’18), with sharing from anonymous code, without sharing from a Visualforce page
omitted | omitted | without sharing from Lightning (since Spring ’18), with sharing from anonymous code, without sharing from a Visualforce page

The following behaviors were noticed during the test:

  • Sharing rules are applied according to the last class in the call chain: if A calls B, which calls C, then the sharing declaration of C is applied.
  • When sharing is omitted, the class inherits the sharing mode of the calling class; when there is no calling class, it runs without sharing, except for anonymous code, which runs with sharing.
  • When sharing is omitted or inherited, anonymous Apex applies with sharing, but Lightning components and Visualforce pages apply without sharing.

Inheritance and polymorphism

Keyword | Description
abstract | The class is abstract and cannot be instantiated; it has to be extended by another class. An abstract method must be implemented by a class that extends the super class.
virtual | The class can be extended by another class. A virtual method can be overridden by a method of the extending class.
override | The method overrides a method of the super class.
implements | The class implements an interface that defines method signatures.
interface | The class is an interface that defines method signatures.
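
These keywords fit together as in the following sketch (hypothetical names):

```apex
// Minimal sketch of interface / virtual / override (hypothetical names)
public interface Shape {
    Decimal area();
}

public virtual class Rectangle implements Shape {
    protected Decimal w, h;
    public Rectangle(Decimal w, Decimal h) { this.w = w; this.h = h; }
    public virtual Decimal area() { return w * h; }
}

public class Square extends Rectangle {
    public Square(Decimal side) { super(side, side); }
    public override Decimal area() { return w * w; }
}
```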

When the sharing keyword is omitted on A, the sharing model applied depends on B (the super class); otherwise it depends on A:

public xxx class C {
    public void method1() {
        new A().method2();
    }
}

public xxx class A extends B {
    public void method2() {
         //something
    }
}

public virtual xxx class B {

}

Transaction

A transaction is a set of operations that must follow ACID principles:

Principle | Description
Atomicity | The transaction completes entirely or not at all.
Consistency | The transaction starts from a valid state and finishes in a valid state.
Isolation | Transactions are independent of each other.
Durability | Data from a committed transaction is stored permanently.

To comply with these principles, error handling should be put in place:

Mechanism | Description
try/catch block | Allows capturing and handling errors
Database.setSavepoint | Allows setting a state to roll back to
Database.rollback | Allows rolling back to a previously set savepoint
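
These mechanisms combine as follows; a minimal sketch, with hypothetical DML:

```apex
// Sketch of savepoint handling (hypothetical records)
Savepoint sp = Database.setSavepoint();
try {
    insert new Account(Name = 'Parent');
    insert new Contact(LastName = 'Child'); // if this fails...
} catch (DmlException e) {
    Database.rollback(sp); // ...everything since the savepoint is undone
    System.debug('Rolled back: ' + e.getMessage());
}
```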

It’s also possible to lock records during processing to maintain consistency, in a number generation sequence for example. Be aware that the records stay locked until the end of the transaction, meaning that an attempt to write to them from another transaction will raise an exception:

List<Account> accounts = [SELECT Id FROM Account FOR UPDATE];

In Salesforce, a transaction starts when a request is made from a context of execution:

  • Apex controller
  • Trigger
  • Batch
  • Webservice call

This transaction consumes resources that are subject to governor limits, which are reset for each new transaction. Those limits differ when the context is:

  • Synchronous
  • Asynchronous (less restricted limits)

The context of execution also defines the available actions:

  • Asynchronous calls are not allowed from within an asynchronous context
  • Synchronous callouts are not allowed in a trigger context
  • Callouts are not allowed after DML operations in the same transaction
  • All asynchronous calls are fired at the end of the transaction

Some annotations allow us to bypass some rules:

Annotation | Description
@future | Makes the method asynchronous
@future(callout=true) | Makes the method asynchronous and allows a callout from a trigger context
@ReadOnly | Less restrictive limits for read-only database operations from controllers, Schedulable classes, or web services
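
For example, a callout triggered by a DML operation can be moved into a @future method; a sketch, assuming a hypothetical endpoint:

```apex
// Sketch of a @future callout (hypothetical endpoint)
public class CalloutService {
    @future(callout=true)
    public static void notify(Id recordId) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('https://example.com/notify'); // hypothetical endpoint
        req.setMethod('POST');
        req.setBody('{"id":"' + recordId + '"}');
        HttpResponse res = new Http().send(req);
        System.debug(res.getStatusCode());
    }
}
```

Called from a trigger, this method runs asynchronously after the transaction ends, which is why the callout is allowed.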

Static variables are shared across one entire transaction and reset for each new transaction, but there are some use cases where the behavior is quite different.

Trigger Considerations

Triggers are fired on DML operations in the Salesforce database; they are part of a transaction, depending on the context of execution. Many triggers can run in a single transaction, sequentially or in cascade, and resources are shared between all of them.

Static variables are also shared between all trigger invocations within one transaction. Now suppose you are doing a bulk operation to insert or update more than 200 records. The bulk operation generates one job that is split into many chunks. With a bulk load of 10K records (Bulk API V1 and V2), there are 50 trigger invocations; each invocation gets a separate set of limits, but the static variables are not reset between executions. Try it!
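
One common use of this behavior is a recursion guard; a minimal sketch, assuming a hypothetical handler class called from the trigger:

```apex
// Sketch of a static guard shared across trigger invocations
public class AccountTriggerHandler {
    // Shared for the whole transaction, not reset between chunks
    public static Set<Id> processedIds = new Set<Id>();

    public static void handle(List<Account> records) {
        for (Account a : records) {
            if (processedIds.contains(a.Id)) {
                continue; // already handled earlier in this transaction
            }
            processedIds.add(a.Id);
            // ... actual processing
        }
    }
}
```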

Triggers are split into 2 events, before and after, for each operation type (insert, update, delete, merge, upsert, undelete). You should avoid making DML operations on other objects in before events, as their purpose is mainly to validate and prepare data of the current object, and avoid making DML operations on the current SObject in after events, as that fires the trigger flow again.
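
A sketch of that split, with hypothetical objects and fields:

```apex
// Sketch of before/after responsibilities (hypothetical logic)
trigger AccountTrigger on Account (before insert, before update, after insert) {
    if (Trigger.isBefore) {
        // Validate and prepare the current records: no DML needed,
        // field changes on Trigger.new are saved automatically
        for (Account a : Trigger.new) {
            if (String.isBlank(a.Name)) {
                a.addError('Name is required');
            }
        }
    } else if (Trigger.isAfter && Trigger.isInsert) {
        // Safe place for DML on other objects
        List<Contact> contacts = new List<Contact>();
        for (Account a : Trigger.new) {
            contacts.add(new Contact(LastName = 'Default', AccountId = a.Id));
        }
        insert contacts;
    }
}
```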

Batch Considerations

Batches are asynchronous processes that have less restrictive limits. A batch is generally split into 3 kinds of transactions:

  • start
  • execute
  • finish

When scheduling or executing the batch, we create one instance of the batch with a scope (batch size) that determines the number of execute jobs to run. Each job gets a new set of limits. The start and finish methods are only called once, while the execute method is called many times. Instance variables are reset between executions unless the class implements Database.Stateful.
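
The three methods map to the Database.Batchable interface; a minimal sketch, with hypothetical counting logic:

```apex
// Minimal stateful batch sketch (hypothetical logic)
public class AccountBatch implements Database.Batchable<SObject>, Database.Stateful {
    // Kept across execute calls thanks to Database.Stateful
    private Integer processed = 0;

    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator('SELECT Id FROM Account');
    }

    public void execute(Database.BatchableContext bc, List<SObject> scope) {
        processed += scope.size(); // each call runs with a fresh set of limits
    }

    public void finish(Database.BatchableContext bc) {
        System.debug('Processed ' + processed + ' records');
    }
}
```

It would be launched with something like Database.executeBatch(new AccountBatch(), 200), where 200 is the scope of each execute call.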

Platform Events Considerations

Platform events are a publish/subscribe feature allowing messages to be sent through a dedicated event object. They can be published from a transaction:

EventBus.publish(<collection of event>);

There are 2 ways to configure the publication:

  • Immediate action
  • Post-commit action

Both are processed in a dedicated transaction that cannot be rolled back. The first one is fired even if the initial transaction fails, while the second one is not.
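
A sketch of the publish call, assuming a hypothetical custom event object Order_Event__e with a Status__c field:

```apex
// Sketch of publishing platform events (hypothetical event object)
List<Order_Event__e> events = new List<Order_Event__e>{
    new Order_Event__e(Status__c = 'Created')
};
List<Database.SaveResult> results = EventBus.publish(events);
for (Database.SaveResult sr : results) {
    if (!sr.isSuccess()) {
        System.debug('Publish failed: ' + sr.getErrors());
    }
}
```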

Order Of Execution Considerations

Technically speaking, you can write many triggers and many Process Builders for one object, but as a best practice, you should keep only one trigger per event and one Process Builder per object (better still, only one trigger per object). The reason is that you cannot control the order in which Salesforce executes your triggers or Process Builders, which can lead to inconsistency and extra resource consumption in the process flow.

Also try not to mix too many triggers, Flows, and Process Builders, as that complicates the implementation and the maintenance. The choice depends on many factors, and it is not simple to say whether you should go with one, the other, or a hybrid path. Keep in mind that it is generally recommended to implement with point-and-click solutions first and to consider triggers only when those solutions are limited. In reality, you have to make your choice depending on the size of the project, the perceived complexity of the implementation, performance (some Flows or Process Builders are not bulkified), maintainability, code sharing (simple or complex code that has to be shared across many implementations)…

Conclusion

The objective of this article is to bring you some insights on transactions and contexts of execution. It is surely not exhaustive, but enough to make you understand the importance of this topic before starting any new design or implementation.

Hope you enjoyed reading this article, see you soon for the next one...
