Technology Flows: Salesforce
Processing Large Data Volumes using PK Chunking & Hyperbatch with Daniel Peter

In this episode I will be speaking with Daniel Peter (@danieljpeter) about processing large volumes of data on Salesforce. Daniel is Lead Application Developer at Kenandy, an ISV that has built an ERP solution on the Salesforce Platform. Daniel's first-hand experience of how the Salesforce multi-tenant database behaves has led him to develop techniques for processing tens of millions of records.

He will describe the techniques he has refined to ensure SOQL queries execute reliably and do not fall foul of the most common exceptions relating to row selection: non-selective query, too many query rows returned, and query timeout during execution. Daniel will explain how the Batch Apex query locator can be used to implement a technique called PK chunking, which gives fine-grained control over the number of rows processed in each batch and largely overcomes these three exceptions. Daniel has even gone as far as experimenting with parallel execution…
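To make the pattern concrete, here is a minimal, hypothetical Batch Apex sketch of the general idea discussed in the episode: the query locator streams only record Ids in primary-key order, and each execute() call then queries a bounded, Id-filtered chunk so every SOQL statement stays selective. The class name PkChunkingBatch, the Account object, and the fields queried are illustrative assumptions, not Daniel's actual implementation.

```apex
// Minimal sketch of PK chunking with a Batch Apex query locator.
// Illustrative only: object, fields, and chunk size are assumptions.
global class PkChunkingBatch implements Database.Batchable<sObject> {

    global Database.QueryLocator start(Database.BatchableContext bc) {
        // Stream only the primary keys, in Id order; the platform hands them
        // to execute() in chunks of the scope size passed to executeBatch().
        return Database.getQueryLocator('SELECT Id FROM Account ORDER BY Id');
    }

    global void execute(Database.BatchableContext bc, List<sObject> scope) {
        // Filtering on Id always hits the primary-key index, so this query
        // avoids the non-selective, too-many-rows, and timeout failure modes
        // no matter how many millions of rows the table holds.
        List<Account> chunk = [
            SELECT Id, Name, AnnualRevenue
            FROM Account
            WHERE Id IN :scope
        ];
        for (Account a : chunk) {
            // ... per-record processing goes here ...
        }
    }

    global void finish(Database.BatchableContext bc) {
        // Chain further work or send a completion notification if needed.
    }
}
```

A batch like this could be launched with, for example, Database.executeBatch(new PkChunkingBatch(), 2000); so that each execute() processes a chunk of 2,000 records.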
