Good point, we should probably document this better, it is indeed assigned by the NST. You can look at the CRM synchronization engine in the base app for inspiration on how last modified can be used to do data synchronization until the timestamp becomes available.

Thanks for the answer! Although it is the one I feared a little bit: if the "Last Modified At" field is populated by each NST server, then using it as an external API consumer would be like trusting the clocks of any number of load-balanced NST servers to be in sync down to millisecond precision. I don't think that assumption is robust in the real world across distributed servers; see things like the below articles for examples of what I mean. If the CRM integration in BaseApp is built upon that assumption, I'd imagine it is not robust either.

I've skimmed the code for it now and found the two functions below:

CRMIntegrationRecord.IsModifiedAfterLastSynchonizedCRMRecord(): this one compares the previous "Last Modified At" with the value on record, using the below helper function. The threshold must be used to compensate for the varying levels of precision. An example of this is the T-SQL datetime type, which has a precision that only goes down to the nearest 0, 3, or 7 milliseconds. Meaning, the timestamp comparison used in the CRM integration allows time drift of up to 10 milliseconds for different reasons, but in this clock-drift context the 10 milliseconds is nothing more than a magic number: it doesn't make the comparison robust, just less likely to hit an issue.

Yep, this is exactly what I had in mind - and yes, we already have the data in an internal field (as you can see when you do Field(0)), but it still requires work to make sure everything works from the dev environment all the way through to the UI.
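To make the precision point concrete, here is a small Python sketch (not the actual BaseApp code; the function names are hypothetical, and only the rounding rule and the 10 ms tolerance come from the discussion above). T-SQL datetime stores time in 1/300-second ticks, so millisecond values snap to .000/.003/.007 boundaries, which is why a plain equality or `>` comparison needs a tolerance:

```python
from datetime import datetime, timedelta

def round_like_tsql_datetime_ms(ms: int) -> int:
    """Round a millisecond value the way T-SQL datetime stores it:
    to 1/300-second ticks, which lands on ...0, ...3, ...7 boundaries."""
    ticks = (ms * 3 + 5) // 10          # nearest 1/300 s tick (round half up)
    return round(ticks * 10 / 3)        # back to milliseconds

def is_modified_after(candidate: datetime, last_sync: datetime,
                      tolerance_ms: int = 10) -> bool:
    """Treat timestamps within the tolerance as 'not newer', mimicking the
    threshold-based comparison described above. The 10 ms tolerance absorbs
    datetime rounding, but not arbitrary clock drift between servers."""
    return candidate - last_sync > timedelta(milliseconds=tolerance_ms)

# Raw millisecond values 0..9 snap to the 0/3/7 grid:
print([round_like_tsql_datetime_ms(ms) for ms in range(10)])

t0 = datetime(2021, 1, 1, 12, 0, 0)
# Two writes 5 ms apart are indistinguishable under a 10 ms tolerance:
print(is_modified_after(t0 + timedelta(milliseconds=5), t0))
print(is_modified_after(t0 + timedelta(milliseconds=50), t0))
```

This also shows why 10 ms is a magic number: it is large enough to swallow the storage rounding, but any cross-server clock drift bigger than the tolerance still defeats the comparison.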
I see, yes, this is a scenario where having an actual field will help, but again, we would rather focus on the "right" solution, exposing the timestamp field on all tables, instead of opening SqlTimestamp = true up for table extensions. However, I am not sure what it is you need to accomplish that cannot be done by reading the timestamp with the RecordRef workaround you listed (getting field 0), but would be possible if you could add the timestamp field?

Would one option be that you expose it on all tables by default, just like the other new system fields? This would also prevent any of us partners from needing a tableextension just to add a "fake field"; ideally we don't want additional joins. You might even be retrieving it already in your NST data stack, depending on how you check stale records, in which case it would not be a change impacting your "query engine" at all? I know that doesn't make it easy, but just saying :)

Yes, reading the value directly in the API pages will work too. But there are downsides: for example, we cannot add an index on the field, so sorting to get the most recent records (those with a timestamp greater than the timestamp we saved in the previous run) would be slow.

On this same topic, it should be mentioned that the reason we are not eagerly jumping on "Last Date Modified" with an index added to it is that it is not clear to us whether we can trust the value of this timestamp to be guaranteed to monotonically increase, even with load-balanced web service NSTs. It is undocumented by MS whether it is based on database server time with this guarantee, or whether it is pulled by the processing NST, in which case I assume there is zero guarantee and "SQL timestamp" with an index is still better for robust data sync? Please advise how best to do performant, index-covered data replication as a partner in a cloud-compliant app :)
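A toy illustration of the monotonicity worry raised above (all names and data shapes are hypothetical): if "Last Modified At" is stamped by whichever load-balanced server handles the write, and one server's clock runs behind, a record can be stamped *earlier* than the replicator's high-water mark and silently skipped.

```python
from datetime import datetime, timedelta

# Two load-balanced servers; server B's clock runs 2 seconds behind.
CLOCK_SKEW = {"A": timedelta(0), "B": timedelta(seconds=-2)}

table = []  # each row: {"no": ..., "last_modified_at": ...}

def write(server: str, no: str, true_time: datetime) -> None:
    """Stamp the row with the handling server's clock, not the true time."""
    table.append({"no": no, "last_modified_at": true_time + CLOCK_SKEW[server]})

def replicate(high_water_mark: datetime):
    """Naive incremental sync: fetch rows modified after the last run."""
    return [r["no"] for r in table if r["last_modified_at"] > high_water_mark]

t = datetime(2021, 1, 1, 12, 0, 0)
write("A", "REC-1", t)                           # stamped 12:00:00
mark = t + timedelta(seconds=1)                  # replicator runs at 12:00:01
write("B", "REC-2", t + timedelta(seconds=1.5))  # stamped 11:59:59.5!
print(replicate(mark))  # REC-2 changed after the mark, yet is never returned
```

No tolerance window fixes this in general: the skew can always exceed the tolerance, and widening the tolerance re-delivers old records instead.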
The timestamp field you add yourself is simply an alias of the same timestamp field you are reading with RecRef.Field(0), so it would return the same value. We are planning to support reading the timestamp field as a normal field from any table in a future version, but I cannot give you a timeline for this. Unfortunately, the platform does not support defining multiple timestamp fields on a single table, which could happen if we supported adding timestamp fields to table extensions (since the table and its table extensions are merged into one table at runtime).
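For contrast, a minimal sketch of the "SQL timestamp with an index" approach the thread keeps coming back to (simulated in Python; in AL the value would be read via RecordRef.Field(0)). A rowversion is a database-global counter bumped on every write, so it is monotonic no matter which NST handled the request, and a saved high-water mark never misses a change:

```python
class SimDb:
    """Simulates a database-global rowversion counter: it increments on
    every write, regardless of server, so ordering never depends on clocks."""
    def __init__(self):
        self._rowversion = 0
        self.rows = {}          # primary key -> {"data": ..., "ts": ...}

    def upsert(self, key, data):
        self._rowversion += 1   # monotonic, database-wide
        self.rows[key] = {"data": data, "ts": self._rowversion}

def pull_changes(db, last_seen_ts):
    """Incremental sync: everything written since the saved high-water mark.
    With an index on ts this is a cheap range scan in a real database."""
    changed = sorted((r["ts"], k) for k, r in db.rows.items() if r["ts"] > last_seen_ts)
    new_mark = changed[-1][0] if changed else last_seen_ts
    return [k for _, k in changed], new_mark

db = SimDb()
db.upsert("REC-1", "a")
db.upsert("REC-2", "b")
keys, mark = pull_changes(db, 0)       # first run: both records
db.upsert("REC-1", "a2")               # modified again -> new, higher ts
keys2, mark2 = pull_changes(db, mark)  # second run: only REC-1
print(keys, keys2)
```

The trade-off matches the constraint stated above: without an exposed, indexable field, the consumer cannot get this cheap range scan and falls back to full scans or a "fake field" table extension.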