Hindhustani Posted January 8, 2023

@Transactional(propagation = Propagation.REQUIRED)
public String triggerDataExport(Map<String, String> doc) {
    // Step 1: Fetch 1000 records from the DB
    // Step 2: Update those 1000 records' status to "In Processing" in the DB
    //         (JPA repository method annotated with @Transactional)
    // Step 3: Generate a JSON payload for the 1000 records and POST it to a REST API
    // Step 4: Update the Step 2 "In Processing" records to "Processed" in the DB
    //         (JPA repository method annotated with @Transactional)
}

1. What happens if an exception is thrown at Step 3? Likewise at Step 4?
2. triggerDataExport is transactional, and the Step 2 and Step 4 repository methods are also transactional. How does rollback work?
3. At Step 3, sending all 1000 records in one payload/REST call is not ideal. If we batch them as 100 records each, that is 10 batches; suppose an exception occurs while sending the 2nd batch. How does rollback work in this scenario? How do you keep the system consistent and recover from failures, so that all 1000 records are eventually sent with no duplicates and none missing?

@Vaampire @csrcsr
BattalaSathi Posted January 8, 2023

17 minutes ago, Hindhustani said: [quoting the post above]

Apart from "Vampire/Sucker" at the end, if I understood even a single word of the rest, hit me with a Bata slipper.
csrcsr Posted January 8, 2023

34 minutes ago, Hindhustani said: [quoting the post above]

Don't you want to update each record individually (Steps 3 and 4) once its JSON is generated, so that you know exactly which records have been updated to "Processed"? Also, in your post, is triggerDataExport actually divided into separate methods for Step 1, Step 2, and so on?
areyentiraidhi Posted January 8, 2023

What you are doing here is combining different kinds of I/O in one transaction. Mixing DB I/O and API I/O is not advisable: if the API call takes too long, it holds a connection open and chokes the connection pool. Go back to the drawing board and figure out whether the API call really has to be tied to the transaction. If it does, you should go for an eventual-consistency approach instead. If you still want to do it, there are some hacky ways, for sure, by managing transactions manually.
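The "manual transaction boundaries" idea above can be sketched roughly as follows. This is a hypothetical plain-Java illustration: a `Map` stands in for the database, and the class/method names are made up. In Spring you would achieve the same shape with `TransactionTemplate` or two separate `@Transactional` repository methods, so that the slow REST call runs outside any transaction.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: two short "transactions" around the API call, with the slow
// HTTP POST deliberately outside both, so no DB connection is held
// while the remote call is in flight.
public class ExportFlow {
    final Map<Long, String> statusDb = new HashMap<>(); // stand-in for the DB

    // "Txn 1": claim the records (short, commits immediately)
    List<Long> claim(List<Long> ids) {
        ids.forEach(id -> statusDb.put(id, "IN_PROCESSING"));
        return ids;
    }

    // No transaction here: the REST POST may be slow or fail without
    // leaving a DB transaction open. (Stand-in for the real HTTP call.)
    boolean post(List<Long> batch) {
        return true;
    }

    // "Txn 2": mark only what was actually sent
    void markProcessed(List<Long> ids) {
        ids.forEach(id -> statusDb.put(id, "PROCESSED"));
    }

    void export(List<Long> ids) {
        List<Long> claimed = claim(ids);
        if (post(claimed)) {
            markProcessed(claimed);
        }
        // else: records stay IN_PROCESSING and a recovery job can retry them
    }
}
```

Note the failure mode this buys you: if the POST throws, nothing is silently rolled back; the records sit in a visible IN_PROCESSING state that a sweeper job can pick up and retry.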
Vaampire Posted January 8, 2023

Very complicated scenario, bro. There are many different ways to do it. If I understood correctly, Steps 2 and 4 can happen in the background, i.e. fire-and-forget, or let another background service handle them; in case of error, the background service should handle the retry. Ideally this task should be split into multiple microservices:

1) Service 1 fetches the 1000 records and asks Service 2 to process them
2) Service 2 processes 100 records at a time with a retry mechanism and asks Service 3 to update the status
3) Service 3 just updates the records

Depending on the complete requirement, 2 and 3 could be combined.
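The batch-plus-retry part of this answer can be sketched in plain Java (hypothetical class and method names; the per-batch bookkeeping set would be a persisted table or idempotency key in a real system). The key point is that each batch succeeds or fails independently: a failure in batch 2 does not roll back or resend batch 1, which is what prevents duplicates on recovery.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Predicate;

// Sketch: send 1000 records as 10 batches of 100, retrying each batch
// up to MAX_RETRIES, and remembering which batches already went out so
// a re-run after a crash skips them (idempotent recovery).
public class BatchSender {
    static final int BATCH_SIZE = 100;
    static final int MAX_RETRIES = 3;

    final Set<Integer> sentBatches = new HashSet<>(); // persisted in a real system

    // sender returns true on success; returns the batch indexes that
    // still failed after all retries and need recovery
    List<Integer> send(List<Long> records, Predicate<List<Long>> sender) {
        List<Integer> failed = new ArrayList<>();
        int batchCount = (records.size() + BATCH_SIZE - 1) / BATCH_SIZE;
        for (int b = 0; b < batchCount; b++) {
            if (sentBatches.contains(b)) continue; // already sent: skip on re-run
            List<Long> batch = records.subList(
                b * BATCH_SIZE, Math.min((b + 1) * BATCH_SIZE, records.size()));
            boolean ok = false;
            for (int attempt = 0; attempt < MAX_RETRIES && !ok; attempt++) {
                ok = sender.test(batch);
            }
            if (ok) sentBatches.add(b); else failed.add(b);
        }
        return failed;
    }
}
```

With this shape, "all 1000 records, no duplicates, none missing" reduces to two properties: the sent-batch bookkeeping is durable, and the receiving API tolerates (or deduplicates) a batch that was delivered but whose acknowledgment was lost.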
JAMBALHOT_RAJA Posted January 8, 2023

Who designed this, bro? lol. The only thing left is the front-end code; add that in the same method too by writing some JavaScript. You are making the entire thing super tightly coupled. Like someone said above, you need to go back to the whiteboard and redesign this entire thing, and also avoid single points of failure.
JAMBALHOT_RAJA Posted January 8, 2023

Try to implement this with an event-based architecture, for example using Kafka topics: as soon as you process a record, publish an event to a Kafka topic, and stand up a separate listener app that consumes that topic and updates the status in the DB. This way you are making your system loosely coupled.
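A minimal sketch of this producer/listener split, using a `BlockingQueue` as a stand-in for the Kafka topic so the example is self-contained (all names here are hypothetical). With real Kafka and Spring you would publish via `KafkaTemplate.send(...)` on one side and consume with a `@KafkaListener` method on the other, running in a separate application.

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch: the exporter publishes a "record processed" event per record;
// a decoupled listener drains the topic and updates the status table.
// If the listener is down, events simply wait on the topic -- the
// exporter is never blocked by the status update.
public class EventedStatusUpdate {
    final BlockingQueue<Long> topic = new LinkedBlockingQueue<>(); // "Kafka topic"
    final Map<Long, String> statusDb = new ConcurrentHashMap<>();  // stand-in DB

    // Producer side: fire-and-forget publish per processed record
    void publishProcessed(long recordId) {
        topic.add(recordId);
    }

    // Consumer side: the separate listener app draining the topic
    void drainListener() {
        Long id;
        while ((id = topic.poll()) != null) {
            statusDb.put(id, "PROCESSED"); // status update is now decoupled
        }
    }
}
```

The trade-off to keep in mind: this gives eventual consistency (status lags briefly behind processing) in exchange for loose coupling, and you still need at-least-once delivery plus an idempotent status update to avoid losing or double-applying events.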