Consistent, synchronized data across standard software solutions for SMEs
Companies often use standard software solutions in various areas, each well suited to its particular requirements. As the company grows or reorganizes, however, manual data exchange between these solutions becomes increasingly laborious and the error rate rises. Errors and inconsistent data reduce efficiency, endanger customer service, and make uniform data management impossible to guarantee in the long term, which in turn drives up reporting costs. An individually tailored tool for synchronizing the various software solutions can help here and keep customer information and other data consistent across the different systems.
What is the best procedure?
Before synchronizing standard solutions, a few points must be clarified in order to determine the scope and an adequate implementation:
What kind of data should be synchronized (type & quantity)?
Which systems are involved?
How soon must the data be synchronized?
How can the data be identified and matched across systems?
Why should the data be synchronized and what are the possible risks?
Which is the leading system?
Once you have answered these six questions, you can start with the detailed technical clarifications. Here, data access and the software's API (interface) are the focus of the analysis. Each question is important so that the synchronization interval, the amount of data and changes, and the technologies used can be optimally aligned with the customer's needs.
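To make the question of identifying and matching data concrete, here is a minimal sketch (all field names, such as customer_id, are hypothetical): records from two systems are matched by a shared key, and the fields whose values differ are reported.

```python
# Hypothetical example: match two customer records by a shared key
# and list the fields that need to be synchronized.

def diff_records(source: dict, target: dict, key: str = "customer_id") -> dict:
    """Return the fields whose values differ between two matched records."""
    assert source[key] == target[key], "records must share the same key"
    return {
        field: (source[field], target.get(field))
        for field in source
        if field != key and source[field] != target.get(field)
    }

src = {"customer_id": 42, "name": "Acme AG", "city": "Zurich"}
dst = {"customer_id": 42, "name": "Acme AG", "city": "Bern"}

print(diff_records(src, dst))  # {'city': ('Zurich', 'Bern')}
```

In practice, a stable, system-independent key (or a mapping table between the systems' own IDs) is the first thing to establish; without it, records cannot be matched reliably.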
Which technologies should be considered/used?
Once all necessary clarifications regarding the interfaces of the standard solutions have been made and an overview of the data has been obtained, it is time to evaluate which technologies to use. Whether one uses a webhook or a console application that calls an API depends on several factors.
Excursus & Example
A webhook means that the source pushes updates to the recipient: the recipient "hooks" into the source and is supplied with every change as it happens. With a console application, by contrast, the target system itself fetches the data from the source.
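The push and pull models can be contrasted in a short sketch. All function names here (handle_webhook, poll_once, apply_change, fetch_changes) are illustrative placeholders, not part of any real product's API:

```python
import time

applied = []  # stand-in for writing changes into the target system

def apply_change(change: dict) -> None:
    applied.append(change)

def handle_webhook(payload: dict) -> None:
    # Push model: the source calls this function (e.g. via a small HTTP
    # endpoint) whenever data changes; the target only reacts.
    apply_change(payload)

def poll_once(fetch_changes, last_run: float) -> float:
    # Pull model: a console application periodically asks the source
    # for everything that changed since the last run.
    for change in fetch_changes(since=last_run):
        apply_change(change)
    return time.time()

# A scheduler (cron, Windows Task Scheduler, ...) would call poll_once
# in a loop; the webhook handler would sit behind a web server.
```

The push model delivers changes almost immediately but requires the source to support webhooks; the pull model works with any queryable API but introduces a polling interval.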
If there is a lot of data and the changes cannot be delivered via a webhook, further steps are needed to process the changes fast enough. This can be achieved through asynchronous programming, parallelization, splitting the synchronization, or limiting the data to be checked. This only works with good communication and a solid understanding of the customer's individual needs.
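Two of these techniques, parallelization and splitting, can be sketched briefly. The check_record function below is a hypothetical placeholder for an API call that decides whether a record has changed:

```python
from concurrent.futures import ThreadPoolExecutor

def check_record(record_id: int) -> bool:
    # Placeholder for an API call that reports whether a record changed.
    return record_id % 2 == 0

def find_changed(record_ids, workers: int = 8):
    # Parallelization: keep several API calls in flight at once instead
    # of waiting for each one sequentially.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(check_record, record_ids)
    return [rid for rid, changed in zip(record_ids, results) if changed]

def chunks(record_ids, size: int = 500):
    # Splitting: divide the record set so each synchronization run
    # checks only a bounded amount of data.
    for start in range(0, len(record_ids), size):
        yield record_ids[start:start + size]
```

Which combination is appropriate depends on the source system's rate limits and on how quickly changes must arrive in the target.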
What should be considered, and where is special attention needed?
Especially when large amounts of data must be synchronized, the interfaces of standard tools such as elastic.io, MuleSoft, or Dell Boomi can reach their limits, particularly when data sets must be queried or checked individually, so that thousands of queries become necessary. It is important to keep in mind that the software solutions are under heavy load during a synchronization, and users may then be able to work with them only poorly or not at all. We also have a solution for this problem: we reduce the number of queries to a subsystem to the point where working during a synchronization remains possible without restrictions.
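One common way to reduce the number of queries is batching: fetching many records per request instead of one request per record. A minimal sketch, where fetch_many stands in for a hypothetical bulk endpoint:

```python
def fetch_many(ids):
    # Placeholder for a bulk API call, e.g. GET /records?ids=1,2,3
    return [{"id": i} for i in ids]

def fetch_all(ids, batch_size: int = 100):
    # Batching: collapse one-query-per-record into one query per batch,
    # dramatically lowering the load on the subsystem.
    records, requests_made = [], 0
    for start in range(0, len(ids), batch_size):
        records.extend(fetch_many(ids[start:start + batch_size]))
        requests_made += 1
    return records, requests_made

records, requests_made = fetch_all(list(range(1000)))
# 1000 individual queries collapse into 10 batched requests.
```

Whether batching is possible at all depends on the standard solution's API; where no bulk endpoint exists, throttling and off-peak scheduling are the usual fallbacks.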
When a lot of data is synchronized, individual queries may also fail, for example due to a network error, a brief overload on the server, or another server-side error. Since this leads to an error in the application, conventional tools usually skip the affected records, and the data is simply not synchronized. It is better to define a "retry policy" specifying that failed queries are resent after a given interval. This ensures that the synchronization takes place, or that you are informed if it had to be aborted because of persistent server errors.
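A minimal retry policy can look like the following sketch: a failed operation is resent after a fixed interval, and the caller is informed only when all attempts are exhausted.

```python
import time

def with_retry(operation, attempts: int = 3, interval_seconds: float = 2.0):
    """Run operation, resending it after a fixed interval on failure."""
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except Exception as error:
            if attempt == attempts:
                # All retries exhausted: report that the synchronization
                # was aborted instead of silently skipping the record.
                raise RuntimeError("synchronization aborted") from error
            time.sleep(interval_seconds)
```

Production-grade policies often add exponential backoff and retry only on transient errors (timeouts, HTTP 5xx), so that genuine data errors surface immediately instead of being retried pointlessly.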
An individual solution, in comparison, can find an adequate path for the implementation, be it connecting APIs on both sides or exchanging data via files with older applications. With the usual sync tools on the market such as MuleSoft or Dell Boomi, one usually achieves good results and can build on existing infrastructure. However, these tools incur running costs and create a high dependency on the chosen platform. For this reason, a careful conception and analysis should be carried out in advance.
We would be happy to present the advantages and feasibility of data synchronization for your standard software solutions and to explain our detailed approach in a personal meeting. Please do not hesitate to contact us.