I am redesigning an old legacy system built in Access (a monolithic monster). The old system is being decomposed into four distinct applications, each with its own login, etc.
The second application is about to roll out to production, but it needs to communicate with the first, since the first system is essentially a document template generator and the second uses those templates to build work orders. That communication was ultimately done directly in SQL, via views and restricted account permissions. REST caused a few issues, and because these are internal applications, it was decided that direct SQL was the better (and significantly faster) solution, both in development time and in execution.
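For illustration, the view-based setup looks roughly like this. All table and column names here are invented, and SQLite stands in for the real database server, so the GRANT side of the restricted account is only described in a comment:

```python
import sqlite3

# Sketch of view-based integration between the two systems (names invented).
# System 1 owns the templates table; System 2 reads only through a view.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE doc_templates (
        id             INTEGER PRIMARY KEY,
        name           TEXT NOT NULL,
        body           TEXT NOT NULL,
        internal_notes TEXT          -- detail System 2 should never see
    )
""")
conn.execute(
    "INSERT INTO doc_templates (name, body, internal_notes) "
    "VALUES ('WO-Standard', '...template body...', 'private')"
)

# The view exposes only the columns the work-order system needs. On a
# real server (SQL Server, MySQL, Postgres, ...) you would additionally
# GRANT SELECT on the view to the restricted account and nothing else.
conn.execute("""
    CREATE VIEW v_templates_for_workorders AS
    SELECT id, name, body FROM doc_templates
""")

rows = conn.execute("SELECT name FROM v_templates_for_workorders").fetchall()
print(rows)  # [('WO-Standard',)]
```

The nice property is that the hidden column simply does not exist from the consuming system's point of view, so it cannot leak through a careless query.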
That being said, during the development of the first system, integration with the legacy system was the biggest challenge. The second system, however, relies not only on the first but also on a third and fourth system, which do not exist yet except in legacy form.
The problem this poses is that the Access database schema is complicated and convoluted at times, with some tables serving the requirements of all four new systems. These tables need to be broken into several smaller tables in the new system(s). No problems here, but it might help explain what I am asking.
In plain English, the problem I anticipate is sharing common data across each application (login details, customer details, etc.). Hitherto, this hasn't been much of a challenge; however, now it will be, as the data changes far more frequently, so real-time synchronization is required if two distinct tables are used.
Question: these systems *should* be designed to run independently of the others; therefore, each should use its own copy (often limited in detail) of the required data. For example, let's say our last system is a customer management system. When done, it will allow advanced manipulation and storage of arbitrary customer data. The system I am working on now requires only a subset of this information.
What I have done is create a customer table that stores only that subset of information, but any time an order is updated or added via the work order system, the customer management system needs its tables updated accordingly, and vice versa.
I am thinking this is where triggers would come in really handy, as it also means I don't need to implement small parts of non-existent systems until their time has actually come. All that being said, does anyone have suggestions or experiences they care to share on how you may have dealt with a similar situation?
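Something like this is what I have in mind. It is only a sketch: all table, column, and trigger names are made up, and SQLite is used here purely so the idea is runnable (the real system would use the production RDBMS):

```python
import sqlite3

# Hypothetical sketch: keep a slimmed-down work-order-side customer table
# in sync with the full customer-management table using triggers.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Full customer record, owned by the customer management system.
    CREATE TABLE cm_customers (
        id            INTEGER PRIMARY KEY,
        name          TEXT NOT NULL,
        phone         TEXT,
        billing_notes TEXT            -- detail the work-order app never needs
    );

    -- Slimmed-down copy used by the work-order system.
    CREATE TABLE wo_customers (
        id    INTEGER PRIMARY KEY,
        name  TEXT NOT NULL,
        phone TEXT
    );

    -- Propagate inserts and updates into the subset table.
    CREATE TRIGGER cm_customers_ai AFTER INSERT ON cm_customers
    BEGIN
        INSERT INTO wo_customers (id, name, phone)
        VALUES (NEW.id, NEW.name, NEW.phone);
    END;

    CREATE TRIGGER cm_customers_au AFTER UPDATE ON cm_customers
    BEGIN
        UPDATE wo_customers SET name = NEW.name, phone = NEW.phone
        WHERE id = NEW.id;
    END;
""")

conn.execute("INSERT INTO cm_customers (name, phone, billing_notes) "
             "VALUES ('Acme Ltd', '555-0100', 'net 30')")
conn.execute("UPDATE cm_customers SET phone = '555-0199' WHERE id = 1")
print(conn.execute("SELECT name, phone FROM wo_customers").fetchall())
# [('Acme Ltd', '555-0199')]
```

The "vice versa" direction would be a mirror pair of triggers on the subset table. One thing to watch there: with triggers on both tables firing updates at each other, you have to make sure the chain terminates (SQLite's recursive triggers are off by default, but other servers have their own nesting rules), so guard the mirror triggers against re-firing on unchanged rows.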
Cheers,
Alex
Sharing tables or synchronizing two distinct tables?
alex.barylski
- DevNet Evangelist
- Posts: 6267
- Joined: Tue Dec 21, 2004 5:00 pm
- Location: Winnipeg
Re: Sharing tables or synchronizing two distinct tables?
So what's the problem? Removing the dependency in the 'new' code, or encapsulating the data in the legacy app? Neither should be hard; basically you're looking for an API that runs over localhost. Triggers are another way. Modifying the apps to keep users in a central location is even better.

Perform the merge once; that's what I'd do. That way you can assess all the problems up front, whereas if you try to sync the tables continuously, you might have bugs that only users discover. If you just do a one-time merge, you can mark duplicate accounts and anything else that could cause bugs, and once you know they exist, decide what to do with them.
Ex. When banks merge,
IF: I have an acct # at the old bank
AND: A customer at the 'new' bank had an account # that matched it
THEN: since the routing # was different, but now will be the same, both customers must be assigned a new account #.
ELSE: Customer gets to keep their old account #
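That rule could be sketched roughly like this. It is illustrative only: the `issue_number` helper and all account numbers are invented, and conflicts are reassigned rather than flagged just to keep the example short:

```python
# Sketch of the "merge once, mark conflicts" idea from the bank example.
# old_bank / new_bank are sets of account numbers; issue_number is a
# hypothetical callable that returns a fresh, unused account number.
def merge_accounts(old_bank, new_bank, issue_number):
    """Return a mapping from (bank, original number) to final number."""
    merged = {}
    for acct in new_bank:
        if acct in old_bank:
            # Same number existed at both banks; now that the routing
            # number will be shared, both customers need fresh numbers.
            merged[("new", acct)] = issue_number()
            merged[("old", acct)] = issue_number()
        else:
            merged[("new", acct)] = acct
    for acct in old_bank - new_bank:
        merged[("old", acct)] = acct  # no clash: keep the old number
    return merged

counter = iter(range(9000, 9999))
result = merge_accounts({"1001", "1002"}, {"1002", "2001"},
                        lambda: str(next(counter)))
# '1002' clashed, so both of its holders were issued new numbers;
# everyone else kept theirs.
```

The point of doing it as a one-shot pass like this is that every conflict is known the moment the merge runs, instead of surfacing later as a sync bug a user trips over.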