cricketbird
Registered User.
Our database (split FE/BE, 80 users, Office 365, in use for 6 years, with regular small modifications to add features or new reports) has become mission-critical enough that folks are rightly nervous about the backend living "loose" on a shared network drive. I'm being asked to move it to SQL Server. I would not be an admin on the SQL Server; all requests would have to go through our very slow and bureaucratic IT folks (as in weeks to months to get a request through).
Some questions:
1) To future-proof the data, I was thinking I'd add a bunch of dummy columns to each table as well as a few dummy tables. This way, I don't have to go through IT when I need a new field or table (at least for a while). Is this an okay approach or a terrible idea?
2) We very rarely (but not never) have our network go down. Currently, users launch a batch file that copies the front end to their C: drive and also puts a copy of the backend data there, but DOES NOT USE that copy. There's an option within the database itself to switch to the backup data if needed, with appropriate warnings that any changes will not be saved (mostly our users need to SEE the data, not interact with it). It's a crutch, but it keeps us operating if the system is down (see the relinking sketch after this list). Would we lose this ability to work offline if the network is down, or would there be a way to create a local copy of the data and switch to it as needed? The data itself is not sensitive, and nobody has a problem with copies existing in different places.
3) We have external images for each product on the network drive, linked to by path from within the database (see the second sketch after this list). Would we still be able to link to them if the rest of the data was on SQL Server?
4) In general, should we expect speed gains or losses from moving to a server?
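
For anyone curious what I mean in #2 by "switch to the backup data", here's a simplified sketch of that kind of relinking. The path, procedure name, and the assumption that every linked table gets repointed at a local .accdb copy are placeholders for illustration, not our actual code:

Code:
Public Sub SwitchToLocalBackup()
    ' Placeholder path for the batch-file copy of the backend
    Const LOCAL_BE As String = "C:\OfflineData\BE_Backup.accdb"
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Dim linkedNames As Collection
    Dim i As Long

    Set db = CurrentDb
    Set linkedNames = New Collection

    ' Linked tables (ODBC or Access) have a non-empty Connect string
    For Each tdf In db.TableDefs
        If Len(tdf.Connect) > 0 Then linkedNames.Add tdf.Name
    Next tdf

    ' Drop each link and re-link the same table from the local backup copy
    For i = 1 To linkedNames.Count
        db.TableDefs.Delete linkedNames(i)
        DoCmd.TransferDatabase acLink, "Microsoft Access", LOCAL_BE, _
            acTable, linkedNames(i), linkedNames(i)
    Next i

    db.TableDefs.Refresh
    MsgBox "Now reading the local backup copy. Changes will NOT reach the live data.", vbExclamation
End Sub

My assumption is that switching back afterwards would need the ODBC connect string for SQL Server instead of a local path, which is part of what I'm asking about.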
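
And for the images in #3, the "link" is really just a path stored in a text field that each front end resolves on its own machine. A rough sketch of that kind of lookup (field and control names are made up for illustration):

Code:
Private Sub Form_Current()
    ' Show the product image whose UNC path is stored in the ImagePath field.
    ' imgProduct is an unbound Image control; both names are invented here.
    On Error Resume Next          ' don't let a missing file block navigation
    If Len(Nz(Me!ImagePath, "")) > 0 Then
        If Len(Dir(Me!ImagePath)) > 0 Then      ' file actually exists on the share
            Me.imgProduct.Picture = Me!ImagePath
        Else
            Me.imgProduct.Picture = ""          ' clear when the file is missing
        End If
    Else
        Me.imgProduct.Picture = ""
    End If
End Sub

Since that path column would presumably just become a text field on SQL Server, I'd expect the front end to keep resolving the UNC paths as it does now, but I'd like confirmation.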
Thank you!