Newbie SQL Server questions

GBalcom

Much to learn!
Local time
Today, 00:13
Joined
Jun 7, 2012
Messages
459
Hello,
I'm creating an Access FE application that will connect to a SQL Server BE. I have the following questions, at least initially:

  • Which connection is better, DSN or DSN-less? This application will be distributed, so it seems like DSN-less is preferred, but I wanted to ask the experts.
  • DAO or ADO? I've read that Access is back to DAO....
  • Do I need to create a login for each individual user, or can end users share a generic login? Or should they?


Thanks!
 

namliam

The Mailman - AWF VIP
Local time
Today, 09:13
Joined
Aug 11, 2003
Messages
11,695
- DSN or not, can't really say
- Access has always been DAO; it used ADO as a coding basis for a bit, but Access's forms and queries were always DAO, therefore DAO has always been the preferred programming way as well... Though the choice of DAO vs ADO in code really isn't that important....
- Personal accounts vs functional accounts: there is a can of worms... for maximum security you want personal accounts.... though users will knowingly and willingly share their personal account to create a functional account...
 

GBalcom

Much to learn!
Local time
Today, 00:13
Joined
Jun 7, 2012
Messages
459
namliam,
Thank you for your response. Can you elaborate on your last answer? I haven't heard the term "functional account" yet. Is that like the "roles"? In essence, every distributed user will have the same permissions... so I'm wondering if they could use the same login?
 

namliam

The Mailman - AWF VIP
Local time
Today, 09:13
Joined
Aug 11, 2003
Messages
11,695
A functional user is a user to a database or application that is not tied to an actual person.

It is by all means a "normal" user, except there is no one particular person actually using it....

i.e. I would have a user namliam on the forum; that is a normal user account.
Then my finance department may have a user called Finance; this account has a normal password but it is widely known in the department and anyone can use it at any given time.

A role is a standard "profile" that you may assign to any user:
Admin, super user, advanced user, user, dummy user, test user, etc...
These are used to "tunnel" privileges in a database or application...
A few users will get admin; most will probably get user.
This is usually used when the privileges are complex... still, having some sort of roles in place from the start is a good thing.
 

Rx_

Nothing In Moderation
Local time
Today, 01:13
Joined
Oct 22, 2009
Messages
2,803
Would highly suggest DSN-less. If you search, several of us have posted articles on this.
The other question is whether to use the OS ODBC driver or to install SQL Server Native Client 11.0 on each workstation. The latter is my choice.
SQL Server Native Client 11.0 is used for all MS demos for Azure, Outlook, .NET and other applications. It is free. From my point of view, it will diminish any issues with older ODBC drivers on a workstation.

DSN-less: I create a local Access table that lists all of the tables that should be linked. Mine has two columns of true/false checkboxes: one for users, one for developers.
A function destroys all linked tables, then goes down the list of table names and creates the DSN-less linked tables. Every time a new SQL table (or view) is created, its name should be added to this local table as a table to be linked.
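A minimal DAO sketch of that relink routine might look like the following. The local table name tblLinkList and its field names are illustrative assumptions, not from the post; the connection string is whatever DSN-less string you build.

```vba
' Sketch: relink all SQL Server tables DSN-less, driven by a local
' table. tblLinkList, TableName and LinkForUsers are assumed names.
Public Sub RelinkTables(ByVal strConnect As String)
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Dim rs As DAO.Recordset
    Dim i As Integer

    Set db = CurrentDb

    ' Destroy all existing ODBC-linked tables first
    For i = db.TableDefs.Count - 1 To 0 Step -1
        If (db.TableDefs(i).Attributes And dbAttachedODBC) <> 0 Then
            db.TableDefs.Delete db.TableDefs(i).Name
        End If
    Next i

    ' Walk the local list and re-create each link
    Set rs = db.OpenRecordset( _
        "SELECT TableName FROM tblLinkList WHERE LinkForUsers = True")
    Do Until rs.EOF
        Set tdf = db.CreateTableDef(rs!TableName)
        tdf.Connect = strConnect
        tdf.SourceTableName = rs!TableName
        db.TableDefs.Append tdf
        rs.MoveNext
    Loop
    rs.Close
End Sub
```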

Personal preference: I use linked tables (and views) with this ODBC driver.
From there, DAO is my favorite method, since I have been using it since its introduction. Nothing against ADO.

My preference is to prototype in Access local tables, then use the SQL Server Migration Assistant for Access (free). See other postings on that too.

SQL Server security has two flavors: one is to have a username/password; the other is to integrate the network security user names.
My enterprise uses Citrix as a secured sandbox. The users can only see a window with the Access application running full screen.
Because of that, I have one user/password for the connection, and everyone uses it. It keeps me from depending on IT staff to refresh expired passwords and such. IT staffs make excuses not to like Access; from my experience it usually comes down to the additional time to maintain it. Password management is one of those maintenance things for them.
That is not to say that tracking the individual users is not being done. It is still easy to determine the user ID for a new or updated record.

I might suggest starting with a single UN/PW because the code to relink is very, very easy to change. It is in a loop anyway; a single line of code.

You also want to have the option to back up production into a TEST DB.
On SQL Server, mine is MyDBName and MyDBNameT (for test).
The same code that reads the local table to create the DSN-less links has an option to connect to the production or test DB.
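As a sketch, that production/test switch can live in the function that builds the connect string. The server name and credentials below are placeholders, not real values; only the MyDBName/MyDBNameT pair comes from the post.

```vba
' Build the DSN-less connect string; the Test flag swaps in the
' test database. Server name and credentials are placeholders.
Public Function GetConnect(ByVal bTest As Boolean) As String
    Dim strDb As String
    strDb = IIf(bTest, "MyDBNameT", "MyDBName")
    GetConnect = "ODBC;DRIVER={SQL Server Native Client 11.0};" & _
                 "SERVER=MyServer;DATABASE=" & strDb & ";" & _
                 "UID=AppUser;PWD=secret;"
End Function
```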

Be prepared to move larger queries over to SQL Server as views. They can really speed things up. From Access, just reference those views as linked tables in a query, e.g. SELECT * FROM SuperComplexCustomerView WHERE CustomerID = 24429234.
The linked-table ODBC layer will actually take the WHERE criteria and merge it into the execution plan on the SQL Server side, as long as SQL-compatible syntax is used. This means avoiding IIf() and the custom Access functions we commonly use in Access.
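For example, opening the linked view from DAO (the view name and ID are from the post; dbOpenSnapshot is just one reasonable choice for read-only work):

```vba
' The WHERE clause below uses only plain SQL syntax, so the ODBC
' layer can hand the filtering to SQL Server instead of pulling
' the whole view down and filtering locally.
Dim rs As DAO.Recordset
Set rs = CurrentDb.OpenRecordset( _
    "SELECT * FROM SuperComplexCustomerView " & _
    "WHERE CustomerID = 24429234", dbOpenSnapshot)
```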

Example of a local linking table: http://www.access-programmers.co.uk/forums/attachment.php?attachmentid=50155&d=1378907710

You may also be interested in how to distribute a new Access Release to your clients.
http://www.access-programmers.co.uk/forums/showthread.php?t=269966&highlight=linked

Example of linking code:
http://www.access-programmers.co.uk/forums/showthread.php?t=269947&highlight=linked
 

GBalcom

Much to learn!
Local time
Today, 00:13
Joined
Jun 7, 2012
Messages
459
Thanks guys for the good information so far...I have some more questions:

When I look into the SQL DB connection, some of the ADO structure implies you can connect, then disconnect the database when not in use. Is there an advantage to this?

In my future application, I could see at least two paths I can walk down.

Path 1- Pursue a persistent connection, and utilize the linked tables as is.

Path 2- Upon loading the application, pull certain information down into local temp tables to minimize communication with the hosted DB; then, when the user finishes the session, push all the required information back into the cloud and clear the local temp tables.


The only advantage I could see with path 2 would be reduced "lag" while using the application. (If this is even an issue)

I'm interested in your opinions....
 

Rx_

Nothing In Moderation
Local time
Today, 01:13
Joined
Oct 22, 2009
Messages
2,803
The correct answer is "It Depends".
I use that constantly in my consulting answers.

For example, if you have a steady connection, stay connected. It isn't like a telephone conversation where something is streaming full time. It is more like you are keeping the phone number (in the table or connection def) handy to use.

We can also get into the discussion of locking, concurrency, transaction, transaction roll back....

The lag of an "on demand" connection is probably very tiny in human terms.
If regression analysis is being used, then the connection should stay until the analysis is completed. Don't connect and disconnect 100,000 times.

Non-persistent is often associated with HTTP connections, where routing can change.
A persistent connection is one that is supposed to be ALWAYS connected... this is commonly used with servers and/or clients that regularly transfer data on a daily basis.
In very general terms, a non-persistent connection is a temporary connection, often used to access something that will only be needed one time.

Another consideration might be the number of persistent concurrent license connections. Let's say the SQL Server DB only had a license for 10 concurrent connections with 30 users.


If the database is extremely busy, a snapshot might be in order. Let's say you needed a point-in-time analysis for something that is receiving tons of transactions via automation. It could be risky to read some tables, evaluate the data, then read the next set of related tables, which could be dirty (updated, deleted) by the time they are used.

Since you used the word Cloud there are other considerations.
The cloud is typically low bandwidth with a lag.
Your description of letting the user have disconnected data, making changes, then pushing it back at the end of a session brings up replication.
While the user has the data, other users can access that same data and make changes to it. Now your user pushes up the data into a dirty recordset. Do you have a plan for data collision avoidance? Your path 2 would need one. Hint: avoid it if you can.
Your path 1 can use standard code during an update to notify your program that the recordset has become dirty. It is much more straightforward to address if the system allows a persistent connection.
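A minimal sketch of trapping that condition in DAO: error 3197 is the engine's "data has changed" write conflict raised on an optimistic-locking update. The recordset variable and field name below are illustrative, not from the thread.

```vba
' Attempt an update; if another user changed the row since we
' read it, DAO raises error 3197 and we can re-read and retry.
On Error Resume Next
rs.Edit
rs!Status = "Closed"       ' illustrative field
rs.Update
If Err.Number = 3197 Then
    MsgBox "Another user changed this record; re-reading it."
    rs.Move 0              ' refresh the current record's values
End If
On Error GoTo 0
```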
 

GBalcom

Much to learn!
Local time
Today, 00:13
Joined
Jun 7, 2012
Messages
459
Rx,
Thank you for your response. I will certainly try path #1 first. We'll see how it goes, and I'll be sure to keep you posted.
 
