General Issue

Khalil Islamzada

Registered User.
Local time
Today, 22:01
Joined
Jul 16, 2012
Messages
49
Dear all :),


I have developed an Access database for customer management at a utility; nearly 65,000 customers have been recorded in it, with more than 200,000 billing and payment records. In addition, around 25 users work with this database at the same time.


- The database is split into back-end and front-ends.

- All these computers are connected through a wired network.
- The back-end file is stored on a simple computer with a normal operating system.



My problem is:


When 1-10 users access the database the speed is normal, but when the number of users rises toward 25, data access becomes slow.


Can anyone help me in this regard?


Thanks in advance.
Khalil
 

gemma-the-husky

Super Moderator
Staff member
Local time
Today, 17:31
Joined
Sep 12, 2006
Messages
15,613
It depends on the type of activity that the users are doing, and the design of the database.

If you are having to move a lot of data around your network, then 25 users may be approaching the limit of expectations for your database. The more you can restrict queries to small amounts of data, the better the database is likely to perform.
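As a rough illustration of the principle only - the driver string below is the standard Access ODBC driver, but the path, table and field names are invented for the example - compare dragging the whole billing table across the wire with asking the engine for just one customer's rows:

Code:
# Rough sketch of the "fetch only what you need" idea, using Python and pyodbc
# against the shared back-end file. The path, table and field names
# (tblBilling, CustomerID, ...) are made up for illustration.
import pyodbc

conn = pyodbc.connect(
    r"Driver={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=\\backend-pc\data\utility_be.accdb;"
)
cur = conn.cursor()

# Bad: drags every billing row across the network, then filters locally.
# cur.execute("SELECT * FROM tblBilling")

# Better: let the database engine return only one customer's rows.
cur.execute(
    "SELECT BillDate, Amount, Paid FROM tblBilling WHERE CustomerID = ?",
    (12345,),
)
for row in cur.fetchall():
    print(row.BillDate, row.Amount, row.Paid)

conn.close()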
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 12:31
Joined
Feb 28, 2001
Messages
26,999
The diagnostic side of this problem is to get to that computer that is hosting the back-end file and start up the Windows task manager or the Windows performance manager. You will be looking for a limited number of bottlenecks.

Wait for one of the "slowdowns." Check which processes, if any, are the biggest users of CPU time. If you have any process running above 90%, then you have found a bottleneck. The second possible factor is which processes are the biggest users of I/O. Third is to look for page faulting, which probably requires you to use the Performance Monitor.

What you are looking for is some process that is eating your CPU alive, or that is monopolizing the disk channels, or that is straining memory. Since you are using an "ordinary" computer for hosting a back-end, it might not have server-class I/O channels, and we don't know how many cores or threads it will support.

Note that on a multi-threaded CPU, the 100% figure is divided across the thread count: a single CPU thread is saturated at 25% overall on a 4-thread system, and that would be enough if the bottleneck is a single process.
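If you want to watch that from a script rather than the Task Manager graphs, something like this rough sketch would do (it assumes the free Python psutil package is installed on the back-end machine - my assumption, not anything you have mentioned):

Code:
# Quick sketch (assumes the psutil package) to watch per-thread CPU load on the
# back-end machine. On a 4-thread box, one saturated thread shows up as ~100%
# on a single core even though overall usage reads only ~25%.
import psutil

for _ in range(10):                      # sample for roughly 10 seconds
    per_cpu = psutil.cpu_percent(interval=1, percpu=True)
    overall = sum(per_cpu) / len(per_cpu)
    hot = max(per_cpu)
    print(f"overall {overall:5.1f}%  per-thread {per_cpu}  hottest {hot:5.1f}%")
    if hot > 90:
        print("  -> one thread is saturated; look for the process responsible")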
 

Khalil Islamzada

Registered User.
Local time
Today, 22:01
Joined
Jul 16, 2012
Messages
49
Dear all,


Thanks for your kind reply. I am considering installing a server computer for the back-end data; do you think it will help?


Khalil
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 12:31
Joined
Feb 28, 2001
Messages
26,999
Khalil, what will matter here includes MANY factors.

Going to a server WILL help if it is configured with high-speed disk interfaces. Most "ordinary" computers use ATA or some other interface chosen to balance good performance against a cheaper price, i.e. a trade-off. With servers, you typically use faster disk interfaces because you know ahead of time that you need to share data among more users. So, for example, instead of ATA or some other lower-end interface, you get high-speed SCSI or Fibre Channel, and you get disks with faster rotation rates (thus decreasing "latency" issues).

But there are other issues to consider as well, such as having indexes on frequently searched fields; tuning your queries to ask the right questions; using JOIN queries to minimize subsequent lookup operations, etc.
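As a sketch of what I mean by the index and JOIN points (using Python/pyodbc only for illustration; the path, table and field names below are assumptions, not your actual schema):

Code:
# Sketch only: index a frequently searched field and use one JOIN instead of
# repeated per-row lookups. Uses the standard Access ODBC driver; the path,
# table and field names are invented for the example.
import pyodbc

conn = pyodbc.connect(
    r"Driver={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=\\backend-pc\data\utility_be.accdb;"
)
cur = conn.cursor()

# One-time: an index on the search field lets the engine seek instead of
# scanning 200,000+ billing rows.
try:
    cur.execute("CREATE INDEX idxBillingCustomer ON tblBilling (CustomerID)")
    conn.commit()
except pyodbc.Error:
    pass  # index probably exists already

# One JOIN query returns the customer plus their bills in a single round trip.
cur.execute(
    "SELECT c.CustomerName, b.BillDate, b.Amount "
    "FROM tblCustomers AS c INNER JOIN tblBilling AS b "
    "ON b.CustomerID = c.CustomerID "
    "WHERE c.CustomerID = ?",
    (12345,),
)
print(cur.fetchall())
conn.close()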

You would do well to look at the article posted by jdraw, because you would be astounded at how much you can do to get things working faster even without a hardware change. Then again, upgrading the CPU hardware isn't a trivial action either.
 

gemma-the-husky

Super Moderator
Staff member
Local time
Today, 17:31
Joined
Sep 12, 2006
Messages
15,613
I would have assumed that with 25 users, you WERE working with a dedicated server.
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 12:31
Joined
Feb 28, 2001
Messages
26,999
Dave, the OP's post #1 includes:

- The back-end file is stored on a simple computer with a normal operating system.

The "simple computer" could be an I5 or a lower-end AMD chip and limited memory. The fact that it even tries to serve the load suggests that it isn't running into encoded or hard resource limits, but it might be slamming up against virtual space (small swap file) limits. Might be anything.
 

Solo712

Registered User.
Local time
Today, 13:31
Joined
Oct 19, 2012
Messages
828
Khalil, for what it's worth, nearly every time I have experienced the kind of issue you describe - sudden poor response after a certain number of users is reached - the first thing I learned to look at was the server's (or back-end computer's) memory. Most of the time, simply adding memory fixed the problem. With 25 concurrent users I would say you need a minimum of 16 GB of memory to avoid "thrashing", i.e. excessive paging during peak use of the database.

Best,
Jiri
 

Mark_

Longboard on the internet
Local time
Today, 10:31
Joined
Sep 12, 2017
Messages
2,111
Along with what others have posted, what type of network card does your "Server" have? Remember, a slow card/slow network means EVERYONE is trying to get across a small road at the same time, so sudden slowdowns due to network traffic can also be a big issue.
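If you want numbers rather than a feeling, a small sketch like this (assuming the Python psutil package on the back-end machine) will show how many MB/s the NIC is actually pushing during a slowdown, which you can compare against the card's rated speed (roughly 12 MB/s is the ceiling of a 100 Mbit card):

Code:
# Rough sketch (assuming psutil) of network throughput on the back-end machine,
# sampled once per second for ten seconds.
import time
import psutil

prev = psutil.net_io_counters()
for _ in range(10):
    time.sleep(1)
    now = psutil.net_io_counters()
    sent = (now.bytes_sent - prev.bytes_sent) / 1_048_576
    recv = (now.bytes_recv - prev.bytes_recv) / 1_048_576
    print(f"sent {sent:6.2f} MB/s   received {recv:6.2f} MB/s")
    prev = now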
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 12:31
Joined
Feb 28, 2001
Messages
26,999
While Jiri's comments are not necessarily wrong, you would do well to use Performance Monitor as I suggested to verify that your problem IS insufficient memory before going to that expense. Though you DID say you were going to switch to a "real" server. Extra memory on a server rarely hurts anything.

I just think it would be good for you to use the PerfMon task to verify where you are having a resource issue before you attempt to attack it. That way you know you are directing your budget to an appropriate solution.
 

Solo712

Registered User.
Local time
Today, 13:31
Joined
Oct 19, 2012
Messages
828
The_Doc_Man said: "...you would do well to use Performance Monitor as I suggested to verify that your problem IS insufficient memory before going to that expense..."

I don't find PerfMon a useful tool. It's difficult to configure to get anything of value out of it, it's buggy and breaks down easily, and it is known to cause high CPU utilization itself. AFAIK, none of the easily available monitoring tools will give you what you would need for a real, heavy-duty analysis specific to Access, such as the CPU usage for handling data requests, the number of page swaps, and the lengths of the Access service queues. DiskMon may give some idea of traffic on the disks relative to the number of users, but unless you know what the data are telling you, I would rather not bother.

Let me repeat: before going to the expense of a new server, a quick check of how much memory there is on the back-end computer would be well worth the effort. If it is half or less of what I indicated (16 GB), it may be a good idea to install some RAM for a few hundred bucks and see how that helps, before shelling out several thousand for a new server. That would be my approach.

Best,
Jiri
 

Mark_

Longboard on the internet
Local time
Today, 10:31
Joined
Sep 12, 2017
Messages
2,111
Jiri,

I think you may have missed the point.

If there is an observable issue that other tools can identify with little or no expense (other than some time), and that can help resolve the issue BEFORE paying a couple hundred bucks, the OP can benefit from that FIRST.

More importantly, the OP can also look at the machine they are currently using and see whether they can swap it for an existing machine they own that already has more RAM, a better processor, or a faster HD, to see if any of those resolve the problem BEFORE they have to buy RAM and open the case.

Once the OP can identify what is causing their issue, they can choose the best solution for it. Just because you do not find a given tool useful doesn't mean others haven't found ways of doing so. So far, the description of "being slow" means file access isn't going as quickly as they would like. That could mean the machine being used needs a better HD, more RAM, a better bus, a better processor, a change of OS to optimize it as a dedicated file server, a different network card, an upgraded network, or an overhaul of the Access front end to optimize queries and processing - any of those could be the right answer.

Figure out what needs to be fixed first then fix it.
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 12:31
Joined
Feb 28, 2001
Messages
26,999
Solo712 said: "AFAIK, none of the easily available monitoring tools will give you what you would need to do a real, heavy duty analysis specific to Access such as CPU usage to handle requests for data, number of page swaps, and lengths of Access service queues."

Jiri, Task Manager tells you a LOT and isn't that likely to barf on you.

The Performance tab shows CPU saturation (on split screens, one per thread, so you can easily see if you have a saturated CPU) and physical memory usage, so you could tell if you had filled up physical memory. Two major tests on one screen.

The Networking tab shows bandwidth (on split screens, one per connection) which would clearly tell you if your network was banging on itself or not that busy. A "spiky" graph looks RADICALLY different from a saturation graph. You EXPECT an occasional burst on your network and trust me, there IS NO MISTAKING the graph of a saturated network. I've seen them.

The Processes tab can be your friend, too. You can click in any of several column headers to see if some particular process is eating your socks. E.g....

Click on the CPU column and it will sort the processes to show top CPU hog as the top process. If you find that something other than "System Idle Process" is chowing down on your system, you have a major clue right there.

You could also click on I/O Reads, I/O Writes, or Page Faults. And you can select those columns to be displayed from Task Manager using View >> Select Columns... In each case, clicking on the column header allows you to sort to show the top user of that resource at the top of the list.

Those three screens right there take care of many potential problems. You can identify CPU, memory, or network saturation and can also identify "hog" processes from one tool. PerfMon is good for more advanced problem testing if you need to be precise. Yes, it is possible to overconfigure PerfMon to ask too much of it. But then, it IS a second line of attack, not a first line.
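For completeness, the same "sort by column" idea can be scripted. The sketch below assumes the Python psutil package and is only an illustration of the approach (run it on the back-end host, ideally as administrator so the I/O counters are readable); it is not a replacement for Task Manager:

Code:
# Sketch of a "resource hog finder": list the top processes by CPU and by
# total I/O bytes, the same information as sorting Task Manager columns.
import psutil

# Prime the per-process CPU counters, then let one second elapse while
# sampling system-wide CPU (the call doubles as the delay).
for p in psutil.process_iter():
    try:
        p.cpu_percent(None)
    except psutil.Error:
        pass
psutil.cpu_percent(interval=1)

rows = []
for p in psutil.process_iter(['pid', 'name']):
    try:
        cpu = p.cpu_percent(None)
        io = p.io_counters()
        rows.append((cpu, io.read_bytes + io.write_bytes,
                     p.info['name'], p.info['pid']))
    except psutil.Error:
        continue

print("Top CPU users:")
for cpu, _, name, pid in sorted(rows, reverse=True)[:5]:
    print(f"  {name} (pid {pid}): {cpu:.1f}% CPU")

print("Top I/O users (total bytes so far):")
for _, total_io, name, pid in sorted(rows, key=lambda r: r[1], reverse=True)[:5]:
    print(f"  {name} (pid {pid}): {total_io / 1_048_576:.0f} MB")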

Think of it as going to a doctor. Would you like it if you had a sore ankle and he said, "Well, I guess we will have to do exploratory surgery"? Or would you prefer that he took an X-ray (non-invasive and non-anesthetic) first? I was suggesting "cheapest/quickest alternative first."

Your contention that the OP needs 16 GB of RAM is at best a guess - and it might not even be wrong - but it is easy to test whether that surmise is correct for this case. To my way of thinking, bolstered by fifty years of programming experience (no, that number is not a typo), quantitatively identifying a resource hog is a major part AND the first part of any performance battle.
 

Solo712

Registered User.
Local time
Today, 13:31
Joined
Oct 19, 2012
Messages
828
The_Doc_Man said: "Jiri, Task Manager tells you a LOT and isn't that likely to barf on you..."

So it was PerfMon first, but when I pointed out some issues with it, now you are on to Task Manager. OK, I see. Are you talking about Task Manager on a Windows Server, by any chance? I had the impression Khalil was using workstation hardware and OS.

Incidentally, the OP mentions that performance is OK up to 10 users and then gets slow. That is significant info (for me), if this behaviour is consistent. It indicates the issue is workload vs. resource related. I made the outrageous "guess" that it is likely memory. Sure, it could be other things, and by all means check them out. It could be a chattering port on the hub when traffic hits a threshold, the disk may be too fragmented to handle the extra load efficiently, or, hell, maybe the CPU's MIPS are just not able to handle it. But rather than spending hours monitoring and gathering data, which in the end may not tell me much more than what I already know - i.e. the app gets slow with more than 10 people on it - I would go and check (first) how much memory I have. If it's 8 GB or less (again, I assume it's a workstation) I would say: hell, no way that's enough for twenty-five people. Then I would make a decision whether to go to new hardware right away or add RAM first.

But that just shows that you and I approach problems differently, Doc.

Best,
Jiri
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 12:31
Joined
Feb 28, 2001
Messages
26,999
Task Manager on a server is the same task as Task Manager on a workstation. PerfMon on a server is the same as PerfMon on a workstation. They are the same utilities on both. Your objection to PerfMon did make me stop and test what I could do with my version of Task Manager. So yes, I backed down on PerfMon though I don't think it is nearly as bad as you portray.

Jiri, as I said, your guess that it was memory might well be right - but with diagnostic tools available, don't spend ANY money on memory until you verify that the problem is one that can be solved by throwing memory on the box.

Don't forget that if it is an "ordinary" workstation, there might not be ROOM for extra memory, since one of the main differences between servers and workstations is backplane capacity. OK, you could buy higher-density memory chips and swap them out - but test first and see if that is the right path. The first test should NEVER be to muck with hardware when you aren't sure WHICH hardware is the culprit.

As to "hours monitoring data" that's just plain wrong thinking. Takes me about 20 seconds to launch Task Manager and configure the "Processes" screen to have the columns I named. You would be able to tell within minutes whether you were banging up against a maximum. Trying the three screens I mentioned, you would have your answer in five minutes or less. If it takes you hours, you are doing something very wrong here.

The observation that there is a "knee" in the performance curve at 10 users is indeed significant. That means it would be good to run this test during a typical high-load period with his user load above 10. Results should come in quickly. No need to "fake" a load when it appears that a natural load is likely to be available.

AND you are leaving out a third possibility. This might not be purely a matter of hardware. It COULD be that the hosting system does not have a big enough swap file, which is fixable with NO hardware. It could ALSO be that the back end hasn't been optimized according to the FMS article linked by jdraw, which again is fixable without any new hardware.

There is also the issue that if it is a DISK speed problem, memory won't fix it, but the swap to a server would help due to probable differences in backplane speed and in the speed of data transfer for higher-speed disk bus and faster rotation speed on more modern disks.

The test I proposed would reveal what is happening. Yes, we approach problems differently Jiri. I don't shoot from the hip at every performance problem I see. I take the time to ask the right questions.

During my career, I was a system administrator, system analyst, and system programmer for the U.S. Navy for over 28 years. (Also security manager, but that's not relevant here.) I learned a few things about system troubleshooting. As to how well I did? My primary system had an actual (i.e. measured) availability rating of 99.93% uptime, with system log files to prove my point. If it hadn't been for a few hardware failures, I would have been closer to 99.96% - but nobody's perfect.
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 12:31
Joined
Feb 28, 2001
Messages
26,999
Khalil - you should recognize that people with experience often do not have the same backgrounds or experiences, and thus come at problems in different ways, then get into a discussion of how the other has gone astray. Do not let that discourage you.

My approach takes a few extra minutes but costs you nothing. And I certainly agree that Jiri COULD be right in his quick guess on this being a memory issue. I don't want to take that off the table because it IS a possible answer.

My main point is that there is a cheap and easy way to test first and KNOW whether it is (or is not) a memory issue. Don't let our disagreement stop you from running Task Manager on the hosting machine, selecting the Performance tab, and seeing how much physical memory is in use on your system. It costs you almost nothing except a little bit of time.
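If it is easier than watching the graph during a busy period, the same check is a couple of lines of script (again assuming the Python psutil package is available on that machine):

Code:
# Report physical memory in use on the back-end host (assumes psutil).
import psutil

vm = psutil.virtual_memory()
print(f"RAM in use: {vm.used / 1024**3:.1f} of {vm.total / 1024**3:.1f} GB ({vm.percent}%)")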
 

Vassago

Former Staff Turned AWF Retiree
Local time
Today, 13:31
Joined
Dec 26, 2002
Messages
4,751
The user hasn't even stated how much memory the host machine has. Certainly testing the machine for memory issues would be a better approach than randomly spending money on possible solutions without any data to back them up.

I second (third?) doing the monitoring first, before taking random throws at solutions until something sticks. You'll likely get valuable information, even if it doesn't solve your problem in the long term.
 

jdraw

Super Moderator
Staff member
Local time
Today, 13:31
Joined
Jan 23, 2006
Messages
15,364
Khalil,

I agree with those supporting some testing before throwing money (hardware etc) at the problem. Better to identify the problem area and look at feasible solutions.
I also suggest that you identify the backend machine in detail - processor, memory, disk type and capacity, OS, Access version......
People are offering advice re approach, but they are really working blind without some details of your setup.

Good luck with your project.
 

Khalil Islamzada

Registered User.
Local time
Today, 22:01
Joined
Jul 16, 2012
Messages
49
Dear all,


Thanks for your nice comments. I went through the database itself and finally changed my back-end computer to server hardware and software; fortunately, the problem is solved now and the speed is normal.


Thanks again for your nice comments and ideas.




Best,
Khalil
 
