too slow on getting calc done

although all PCs are on x64 systems
It's not the Windows version that matters, it's the Office version.

if you don't split, you are asking for trouble - trouble that may not be fixable except by replacing with a backup, and that is only a temporary patch as it will happen again and again. And at some point you'll find the only viable backup is 6 months old (and you still won't have solved the problem).

As an aside, from your post in #4 - it looks like a crosstab query would do the job. But without knowing your table design and relationships, that's just a guess.
 
OK, I watched the video and it DID clear up some things. What you are doing is not terribly different from what the U.S. Navy did in evaluating risks for a given project. Language? Different. Concept? Same.

For the logic of what you are doing, it is OK to build a query of a query (a layered query). I don't feel qualified to build the exact queries because I can't see the field names that well, but you can build a (rather ugly but simple) query to divide the work. Start with something like

SELECT Site, Status, IsActive, (DueDate < Date()) AS DateOverDue, (Status='New') AS NewStatus, (Status='in progress') AS ProgStatus, .... FROM whatever-source; Call this Layer1.

Next,

SELECT Site, Sum(IIf(IsActive=True, 1, 0)) AS CtIsActive, Sum(IIf(DateOverDue=True, 1, 0)) AS CtDateOver, Sum(IIf(NewStatus=True, 1, 0)) AS CtIsNew,
Sum(IIf(DateOverDue=False AND NewStatus=True AND IsActive=False, 1, 0)) AS Cond1, etc.
FROM Layer1 GROUP BY Site;

(Note that COUNT() would count every non-null row, not just the TRUE ones, which is why Sum(IIf(...)) is used here.)

Then you can perhaps bind this to your continuous form. In the Layer1 query, (Status='New') as NewStatus will be either TRUE or FALSE and the second query just counts the TRUE ones. You CAN build counts of more complex cases (see 2nd layer, Cond1). I'm not sure because it would take a long analysis, but you don't have that many columns so it might be viable. But more important, it should be quick. And you will have to play with this quite a bit to get the correct counts gathered together.

Now, as to double-clicking in a continuous form... Each record in a continuous form displays as a separate record - but there is only one row in the form and it just gets repeated. There is a way to know which field is being selected and from which row even for continuous forms, but from your description, it looks like you identify the site but then hard-coded the other items. This is slow because of the DCounts which act like sub-queries and you are doing a LOT of sub-queries from what I can see. IF you use the layered query approach, you short-cut some of those DCount calls. And every one of those calls you bypass is a call that won't slow you down.
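As a sketch of the double-click idea: in a continuous form, a control's DblClick event fires for the row the user clicked, because Access makes that record current before the event runs. Assuming hypothetical names (a text box txtSite bound to Site, a detail form frmActionDetail, and Site values without apostrophes), something like this opens a filtered list instead of recalculating with DCount:

```vba
' In the continuous form's code-behind.
' txtSite is a hypothetical text box bound to the Site field.
Private Sub txtSite_DblClick(Cancel As Integer)
    ' Me!Site and Me!Status already reflect the row that was
    ' double-clicked, because Access moves the current record
    ' to that row before the event fires.
    DoCmd.OpenForm "frmActionDetail", _
        WhereCondition:="Site='" & Me!Site & "' AND Status='" & Me!Status & "'"
End Sub
```

Opening one filtered form this way replaces a pile of per-cell DCount lookups with a single filtered query.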
 
I'll add another comment to amplify what CJ said. Windows on most modern machines these days is a 64-bit operating system. However, MS Office comes in two flavors (for backwards compatibility issues) and that means you have either 32-bit or 64-bit Office. If you develop code on a 32-bit version of Office, it will run on 64-bit versions if you do some things to assure that your library routine calls (API calls) are set up to allow for the call. If you develop code on a 64-bit version of Office, you cannot go to a machine with a 32-bit version of Office because the addressing will be all wrong. So your 32/64 message is about the machine where you developed and the machine where you are running not having the same versions of Office installed. That isn't a "split" issue. It is a mobility issue.
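To make the same front end load under both 32-bit and 64-bit Office, API declarations are normally wrapped in conditional compilation. A minimal sketch (the Sleep call is just an example of a Windows API declaration, not something this db necessarily needs):

```vba
' Declare Windows API calls so the same module compiles
' in both 32-bit and 64-bit Office.
#If VBA7 Then
    ' Office 2010 and later: PtrSafe is required.
    Private Declare PtrSafe Sub Sleep Lib "kernel32" (ByVal dwMilliseconds As Long)
#Else
    ' Legacy 32-bit Office without VBA7.
    Private Declare Sub Sleep Lib "kernel32" (ByVal dwMilliseconds As Long)
#End If
```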
 
is it possible to set up a dbl click event?
As the others have already said, events are available on form controls so the answer is yes. You never said what you want it to do.

although all PCs are on x64 systems
Do not confuse the bitness of your hardware and OS with the bitness of Office. All hardware is probably 64-bit and has been for more than a decade, so the OS will also be 64-bit since it matches the hardware. Other software can still be 32-bit. If you are running 32-bit Office on your development PC, then the other users should be as well.


If you post a db with the table (obfuscate the data if necessary) as well as the domain function for each box across, then someone will try to turn this into a totals query for you.

Splitting your database is very important. Right now, your users are interfering with each other. You should be able to use the built-in splitter tool; you don't even have to do the split manually.
 
as well as the domain function for each box across
Domain function? Do you mean action status?
I put both the ENG and RU versions there, because when a new record is created it gets the status "new" / "новый".

there are 3 of them:
new - новый
completed - завершён
In progress - в работе
 
I have lost track of this topic, but I assume performance is still the problem and that the cause is the number of DCounts. I think you can use cross tab queries to speed up things. For the "Action Statistics" (sub form) use as recordsource:
Code:
TRANSFORM Count(ActionID) AS ActionCount
SELECT Site, Count(ActionID) AS TotalActions
FROM Tactions
GROUP BY Site
PIVOT Status;
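One refinement worth knowing: a crosstab's columns appear and disappear with the data, which can break a form bound to it. You can pin the columns with a PIVOT ... IN clause; a sketch using the English status values from earlier in the thread (add the RU variants too if both spellings are stored):

```sql
TRANSFORM Count(ActionID) AS ActionCount
SELECT Site, Count(ActionID) AS TotalActions
FROM Tactions
GROUP BY Site
PIVOT Status IN ('new', 'In progress', 'completed');
```

With the IN list, every named column is always present (showing Null where there are no matching rows), so the bound controls never lose their ControlSource.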
 
I think you can use cross tab queries to speed up things. For the "Action Statistics" (sub form) use as recordsource: (the crosstab query above)
Thank you for your code.
Looks good :)



I now have the crosstab query in my form. So if I understood correctly, there is no way to put a dbl click event on these numbers in a crosstab query, right?
Or is it possible for the user to open up some kind of master list so they can see the relevant records? I have something like this in my dbl click on every number.
[screenshot]
 
OK, I watched the video and it DID clear up some things. ... IF you use the layered query approach, you short-cut some of those DCount calls. And every one of those calls you bypass is a call that won't slow you down.
hm.. thank you for your explanation. I will try to implement it this week and will surely give feedback on it :)
 
guys, I've finally reached my db at the workplace.
I did split my db and it took 7-8 minutes (!) to calc all the cells, with average network consumption of 8-10 Mbit per sec, and it doesn't matter whether 1 user is in the db or more. WHY? damn...
The unsplit db took 1 minute to calc everything when more than 2 users are using it, and 1 second with 1 user.
 
The unsplit db took 1 minute to calc everything when more than 2 users are using it, and 1 second with 1 user.

The test didn't run into the third possibility: Taking a lot more than 10 minutes to recover data from a corrupted database because simultaneous users of the same file had a lock collision.
 
Are all users using the same front end?
 
The test didn't run into the third possibility: Taking a lot more than 10 minutes to recover data from a corrupted database because simultaneous users of the same file had a lock collision.
So do I understand you correctly here... there is nothing wrong with my db / system / network, it just takes much more time to do anything - that's simply how a split db works... right?
 
Am I guessing correctly that the front end should not be on the network server, as I did in my test, but on a local hard drive?
What if I put the front end on my local hard drive, would it be a bit faster?
I'm going to check this out tomorrow; I'm away from the network right now.
 
Am I guessing correctly that the front end should not be on the network server, as I did in my test, but on a local hard drive?
To help with the speed a little bit, the recommended set up is front end (FE) on user's local computer, back end (BE) on network share folder, and keep an open connection to the BE in the FE. This can be done by either opening a bound form in the background or a global recordset object.
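A minimal sketch of the persistent-connection idea, assuming the back end contains some small linked table (here a hypothetical tblDummy) the front end can hold open:

```vba
' Keep one connection to the back end open for the life of the
' front end, so Access does not pay the lock-file setup cost on
' every query against the linked tables.
Public gPersistRS As DAO.Recordset   ' global, e.g. in a standard module

Public Sub OpenPersistentConnection()
    ' tblDummy is a hypothetical small linked table in the back end.
    Set gPersistRS = CurrentDb.OpenRecordset("tblDummy", dbOpenSnapshot)
End Sub

Public Sub ClosePersistentConnection()
    If Not gPersistRS Is Nothing Then
        gPersistRS.Close
        Set gPersistRS = Nothing
    End If
End Sub
```

Call OpenPersistentConnection from the startup form's Load event and ClosePersistentConnection when the application shuts down.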
 
You might want to take a look at this thread as I suspect you have a number of issues to address

 
So do I understand you correctly here... there is nothing wrong with my db / system / network, it just takes much more time to do anything - that's simply how a split db works... right?

That depends. See the posts by theDBguy and CJ_London.

In the specific case where a user is running a monolithic database (i.e. not split) on the user's local machine, part of the speed difference is that the back-end data is on the same machine as the front end, so no network latency is involved. If the back end is hosted on another machine across the network, a large part of that speed difference is due to the gap between internal memory-bus transfers and network-interface transfers.
 
