[Solved] What is the proper way to calculate cumulative users per day?

EverSQL Database Performance Knowledge Base


Database type: MySQL

I have a MySQL table named transaction with 5 columns: id (int), from (int), to (int), value (float), and time (datetime).

And I need to calculate the cumulative number of users (the count of distinct "from" values) for some specific receivers ("to") for each day.

For example:

+-----+------+-----+-------+----------------------------+
| id  | from | to  | value | time                       |
+-----+------+-----+-------+----------------------------+
| 1   |  1   | 223 |     1 | 2019-01-01 01:11:30.000000 |
| 2   |  1   | 224 |     2 | 2019-01-01 21:37:30.000000 |
| 3   |  2   |  25 |   0.1 | 2019-01-02 03:05:30.000000 |
| 4   |  2   | 223 |   0.2 | 2019-01-02 13:26:30.000000 |
| 5   |  3   |  26 |     3 | 2019-01-02 19:29:30.000000 |
| 6   |  3   | 227 |     4 | 2019-01-03 21:37:30.000000 |
| 7   |  1   | 224 |     5 | 2019-01-05 22:03:30.000000 |
| 8   |  4   | 224 |     1 | 2019-01-05 23:48:30.000000 |
| 9   |  5   | 223 |     2 | 2019-01-06 05:41:30.000000 |
| 10  |  6   |  28 |     2 | 2019-01-06 20:19:30.000000 |
+-----+------+-----+-------+----------------------------+

And the specific `to` list is [223, 224, 227]

Then the expected result is:

2019-01-01: 1 # [1]
2019-01-02: 2 # [1, 2]
2019-01-03: 3 # [1, 2, 3]
2019-01-04: 3 # [1, 2, 3]
2019-01-05: 4 # [1, 2, 3, 4]
2019-01-06: 5 # [1, 2, 3, 4, 5]

The direct way is to use SQL:

SELECT COUNT(DISTINCT `from`)
FROM `transaction`
FORCE INDEX (to_time_from)
WHERE `time` < '2019-01-0X'
  AND `to` IN (223, 224, 227);

But the problem is that the transaction table is big (about 1 million rows per day, spanning about 2 years), and the real `to` list contains about 1,000 values. The SQL above is very slow, even though I have created an index on (`to`, `time`, `from`) and forced its use.
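To make the expected numbers concrete, here is a small runnable sketch of this per-day approach, using Python's sqlite3 as a stand-in for MySQL (double quotes replace backticks for the reserved from/to identifiers, and fractional seconds are dropped from the sample timestamps):

```python
import sqlite3
from datetime import date, timedelta

# In-memory sqlite3 stand-in for the MySQL table from the question.
conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE "transaction" '
             '(id INTEGER, "from" INTEGER, "to" INTEGER, value REAL, time TEXT)')
conn.executemany('INSERT INTO "transaction" VALUES (?, ?, ?, ?, ?)', [
    (1, 1, 223, 1.0, "2019-01-01 01:11:30"), (2, 1, 224, 2.0, "2019-01-01 21:37:30"),
    (3, 2,  25, 0.1, "2019-01-02 03:05:30"), (4, 2, 223, 0.2, "2019-01-02 13:26:30"),
    (5, 3,  26, 3.0, "2019-01-02 19:29:30"), (6, 3, 227, 4.0, "2019-01-03 21:37:30"),
    (7, 1, 224, 5.0, "2019-01-05 22:03:30"), (8, 4, 224, 1.0, "2019-01-05 23:48:30"),
    (9, 5, 223, 2.0, "2019-01-06 05:41:30"), (10, 6, 28, 2.0, "2019-01-06 20:19:30"),
])

def cumulative_users(next_day):
    """Distinct senders to the target receivers strictly before next_day,
    i.e. the exclusive '2019-01-0X' bound of the query above."""
    (n,) = conn.execute(
        'SELECT COUNT(DISTINCT "from") FROM "transaction" '
        'WHERE time < ? AND "to" IN (223, 224, 227)',
        (next_day,)).fetchone()
    return n

# One query per reported day, just like the MySQL version.
for i in range(6):
    day = date(2019, 1, 1) + timedelta(days=i)
    print(day, cumulative_users(str(day + timedelta(days=1))))
```

Running this prints the cumulative counts 1, 2, 3, 3, 4, 5 for 2019-01-01 through 2019-01-06, which also shows why the query is expensive: each day re-scans all history from the beginning.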

Besides, although the number of daily transactions reaches about 1 million, there are only about 10,000 daily active users. So I'm considering storing the DAU list in a NoSQL store, like:

2019-01-01: [1]
2019-01-02: [2]
2019-01-03: [3]
2019-01-04: []
2019-01-05: [1, 4]
2019-01-06: [5]

Then, given a date d, I would just retrieve all the DAU lists no later than d and take their union to get the cumulative users, something like len(set(dau_list1 + dau_list2 + dau_list3 + ...)).
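As a minimal sketch of that union step, here is the same idea in plain Python, with a dict of hypothetical per-day DAU lists (derived from the sample table for receivers 223/224/227) standing in for whatever the NoSQL store would return:

```python
# Hypothetical per-day DAU lists for the target receivers [223, 224, 227],
# keyed by ISO date strings, as they might be stored in a NoSQL store.
dau = {
    "2019-01-01": [1],
    "2019-01-02": [2],
    "2019-01-03": [3],
    "2019-01-04": [],
    "2019-01-05": [1, 4],
    "2019-01-06": [5],
}

def cumulative_users(d):
    """Union all DAU lists for days <= d and count distinct users."""
    users = set()
    for day, lst in dau.items():
        if day <= d:  # ISO date strings compare chronologically
            users.update(lst)
    return len(users)

print(cumulative_users("2019-01-04"))
print(cumulative_users("2019-01-06"))
```

Note the union is still O(total list sizes) per query; for ~10,000 users per day over 2 years, that is a few million set insertions per lookup, which is cheap compared to rescanning a billion-row table.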

But I have no idea which NoSQL store to use.

  1. Redis loads everything into memory, but I only need this data when I query it.
  2. MongoDB:
    1. It seems I would need to create a collection for every date, because I need a unique index on from. Am I right?
    2. I know I can use an array field with the $addToSet operator, but that is O(n) and very slow.

So, what is the proper way to do this?

How to optimize this SQL query?

The following recommendations will help you in your SQL tuning process.
You'll find 3 sections below:

  1. Description of the steps you can take to speed up the query.
  2. The optimal indexes for this query, which you can copy and create in your database.
  3. An automatically re-written query you can copy and execute in your database.
The optimization process and recommendations:
  1. Avoid Optimizer Hints (modified query below): Using optimizer hints such as FORCE INDEX can be valuable in the short term. When important aspects such as the amount of data or the data distribution change, these hints can do more harm than good.
  2. Create Optimal Indexes (modified query below): The recommended indexes are an integral part of this optimization effort and should be created before testing the execution duration of the optimized query.
Optimal indexes for this query:
ALTER TABLE `transaction` ADD INDEX `transaction_idx_time` (`time`);
The optimized query:
SELECT
        COUNT(DISTINCT (`transaction`.`From`)) 
    FROM
        `transaction` 
    WHERE
        `transaction`.`time` < '2019-01-0X' 
        AND `transaction`.`to` IN (
            223, 224, 227
        )
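The rewritten query still scans all history once per requested date. If the goal is the cumulative count for every day at once, a common alternative (not part of EverSQL's output; sketched here with sqlite3 standing in for MySQL, reusing the sample rows from the question) is to compute each sender's first-seen day in a single GROUP BY pass and then keep a running total:

```python
import sqlite3
from collections import Counter

# Rebuild the sample table from the question (sqlite3 stand-in for MySQL).
conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE "transaction" '
             '(id INTEGER, "from" INTEGER, "to" INTEGER, value REAL, time TEXT)')
conn.executemany('INSERT INTO "transaction" VALUES (?, ?, ?, ?, ?)', [
    (1, 1, 223, 1.0, "2019-01-01 01:11:30"), (2, 1, 224, 2.0, "2019-01-01 21:37:30"),
    (3, 2,  25, 0.1, "2019-01-02 03:05:30"), (4, 2, 223, 0.2, "2019-01-02 13:26:30"),
    (5, 3,  26, 3.0, "2019-01-02 19:29:30"), (6, 3, 227, 4.0, "2019-01-03 21:37:30"),
    (7, 1, 224, 5.0, "2019-01-05 22:03:30"), (8, 4, 224, 1.0, "2019-01-05 23:48:30"),
    (9, 5, 223, 2.0, "2019-01-06 05:41:30"), (10, 6, 28, 2.0, "2019-01-06 20:19:30"),
])

# One pass over the table: the first day each sender reached a target receiver.
first_days = [d for (d,) in conn.execute(
    'SELECT MIN(DATE(time)) FROM "transaction" '
    'WHERE "to" IN (223, 224, 227) GROUP BY "from"')]

# Running total of new users per first-seen day
# (days that add no new users are simply omitted).
cumulative = {}
total = 0
for day, new_users in sorted(Counter(first_days).items()):
    total += new_users
    cumulative[day] = total

print(cumulative)
```

The GROUP BY produces at most one row per user (~10,000 per day of activity), so the running total is cheap; MySQL 8 could also compute it server-side with a window function. This is a sketch of the general technique, not a drop-in replacement for the query above.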

* The original question was posted on StackOverflow.