When querying a MongoDB collection with millions of documents and filtering or sorting on non-indexed fields, queries run too slowly because MongoDB has to scan the whole collection. In MySQL I can work around this with a subselect that restricts the scan to the last 40k rows, for example:
select c.name, c.age, c.address -- other fields
from (select * from myTable order by id desc limit 40000) as c
where c.name = 'My name' -- more and more filters
order by c.date_req desc
limit 25
In this SQL, I take the last 40k rows first and only then apply the filtering and sorting logic, so it runs quickly even when the table has millions of rows.
On MongoDB, I only get good performance when filtering or sorting on indexed fields; otherwise queries are too slow. I presume I can't create an index on every field, so what can I do in this case? Is there something similar to this in MongoDB?
The following recommendations should help with the SQL side of the tuning process. There are two steps below: first create an index that supports the subselect's ORDER BY, then run the rewritten query. A rough MongoDB analogue is sketched after each step.
ALTER TABLE `myTable` ADD INDEX `mytable_idx_id` (`id`);
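On the MongoDB side, the _id field always carries a built-in index, so the "last N documents" cut can already be index-backed; what usually needs an explicit index is the field you sort on after the cut. A minimal mongo-shell sketch, assuming the collection is named myTable and the secondary sort field is date_req (both names carried over from the SQL above, so treat them as placeholders):

// _id has a built-in index, so { _id: -1 } sorts are already index-backed.
// Add a descending index for the post-cut sort field (assumed name: date_req).
db.myTable.createIndex({ date_req: -1 });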
SELECT
    c.name,
    c.age,
    c.address
FROM
    (SELECT *
     FROM myTable
     ORDER BY myTable.id DESC
     LIMIT 40000) AS c
WHERE
    c.name = 'My name'
ORDER BY
    c.date_req DESC
LIMIT 25;
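As for the MongoDB half of the question: the subselect trick translates fairly directly into an aggregation pipeline, where an initial $sort plus $limit takes the last 40k documents and the remaining stages filter and re-sort only that slice. A hedged sketch in mongo-shell syntax, assuming the same collection and field names as the SQL above (myTable, name, age, address, date_req) and using _id order in place of id:

// Rough MongoDB analogue of the MySQL subselect; names are assumptions from the SQL.
db.myTable.aggregate([
  { $sort: { _id: -1 } },            // walk documents in reverse insertion order...
  { $limit: 40000 },                 // ...but keep only the last 40k (the "subselect")
  { $match: { name: "My name" } },   // filter the slice on a non-indexed field
  { $sort: { date_req: -1 } },       // re-sort just the filtered slice
  { $limit: 25 },                    // final page size
  { $project: { name: 1, age: 1, address: 1 } } // keep only the fields the SQL selects
]);

Because the leading $sort can be satisfied by the built-in _id index (and is coalesced with the $limit that follows it), MongoDB never sorts the full collection; the later $match and $sort only touch the 40k-document slice.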