Afraid not. That mod is just to help automate common administrative queries. The easiest solution to implement is changing the option I mentioned above.
If you do not want to do that (and honestly I don't see why not), then here is my thought: why on earth would you want to browse through 120,000 threads? The data does not get lost; you can still search for it and find it. I don't see people going past about 5-10 pages, maybe 20 if they are bored. People use search.
If your forum is so busy that you cannot restrict the listing to the past 6-12 months because users will want to browse further back, then I would suggest creating some subforums and shifting the older content there instead. That way the listing query can filter and index by forumid and get far faster results without losing any user experience, as sketched below.
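To make that concrete, here is a minimal sketch assuming a vBulletin-style schema; the table and column names (thread, threadid, forumid, lastpost) and the forum id are placeholders for whatever your board actually uses.

```sql
-- Hypothetical schema: names are placeholders.
-- A composite index lets MySQL jump straight to one forum's threads,
-- already ordered by activity, instead of scanning the whole table.
ALTER TABLE thread ADD INDEX forumid_lastpost (forumid, lastpost);

-- One page of the active forum's listing; older threads live in the
-- archive subforums and are simply never touched by this query.
SELECT threadid, title
FROM thread
WHERE forumid = 5              -- placeholder id of the active forum
ORDER BY lastpost DESC
LIMIT 0, 20;
```

Because the WHERE clause is an equality on the first index column, MySQL can satisfy the ORDER BY from the same index, so the page comes back without a filesort.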
Anyway, if you insist on returning huge datasets to the user, then here are more ideas:
Upgrade your hardware - This is generally expensive and will only solve the problem temporarily.
Implement caching - If you use third-party caching, I suppose it's of medium difficulty to set up. If you use MySQL's query cache, it would be pretty simple. In this example, all you'd need to do is some normalization of the Unix timestamp it uses in the field list: round it to the nearest thousand or even hundred seconds, so that repeated requests produce identical SQL and the cache can be used. The way it's set up now, it can't be. With that fixed, query results are re-used (instant repeat queries) until the data changes and invalidates the cache entry.
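To illustrate what the rounding buys you (a sketch only; the thread/lastpost names and the cutoff value are placeholders): MySQL's query cache only serves a result when the incoming statement is byte-for-byte identical to a cached one, and it refuses to cache statements containing non-deterministic functions like UNIX_TIMESTAMP() at all, so the cutoff has to be computed in the application and rounded before it is embedded as a literal.

```sql
-- Never cached: UNIX_TIMESTAMP() is non-deterministic, so the query
-- cache skips this statement entirely.
SELECT threadid, title
FROM thread
WHERE lastpost > UNIX_TIMESTAMP() - 31536000;   -- last 12 months

-- Cacheable: compute the cutoff in the application, rounded down to
-- the nearest 1000 seconds, then embed it as a literal, e.g. in PHP:
--   $cutoff = floor((time() - 31536000) / 1000) * 1000;
-- Every request in the same ~17-minute window sends the identical
-- statement and is answered straight from the cache.
SELECT threadid, title
FROM thread
WHERE lastpost > 1700000000;   -- example literal produced by the rounding
```

The trade-off is that the "last 12 months" boundary drifts by up to 1000 seconds between windows, which no user will ever notice.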
Again, this is a band-aid solution, especially if you rely on MySQL's query cache.
Implement Sphinx - This is quite difficult/expensive to set up, but it's probably the most beneficial in terms of performance and headroom for growth. Sphinx is a search daemon which is often used to replace keyword searching, but it can also perform full data scans VERY quickly. You can have it scan through hundreds of millions of rows in the time it would take MySQL to scan tens of thousands. For a board your size, it would bring the data back in under a tenth of a second easily. A rough sketch of the setup follows below.
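For a rough idea of what that setup involves (a sketch under assumed names, not a drop-in config): Sphinx pulls the thread table in via a SQL query you supply, stores forumid and lastpost as attributes it can filter and sort on, and then answers listing queries through searchd.

```
# sphinx.conf sketch -- credentials, paths, and names are placeholders
source threads_src
{
    type               = mysql
    sql_host           = localhost
    sql_user           = forum
    sql_pass           = secret
    sql_db             = forum
    # first column is the document id; the rest become fields/attributes
    sql_query          = SELECT threadid, forumid, lastpost, title FROM thread
    # filterable integer attribute
    sql_attr_uint      = forumid
    # sortable timestamp attribute
    sql_attr_timestamp = lastpost
}

index threads
{
    source = threads_src
    path   = /var/lib/sphinx/threads
}
```

Once indexed, the listing query runs against searchd over SphinxQL (it speaks the MySQL protocol, by default on port 9306), so the application-side change is mostly a matter of pointing the query at a different connection:

```sql
SELECT * FROM threads
WHERE forumid = 5
ORDER BY lastpost DESC
LIMIT 0, 20;
```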