To reduce the load on your server and allow more concurrent users to hit your website, you can use caching. Ruby on Rails offers many types of caching, some of which are covered in the guides. Some require a dedicated caching service such as AWS ElastiCache (Memcached or Redis), and the minimum cost of one of those instances is about the same as a t2.micro. If you are only running a simple Rails app, that may not be worth it, especially since there is a simple method of caching that costs nothing extra and provides an immense speed boost in many situations. The Rails guides call it page caching, and while it was removed from Rails core in Rails 4, a handy gem called actionpack-page_caching still provides the same functionality. It creates an HTML-only snapshot of your Rails page and allows it to be served by Nginx or Apache alone, which should be much faster.
I installed it on my own Rails blog (this site) following the gem's instructions, and here are the results. I did this test with Apache Bench; I ran
ab -n 1000 -c 100 http://localhost:3000/
where 3000 is the local reverse-forwarding port of my Nginx server.
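Before getting to the numbers, here is roughly what the setup looks like. This is a minimal sketch, not the gem's exact documented configuration; the controller, action names, and cache directory are illustrative choices for a typical blog:

```ruby
# Gemfile
gem "actionpack-page_caching"

# config/environments/production.rb
# Recent versions of the gem want an explicit cache directory; putting
# snapshots under public/ lets the web server find them without rewrites.
config.action_controller.page_cache_directory = Rails.public_path
config.action_controller.perform_caching = true

# app/controllers/posts_controller.rb
class PostsController < ApplicationController
  # After the first render, an HTML snapshot (e.g. public/posts/1.html)
  # is written to disk and can be served as a plain static file.
  caches_page :index, :show
end
```

For the full speedup, the web server must check for the snapshot before proxying to Rails (for example with Nginx's try_files directive); otherwise every request still passes through the Rails stack.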
Here are the results before the change:
Server Software:        nginx/1.10.2
Server Hostname:        localhost
Server Port:            3000

Document Path:          /
Document Length:        17147 bytes

Concurrency Level:      100
Time taken for tests:   130.231 seconds
Complete requests:      1000
Failed requests:        98
   (Connect: 0, Receive: 0, Length: 98, Exceptions: 0)
Non-2xx responses:      98
Total transferred:      16733866 bytes
HTML transferred:       15933070 bytes
Requests per second:    7.68 [#/sec] (mean)
Time per request:       13023.087 [ms] (mean)
Time per request:       130.231 [ms] (mean, across all concurrent requests)
Transfer rate:          125.48 [Kbytes/sec] received

Connection Times (ms)
              min   mean[+/-sd]  median    max
Connect:        0      2    4.2       0     16
Processing:    60  12789 22374.0   5491  85057
Waiting:       52  12764 22355.9   5484  85054
Total:         63  12790 22373.3   5491  85057

Percentage of the requests served within a certain time (ms)
  50%   5491
  66%   5742
  75%   6160
  80%   7752
  90%  45290
  95%  82171
  98%  83974
  99%  84495
 100%  85057 (longest request)
Here are the results after applying the caching:
Server Software:        nginx/1.10.2
Server Hostname:        localhost
Server Port:            3000

Document Path:          /
Document Length:        189 bytes

Concurrency Level:      100
Time taken for tests:   25.383 seconds
Complete requests:      1000
Failed requests:        46
   (Connect: 0, Receive: 0, Length: 46, Exceptions: 0)
Non-2xx responses:      954
Total transferred:      1283196 bytes
HTML transferred:       969068 bytes
Requests per second:    39.40 [#/sec] (mean)
Time per request:       2538.254 [ms] (mean)
Time per request:       25.383 [ms] (mean, across all concurrent requests)
Transfer rate:          49.37 [Kbytes/sec] received

Connection Times (ms)
              min   mean[+/-sd]  median    max
Connect:        0     56  169.4       0    986
Processing:    11   2036 5814.8      62  23053
Waiting:       11   2035 5813.1      61  23053
Total:         11   2093 5982.0      62  23611

Percentage of the requests served within a certain time (ms)
  50%     62
  66%     70
  75%     78
  80%     82
  90%  18281
  95%  18288
  98%  22617
  99%  23073
 100%  23611 (longest request)
As you can see, requests per second went from 7.68 to almost 39.4, the median request time dropped from about 5.5 seconds to 62 milliseconds, and there were fewer failed requests.
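The gain comes from skipping the entire Rails stack: once the snapshot exists, serving a page is just a file read. The mechanism can be sketched in a few lines of plain Ruby (the paths and method names here are illustrative, not the gem's internals):

```ruby
require "fileutils"

# Map a request path to a snapshot location under a public directory,
# following the same convention page caching uses ("/" => index.html).
def cache_path(request_path, root: "public")
  page = request_path == "/" ? "/index" : request_path.chomp("/")
  File.join(root, "#{page}.html")
end

# The first request pays the cost of rendering and writes the snapshot...
html = "<html><body>Hello from the cache</body></html>"
path = cache_path("/", root: "tmp_cache")
FileUtils.mkdir_p(File.dirname(path))
File.write(path, html)

# ...and every later request is a static file read; when the web server
# serves the file directly, no Ruby runs at all.
puts File.read(cache_path("/", root: "tmp_cache"))
```

That static read is what Nginx is doing for the cached responses above, which is why the fast percentiles collapse to tens of milliseconds.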
In theory, the gem requires you to call expire_page when updating a page, for instance when a comment is added or a post is updated, but I've found that this seems to be unnecessary. The gem appears to know on its own when to invalidate the page and regenerate it.
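If you do want explicit invalidation, the documented approach is to expire the snapshot from whichever controller changes the content. A sketch, with illustrative model and route names:

```ruby
class CommentsController < ApplicationController
  def create
    @comment = Comment.create!(comment_params)
    # Delete the stale snapshot so the next request re-renders the page
    # and writes a fresh one.
    expire_page controller: "posts", action: "show", id: @comment.post_id
    redirect_to @comment.post
  end
end
```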
I would not expect this gem to provide much of a performance speedup if each user has a login area or a customized browsing experience, since a new page would be generated whenever the content differed from the previously cached page. However, it's absolutely brilliant for a blog where every user sees the same content. I suspect its limited usefulness in many Rails applications is the reason it was removed from the base install in Rails 4. Nevertheless, in the correct use case the performance gain is massive.