== Internals

This chapter may not apply to embedded Ranch as embedding allows you
to use an architecture specific to your application, which may or may
not be compatible with the description of the Ranch application.

Note that for everything related to efficiency and performance,
you should perform the benchmarks yourself to get the numbers that
matter to you. Generic benchmarks found on the web may or may not
be of use to you; you can never know until you benchmark your own
system.

=== Architecture

Ranch is an OTP application.

Like all OTP applications, Ranch has a top supervisor. It is responsible
for supervising the `ranch_server` process and all the listeners that
will be started.

The `ranch_server` gen_server is a central process keeping track of the
listeners and their acceptors. It does so through the use of a public ets
table called `ranch_server`. The table is owned by the top supervisor
to improve fault tolerance: if the `ranch_server` gen_server fails,
no information is lost and the restarted process can continue as if
nothing happened.
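
Because the table is public, it can be inspected from any shell, which
can help when debugging. A minimal sketch (read-only inspection; the
exact contents of the table are an internal detail that varies between
Ranch versions):

[source,erlang]
----
%% Table metadata: owner, size, protection and so on.
ets:info(ranch_server).
%% Dump everything the listeners and acceptors have registered.
ets:tab2list(ranch_server).
----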

Ranch uses a custom supervisor for managing connections. This supervisor
keeps track of the number of connections and handles connection limits
directly. While it is heavily optimized to perform the task of creating
connection processes for accepted connections, it still follows the
OTP principles and the usual `sys` and `supervisor` calls will work on
it as expected.
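
For instance, standard OTP introspection functions can be used on it.
In the sketch below, `ConnsSup` is a placeholder for the pid of a
listener's connection supervisor, which can be located through the
supervision tree (for example with `observer:start()`):

[source,erlang]
----
%% Returns the usual proplist of specs/active/supervisors/workers
%% counts, like for any other supervisor.
supervisor:count_children(ConnsSup).
%% Lists one child per connection process.
supervisor:which_children(ConnsSup).
%% The sys module works too, since the supervisor follows OTP
%% principles and speaks the system message protocol.
sys:get_status(ConnsSup).
----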

Listeners are grouped into the `ranch_listener_sup` supervisor and
consist of three kinds of processes: the listener gen_server, the
acceptor processes and the connection processes, both grouped under
their own supervisor. All of these processes are registered to the
`ranch_server` gen_server with varying amounts of information.

All socket operations, including listening for connections, go through
transport handlers. Accepted connections are given to the protocol handler.

Transport handlers are simple callback modules for performing operations on
sockets. Protocol handlers start a new process, which receives socket
ownership, with no requirements on how the code should be written inside
that new process.
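
To illustrate the contract, here is a minimal echo protocol handler
written against the Ranch 1.x API (`ranch:accept_ack/1`; later Ranch
versions use `ranch:handshake/1` instead). The module name and the
receive timeout are arbitrary choices for this sketch:

[source,erlang]
----
-module(echo_protocol).
-behaviour(ranch_protocol).
-export([start_link/4]).
-export([init/4]).

%% Called by Ranch's connection supervisor for every accepted
%% connection. The new process receives ownership of the socket.
start_link(Ref, Socket, Transport, Opts) ->
    Pid = spawn_link(?MODULE, init, [Ref, Socket, Transport, Opts]),
    {ok, Pid}.

init(Ref, Socket, Transport, _Opts) ->
    %% Completes the transfer of socket ownership to this process.
    ok = ranch:accept_ack(Ref),
    loop(Socket, Transport).

%% From here on, how the process is structured is entirely up to you.
loop(Socket, Transport) ->
    case Transport:recv(Socket, 0, 5000) of
        {ok, Data} ->
            Transport:send(Socket, Data),
            loop(Socket, Transport);
        _ ->
            ok = Transport:close(Socket)
    end.
----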

=== Number of acceptors

The second argument to `ranch:start_listener/6` is the number of
processes that will be accepting connections. Care should be taken
when choosing this number.

First of all, it should not be confused with the maximum number
of connections. Acceptor processes are only used for accepting and
have nothing else in common with connection processes. Therefore
there is nothing to be gained from setting this number too high;
in fact, it can slow everything else down.

Second, this number should be high enough to allow Ranch to accept
connections concurrently. But the number of cores available doesn't
seem to be the only factor for choosing this number, as we can
observe faster accepts if we have more acceptors than cores. It
might be entirely dependent on the protocol, however.

Our observations suggest that using 100 acceptors on modern hardware
is a good solution, as it's big enough to always have acceptors ready
and it's low enough that it doesn't have a negative impact on the
system's performance.
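
For example, the hypothetical echo listener below is started with
100 acceptors, using the Ranch 1.x calling convention; the listener
name and port are placeholders:

[source,erlang]
----
{ok, _} = ranch:start_listener(echo,
    100, %% Number of acceptor processes.
    ranch_tcp, [{port, 5555}],
    echo_protocol, []).
----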

=== Platform-specific TCP features

Some socket options are platform-specific and not supported by `inet`.
They can be of interest because they generally are related to
optimizations provided by the underlying OS. They can still be enabled
thanks to the `raw` option, for which we will see an example.

One of these features is `TCP_DEFER_ACCEPT` on Linux. It is a simplified
accept mechanism which will wait for application data to come in before
handing out the connection to the Erlang process.

This is especially useful if you expect many connections to be mostly
idle, perhaps part of a connection pool. They can be handled by the
kernel directly until they send any real data, instead of allocating
resources to idle connections.

To enable this mechanism, the following option can be used.

.Using raw transport options
[source,erlang]
----
{raw, 6, 9, << 30:32/native >>}
----

It means: on layer 6 (TCP), turn on option 9 (`TCP_DEFER_ACCEPT` on
Linux) with the given integer parameter, here a timeout of 30 seconds.
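
The `raw` tuple is passed alongside the other transport options.
Continuing the hypothetical echo listener from before:

[source,erlang]
----
{ok, _} = ranch:start_listener(echo, 100,
    ranch_tcp, [
        {port, 5555},
        %% TCP_DEFER_ACCEPT: wait up to 30 seconds for data
        %% before handing the connection out (Linux only).
        {raw, 6, 9, << 30:32/native >>}
    ],
    echo_protocol, []).
----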