* pooler - An OTP Process Pool Application

The pooler application allows you to manage pools of OTP behaviors
such as gen_servers, gen_fsms, or supervisors, and provide consumers
with exclusive access to pool members using =pooler:take_member=.

** What pooler does

*** Protects the members of a pool from being used concurrently

The main pooler interface is =pooler:take_member/0= and
=pooler:return_member/2=. The pooler server will keep track of which
members are *in use* and which are *free*. There is no need to call
=pooler:return_member= if the consumer is a short-lived process; in
this case, pooler will detect the consumer's normal exit and reclaim
the member. To achieve this, pooler tracks the calling process of
=take_member= as the consumer of the pool member. Thus pooler assumes
that there is no middle-man process calling =take_member= and handing
out the member pid to another worker process.

*** Maintains the size of the pool

You specify an initial and a maximum number of members in the pool.
Pooler will create new members on demand until the maximum member
count is reached. New pool members are added to replace members that
crash. If a consumer crashes, the member it was using will be
destroyed and replaced. You can configure pooler to periodically
check for and remove members that have not been used recently in
order to reduce the member count back to its initial size.

*** Manages multiple pools

A common configuration is to have each pool contain client processes
connected to a particular node in a cluster (think database read
slaves). Pooler will randomly select a pool to fetch a member from.
If the randomly selected pool has no free members, pooler will select
a member from the pool with the most free members. If there is no
pool with available members, pooler will return =error_no_members=.

You can ask for a member from a specified pool using
=pooler:take_member/1=. If you ensure your code always asks for
members by pool name, you can use pooler to pool clients for
different backend services.

** Motivation

The need for pooler arose while writing an Erlang-based application
that uses [[https://wiki.basho.com/display/RIAK/][Riak]] for data storage. Riak's protocol buffer client is a
=gen_server= process that initiates a connection to a Riak node. A
pool is needed to avoid spinning up a new client for each request in
the application. Reusing clients also has the benefit of keeping the
vector clocks smaller since each client ID corresponds to an entry in
the vector clock.

When using the Erlang protocol buffer client for Riak, one should
avoid accessing a given client concurrently. This is because each
client is associated with a unique client ID that corresponds to an
element in an object's vector clock. Concurrent action from the same
client ID defeats the vector clock. For some further explanation, see
[[http://lists.basho.com/pipermail/riak-users_lists.basho.com/2010-September/001900.html][post 1]] and [[http://lists.basho.com/pipermail/riak-users_lists.basho.com/2010-September/001904.html][post 2]]. Note that concurrent access to Riak's pb client is
actually OK as long as you avoid updating the same key at the same
time.

So the pool needs to have checkout/checkin semantics that give
consumers exclusive access to a client. On top of that, in order to
evenly load a Riak cluster and be able to continue in the face of
Riak node failures, consumers should spread their requests across
clients connected to each node. The client pool provides an easy way
to load balance.
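To make the checkout/checkin semantics concrete, here is a minimal
sketch of a consumer that takes a member, uses it, and returns it
with =ok= or =fail=. The =fetch_value/2= helper is hypothetical, and
the =riakc_pb_socket:get/3= call assumes a pool of Riak pb clients
like the one configured in the next section.

#+BEGIN_SRC erlang
%% Hypothetical helper illustrating exclusive checkout/checkin.
%% Assumes pooler is running with pools of riakc_pb_socket members.
fetch_value(Bucket, Key) ->
    case pooler:take_member() of
        error_no_members ->
            %% every pool is at max_count and fully in use
            {error, no_members};
        Pid ->
            try riakc_pb_socket:get(Pid, Bucket, Key) of
                Result ->
                    %% healthy member: hand it back for reuse
                    pooler:return_member(Pid, ok),
                    Result
            catch
                _Class:Reason ->
                    %% suspect member: pooler destroys it and
                    %% starts a replacement
                    pooler:return_member(Pid, fail),
                    {error, Reason}
            end
    end.
#+END_SRC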
** Usage and API

*** Pool Configuration

Pool configuration is specified in the pooler application's
environment. This can be provided in a config file using =-config= or
set at startup using =application:set_env(pooler, pools, Pools)=.
Here's an example config file that creates three pools of Riak pb
clients, each talking to a different node in a local cluster:

#+BEGIN_SRC erlang
% pooler.config
% Start Erlang as: erl -config pooler
% -*- mode: erlang -*-
% pooler app config
[
 {pooler, [
           {pools, [
                    [{name, "rc8081"},
                     {max_count, 5},
                     {init_count, 2},
                     {start_mfa,
                      {riakc_pb_socket, start_link, ["localhost", 8081]}}],

                    [{name, "rc8082"},
                     {max_count, 5},
                     {init_count, 2},
                     {start_mfa,
                      {riakc_pb_socket, start_link, ["localhost", 8082]}}],

                    [{name, "rc8083"},
                     {max_count, 5},
                     {init_count, 2},
                     {start_mfa,
                      {riakc_pb_socket, start_link, ["localhost", 8083]}}]
                   ]}
           %% if you want to enable metrics, set this to a module with
           %% an API conformant to the folsom_metrics module.
           %% If this config is missing, then no metrics are sent.
           %% {metrics_module, folsom_metrics}
          ]}
].
#+END_SRC

Each pool has a unique name, an initial and maximum number of
members, and an ={M, F, A}= describing how to start members of the
pool. When pooler starts, it will create members in each pool
according to =init_count=.

**** Culling stale members

The =cull_interval= and =max_age= pool configuration parameters allow
you to control how (or if) the pool should be returned to its initial
size after a traffic burst. Both parameters take a time value
specified as a tuple with the intended units. The following examples
are valid:

#+BEGIN_SRC erlang
%% two minutes, your way
{2, min}
{120, sec}
{1200, ms}
#+END_SRC

The =cull_interval= determines the schedule on which a check will be
made for stale members. Checks are scheduled using
=erlang:send_after/3=, which provides a light-weight timing
mechanism; the next check is scheduled after the prior check
completes. During a check, pool members that have not been used for
longer than =max_age= will be removed until the pool size reaches
=init_count=.

The default value for =cull_interval= is ={0, min}=, which disables
stale member checking entirely. The =max_age= parameter has the same
default value, which will cause any members beyond =init_count= to be
removed if scheduled culling is enabled.

**** Retry behavior when members do not start

If there are no free members, but the pool size is less than
=max_count=, pooler will attempt to add a new member to the pool to
satisfy a =take_member= request. By default, pooler tries a single
time to add a new member and will return =error_no_members= if this
fails. You can increase the number of retries by specifying a value
for the =add_member_retry= configuration parameter.

*** Using pooler

Here's an example session:

#+BEGIN_SRC erlang
application:start(pooler).
P = pooler:take_member(),
% use P
pooler:return_member(P, ok).
#+END_SRC

Once started, the main interaction you will have with pooler is
through two functions, =take_member/0= (or =take_member/1=) and
=return_member/2= (or =return_member/1=).

Call =pooler:take_member()= to obtain a member from a randomly
selected pool. When you are done with it, return it to the pool using
=pooler:return_member(Pid, ok)=. If you encountered an error using
the member, you can pass =fail= as the second argument. In this case,
pooler will permanently remove that member from the pool and start a
new member to replace it. If your process is short lived, you can
omit the call to =return_member=. In this case, pooler will detect
the normal exit of the consumer and reclaim the member.
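As noted in the Pool Configuration section, the same pool description
can also be supplied at runtime with =application:set_env/3= before
pooler is started. A minimal sketch follows; the =start_pooler/0=
wrapper and the single =riakc_pb_socket= pool are illustrative only.

#+BEGIN_SRC erlang
%% Illustrative only: configure one pool programmatically, then
%% start pooler. The proplist has the same shape as a =pools= entry
%% in the config file shown earlier.
start_pooler() ->
    Pools = [[{name, "rc8081"},
              {max_count, 5},
              {init_count, 2},
              {start_mfa,
               {riakc_pb_socket, start_link, ["localhost", 8081]}}]],
    ok = application:set_env(pooler, pools, Pools),
    application:start(pooler).
#+END_SRC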
*** pooler as an included application

In order for pooler to start properly, all applications required to
start a pool member must be started before pooler starts. Since
pooler does not depend on the member applications, and since OTP may
parallelize application starts for applications with no detectable
dependencies, this can cause problems. One way to work around this is
to specify pooler as an included application in your app. This means
you will start pooler's top-level supervisor from your app's
top-level supervisor and can regain control over the application
start order. To do this, you would remove pooler from the list of
applications in =your_app.app= and add it to the
=included_applications= key:

#+BEGIN_SRC erlang
{application, your_app,
 [
  {description, "Your App"},
  {vsn, "0.1"},
  {registered, []},
  {applications, [kernel, stdlib, crypto, mod_xyz]},
  {included_applications, [pooler]},
  {mod, {your_app, []}}
 ]}.
#+END_SRC

Then start pooler's top-level supervisor with something like the
following in your app's top-level supervisor:

#+BEGIN_SRC erlang
PoolerSup = {pooler_sup, {pooler_sup, start_link, []},
             permanent, infinity, supervisor, [pooler_sup]},
{ok, {{one_for_one, 5, 10}, [PoolerSup]}}.
#+END_SRC

*** Metrics

You can enable metrics collection by adding a =metrics_module= entry
to pooler's app config. Metrics are disabled by default. The module
specified must have an API matching that of the [[https://github.com/boundary/folsom/blob/master/src/folsom_metrics.erl][folsom_metrics]] module
in [[https://github.com/boundary/folsom][folsom]] (to use folsom, specify ={metrics_module, folsom_metrics}=
and ensure that folsom is in your code path and has been started).

When enabled, the following metrics will be tracked:

| Metric Label                  | Description                                                                  |
|-------------------------------+------------------------------------------------------------------------------|
| pooler.POOL_NAME.take_rate    | meter recording the rate at which take_member is called                     |
| pooler.error_no_members_count | counter indicating how many times take_member has returned error_no_members |
| pooler.killed_free_count      | counter of how many members have been killed while in the free state        |
| pooler.killed_in_use_count    | counter of how many members have been killed while in the in_use state      |
| pooler.event                  | history of various error conditions                                          |

*** Demo Quick Start

1. Clone the repo:
   #+BEGIN_EXAMPLE
   git clone https://github.com/seth/pooler.git
   #+END_EXAMPLE
2. Build and run tests:
   #+BEGIN_EXAMPLE
   cd pooler; make && make test
   #+END_EXAMPLE
3. Start a demo:
   #+BEGIN_EXAMPLE
   erl -pa .eunit ebin -config demo
   Eshell V5.8.4 (abort with ^G)
   1> application:start(pooler).
   ok
   2> M = pooler:take_member().
   <0.49.0>
   3> pooled_gs:get_id(M).
   {"p2",#Ref<0.0.0.47>}
   4> M2 = pooler:take_member().
   <0.48.0>
   5> pooled_gs:get_id(M2).
   {"p2",#Ref<0.0.0.45>}
   6> pooler:return_member(M).
   ok
   7> pooler:return_member(M2).
   ok
   #+END_EXAMPLE

** License

Pooler is licensed under the Apache License Version 2.0. See the
[[file:LICENSE][LICENSE]] file for details.

#+OPTIONS: ^:{}