In Syn, every node is the authority for the processes that run on it: all register / unregister / join / leave operations for a specific process are routed to the `syn` scope process (registry or pg) running on that process' node.
It is then the responsibility of this node to communicate the operation's results and propagate them to the other nodes.
This serializes operations on a per-node basis and preserves per-node consistency.
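As a quick illustration (a minimal sketch using Syn's public API; the `users` Scope and the registration name are arbitrary examples), a call such as `syn:register/3` may be issued from any node, but it is handled by the registry scope process on the node where `Pid` runs:

```erlang
%% The node must first have been added to the scope (Scopes are
%% described below); `users` is an example scope name.
ok = syn:add_node_to_scopes([users]),

%% This registration is routed to the registry scope process on Pid's
%% own node, which applies it locally and then propagates the result
%% to the other nodes.
Pid = spawn(fun() -> receive stop -> ok end end),
ok = syn:register(users, "my-process", Pid),

%% With register/3 the metadata defaults to `undefined`.
{Pid, undefined} = syn:lookup(users, "my-process").
```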
Syn implements Scopes, which are a way to create namespaced, logical overlay networks running on top of the Erlang distribution cluster. Nodes that belong to the same Scope form a "sub-cluster": they synchronize data between themselves, and themselves only.
Note that all of the data related to a Scope is replicated to every node of its sub-cluster, so that every node has quick read access to it.
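A node can be added to Scopes at runtime with `add_node_to_scopes/1`, or declaratively via the `syn` application environment so that the sub-cluster is joined at application start (a sketch; the scope name is illustrative):

```erlang
%% sys.config sketch: this node joins the `users` scope when the syn
%% application starts. The scope's data is then replicated to this node
%% and served from local ETS tables.
[
  {syn, [
    {scopes, [users]}
  ]}
].
```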
When you add a node to a scope (see `add_node_to_scopes/1`), e.g. `users`, the following happens:

- 2 new `gen_server` processes get created (aka "scope processes"), in the given example named `syn_registry_users` (for registry) and `syn_pg_users` (for process groups).
- 4 new ETS tables get created:
  - `syn_registry_by_name_users` (of type `set`).
  - `syn_registry_by_pid_users` (of type `bag`).
  - `syn_pg_by_name_users` (of type `ordered_set`).
  - `syn_pg_by_pid_users` (of type `ordered_set`).

  These tables are owned by the `syn_backbone` process, so that if the related scope processes were to crash, the data is not lost and the scope processes can easily recover.
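From a shell on a node that has joined the `users` scope, this layout can be inspected directly (a sketch; table names follow the example above, and it assumes `syn_backbone` is locally registered under its module name):

```erlang
%% The scope tables belong to syn_backbone, not to the scope processes,
%% so a crashing scope process does not take its data down with it.
true = ets:info(syn_registry_by_name_users, owner) =:= whereis(syn_backbone),

%% Table types, as listed above.
set         = ets:info(syn_registry_by_name_users, type),
bag         = ets:info(syn_registry_by_pid_users, type),
ordered_set = ets:info(syn_pg_by_name_users, type),
ordered_set = ets:info(syn_pg_by_pid_users, type).
```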
- The 2 newly created scope processes each join a sub-cluster (one for registry, one for process groups) with the other processes in the Erlang distributed cluster that handle the same Scope (i.e. that have the same name). To do so:
  - They send a `{'3.0', discover, self()}` message to every process with the same name running on all the nodes in the Erlang cluster.
  - The remote scope processes that receive it reply with a `{'3.0', ack_sync, self(), LocalData}` message.
  - Upon receiving the `ack_sync` message:
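The exchange described so far can be sketched as a bare-bones `gen_server` (a simplified illustration, not Syn's actual implementation; `local_data/1` is a hypothetical placeholder for whatever data a scope process would hand over):

```erlang
-module(scope_discovery_sketch).
-behaviour(gen_server).

-export([start_link/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

%% Start a scope process registered under ScopeProcessName,
%% e.g. syn_registry_users.
start_link(ScopeProcessName) ->
    gen_server:start_link({local, ScopeProcessName}, ?MODULE, ScopeProcessName, []).

init(ScopeProcessName) ->
    %% Send discover to the same-named process on every other node.
    [erlang:send({ScopeProcessName, Node}, {'3.0', discover, self()})
     || Node <- nodes()],
    {ok, #{name => ScopeProcessName, peers => #{}}}.

%% A peer is discovering us: reply with our local data.
handle_info({'3.0', discover, RemotePid}, State) ->
    RemotePid ! {'3.0', ack_sync, self(), local_data(State)},
    {noreply, State};
%% A peer acknowledged our discover: store its data.
handle_info({'3.0', ack_sync, RemotePid, RemoteData}, #{peers := Peers} = State) ->
    {noreply, State#{peers := Peers#{node(RemotePid) => RemoteData}}}.

handle_call(_Msg, _From, State) -> {reply, ok, State}.
handle_cast(_Msg, State) -> {noreply, State}.

%% Hypothetical placeholder for the data a scope process syncs.
local_data(_State) -> [].
```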