rpc: limited multithread support for svc_nl
The rpc(3) framework itself was not designed with multithreading in mind, but
we can actually achieve some parallelism without modifying the library and the
framework. This transport allows processing RPCs in threads, with
some hacks on the application side (documented in the code). We make
only one method reentrant - SVC_REPLY(). Reading and parsing of incoming
calls is still done synchronously, but the actual processing of the calls
can be offloaded to a thread, and once finished, the thread can safely
execute svc_sendreply() and the reply will be sent with the correct xid.
Differential Revision: https://reviews.freebsd.org/D48569
rpcsec_tls: cleanup the rpctls_syscall()
With all the recent changes we no longer need the extra argument that
specifies what exactly the syscall does, nor do we need a copyout-able
pointer - just a pointer-sized integer.
Reviewed by: rmacklem
Differential Revision: https://reviews.freebsd.org/D48649
rpc.tlsservd: provide parallelism with help of pthread(3)
At normal NFS server runtime there is not much RPC traffic from the kernel to
rpc.tlsservd. But as Rick rmacklem@ explained, the notion of multiple
workers exists to handle the situation when a server that has several
hundred or thousand TLS/TCP connections from clients reboots. Once it
comes back up, all the clients make TCP connections and do TLS handshakes.
So clean up the remnants of the workers that were left after the conversion to
the RPC over netlink(4) transport, and restore the desired parallelism with the
help of pthread(3).
We process the TLS handshakes in separate threads, one per
handshake. The number of concurrent threads is capped at hw.ncpu / 2, but this
can be overridden with -N.
Differential Revision: https://reviews.freebsd.org/D48570
rpc.tlsservd/rpc.tlsclntd: rename 'refno' field to 'cookie'
Since in the kernel and in the API this is now called a socket cookie.
No functional change.
Reviewed by: rmacklem
Differential Revision: https://reviews.freebsd.org/D48777
rpc.tlsservd: followup of API refactoring in the previous commit
Userland counterpart of the previous commit.
Reviewed by: rmacklem
Differential Revision: https://reviews.freebsd.org/D48567
rpc.tlsservd: run netlink(4) service and use new API to get sockets
Userland counterpart of the previous commit.
Note: this change intentionally ignores the aspect of multiple workers of
rpc.tlsservd(8). This will also be addressed in a future commit.
Reviewed by: rmacklem
Differential Revision: https://reviews.freebsd.org/D48562
rpcsec_tls/server: API refactoring between kernel and rpc.tlsservd(8)
Now that the conversion of rpcsec_tls/client + rpc.tlsclntd(8) to the
netlink(4) socket as the RPC transport has started using the kernel socket
pointer as a reliable cookie, we can shave off quite a lot of complexity. We
will utilize the same kernel-generated cookie in all RPCs, and the need for
the daemon-generated cookie in the form of timestamp+sequence vanishes.
We also stop passing the notion of 'process position' from userland to the
kernel. The TLS handshake parallelism is to be reimplemented in the daemon
without any awareness of it in the kernel.
This time bump the RPC version.
Reviewed by: rmacklem
Differential Revision: https://reviews.freebsd.org/D48566
rpc.tlsclntd: followup of API refactoring in the previous commit
Userland counterpart of the previous commit.
Reviewed by: rmacklem
Differential Revision: https://reviews.freebsd.org/D48565
rpcsec_tls/server: use netlink RPC client to talk to rpc.tlsservd(8)
The server part just repeats what was done for the client. We trust
the parallelism of clnt_nl and pass the socket cookie to the daemon, which
we then expect to see back in rpctls_syscall(RPCTLS_SYSC_SRVSOCKET), where it
is used to find the corresponding socket+xprt. We reuse the same database
that is used for clients.
Note 1: this will be optimized further in a separate commit. This one is
made intentionally minimal, to ease the review process.
Note 2: this change intentionally ignores the aspect of multiple workers of
rpc.tlsservd(8). This will also be addressed in a future commit.
Reviewed by: rmacklem
Differential Revision: https://reviews.freebsd.org/D48561
rpcsec_tls/client: API refactoring between kernel and rpc.tlsclntd(8)
Now that the conversion of rpcsec_tls/client + rpc.tlsclntd(8) to the
netlink(4) socket as the RPC transport has started using the kernel socket
pointer as a reliable cookie, we can shave off quite a lot of complexity. We
will utilize the same kernel-generated cookie in all RPCs, and the need for
the daemon-generated cookie in the form of timestamp+sequence vanishes.
In clnt_vc.c we no longer need to store the userland cookie, but we
still need to observe the TLS life cycle of the client. We observe the
RPCTLS_INHANDSHAKE state, which lives for the short time when the socket has
already been fetched by the daemon with the syscall, but the RPC call is
still waiting for the reply from the daemon.
This time bump the RPC version.
Reviewed by: rmacklem
Differential Revision: https://reviews.freebsd.org/D48564
rpc.tlsclntd: run netlink(4) service and use new API to get sockets
Userland counterpart of the previous commit.
Reviewed by: rmacklem
Differential Revision: https://reviews.freebsd.org/D48560
nfs: set vnet(9) context in mountnfs()
This seems to be the right place to set it once and for all, instead of
setting it deep in kgssapi/rpctls/etc leaf functions.
Reviewed by: rmacklem
Differential Revision: https://reviews.freebsd.org/D48558
rpcsec_tls/client: use netlink RPC client to talk to rpc.tlsclntd(8)
In addition to using a netlink(4) socket instead of a unix(4) one to pass
rpctlscd_* RPC commands to rpc.tlsclntd(8), the logic of passing the file
descriptor is also changed. Since clnt_nl provides us with all the needed
parallelism and waits on individual RPC xids, we don't need to store the
socket in a global variable and serialize all communication with the daemon.
Instead, we augment the rpctlscd_connect arguments with a cookie that is
basically a pointer to the socket, which we keep for the daemon. While
sleeping on the request, we maintain a database of all sockets
associated with the rpctlscd_connect RPCs that we have sent to userland. The
daemon will then send the cookie back in the
rpctls_syscall(RPCTLS_SYSC_CLSOCKET) argument, and we will find and return
the socket for this upcall.
This will be optimized further in a separate commit that will also touch
clnt_vc.c and other krpc files. This commit is intentionally made minimal,
so that it is easier to understand what changes with the netlink(4) transport.
[2 lines not shown]
nlm: set vnet(9) context in the NLM syscall
With the kernel RPC binding moving to the netlink transport, all clients need
to have a proper vnet(9) context set. This change is unlikely to make NLM
properly virtualized, but at least it will not panic on the default VNET
when the kernel is compiled with VIMAGE.
Reviewed by: rmacklem
Differential Revision: https://reviews.freebsd.org/D48555
krpc: let the kernel talk to the rpcbind(8) service via netlink(4)
At the moment the only kernel service that wants to register an RPC binding
with rpcbind(8) is the kernel NLM.
Kernel counterpart of the previous commit.
Reviewed by: rmacklem
Differential Revision: https://reviews.freebsd.org/D48557
rpcbind: run netlink(4) service
To register RPC bindings coming from the kernel. At the moment, we expect
such bindings only from the kernel NLM service.
Reviewed by: rmacklem
Differential Revision: https://reviews.freebsd.org/D48556
gssd: use netlink(4) RPC service to talk to kernel GSS
Userland counterpart of the previous commit.
Reviewed by: rmacklem
Differential Revision: https://reviews.freebsd.org/D48553
ffs: fix build with GEOM_LABEL and without FFS, e.g. MINIMAL
The root of the vfs.ffs sysctl tree was declared in ffs_alloc.c. Commit
1111a44301da started to use the root in ffs_subr.c. However, ffs_subr.c
may be included in kernels that do not have FFS in their config. Such a
kernel won't link after 1111a44301da.
Fixes: 1111a44301da39d7b7459c784230e1405e8980f8
libc/rpc: add userland side RPC server over netlink(4)
To be used by NFS-related daemons that provide RPC services to the kernel.
Some implementation details are documented inside the new svc_nl.c.
Reviewed by: rmacklem
Differential Revision: https://reviews.freebsd.org/D48550
xdr: provide x_putmbuf method for kernel XDR
Implement it for the mbuf-based XDR. Right now all existing consumers
use only mbuf-based XDR; however, future changes will require appending
data stored in an mbuf to a memory-buffer-based XDR.
Reviewed by: rmacklem
Differential Revision: https://reviews.freebsd.org/D48547
krpc: add kernel side client over netlink(4)
This shall be the official transport to connect kernel-side RPC clients
to userland-side RPC servers. All current kernel-side clients that
hijack unix(4) sockets will be converted to it. Some implementation
details are available inside the new clnt_nl.c.
The complementary RPC server over netlink(4) is coming in the next commit.
Reviewed by: rmacklem
Differential Revision: https://reviews.freebsd.org/D48549
genl: add RPC parser that dumps what sys/rpc/clnt_nl.c sends
Use a separate file for the RPC parser, as it may potentially get bigger,
and also to avoid polluting genl.c with RPC header includes.
Reviewed by: rmacklem
Differential Revision: https://reviews.freebsd.org/D48551
xdr: provide x_putmbuf method for xdrmem
It has slightly different semantics than the same method for xdrmbuf: the
mbuf data is copied, and the caller is responsible for keeping or freeing
the original mbuf.
Reviewed by: rmacklem, markj
Differential Revision: https://reviews.freebsd.org/D48548
mtree: TESTSBASE directory always starts with a /
Remove the extra forward slash ("/"), otherwise the mtree specification
file will have a double slash and will not be parsed by makefs when
attempting to build NanoBSD with NO_ROOT privileges.
Fixes: 07670b30fa43 ("Create /usr/tests *.debug file directory hierarchy")
Reviewed by: emaste
Approved by: emaste (mentor)
Differential Revision: https://reviews.freebsd.org/D47722
(cherry picked from commit 01ff67f4bdf5959a719a6511a855f6a60c0e3a93)