Commit messages
|
Note: template_lvbitx.{cpp,hpp} need to be excluded from the list of
files that clang-format is applied to.
|
Add a new method to io_service::send_io that checks whether the
destination is ready for data, making it possible to poll send_io
rather than block while waiting for flow control credits.
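
For illustration, a caller might poll like the sketch below. The method
name dest_ready() is an assumption (the commit does not name it here);
the get/release calls follow the existing send_io_if pattern.

    #include <uhdlib/transport/io_service.hpp>
    #include <cstddef>
    #include <utility>

    using namespace uhd::transport;

    void send_without_blocking(send_io_if::sptr send_io, const size_t frame_size)
    {
        // dest_ready() is an assumed name for the new polling method;
        // a zero timeout makes the readiness check non-blocking.
        while (!send_io->dest_ready(frame_size, 0)) {
            // No flow control credits yet: the caller can do useful
            // work here instead of blocking inside get_send_buff().
        }
        frame_buff::uptr buff = send_io->get_send_buff(0);
        // ... fill the buffer and set the packet size ...
        send_io->release_send_buff(std::move(buff));
    }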
|
- Implement I/O service detach link methods
- The I/O service manager instantiates new I/O services or connects
  links to existing I/O services based on options provided by the user
  in stream_args.
- Add a streamer ID parameter to the transport creation methods so that
  the I/O service manager can group transports appropriately when using
  offload threads (see the sketch below).
- Change X300 and MPMD to use the I/O service manager to connect links
  to I/O services.
- There is now a single I/O service manager per rfnoc_graph (and it is
  also stored in the graph).
- The I/O service manager now also knows the device args for the
  rfnoc_graph it was created with, and can make decisions based upon
  those (e.g., use a specific I/O service for DPDK, share cores between
  streamers, etc.).
- The I/O service manager does not gain any decision logic in this
  commit, though.
- The MB ifaces for mpmd and x300 now access this global I/O service
  manager.
- Add configuration of link parameters with overrides.
Co-Authored-By: Martin Braun <martin.braun@ettus.com>
Co-Authored-By: Aaron Rossetto <aaron.rossetto@ni.com>
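
The streamer-ID grouping could look roughly like the following minimal
sketch, assuming illustrative names throughout (the real manager also
weighs the device args, e.g. DPDK and core-sharing options, when
deciding):

    #include <map>
    #include <memory>
    #include <string>

    // Illustrative stand-in for an I/O service handle.
    struct io_service
    {
        using sptr = std::shared_ptr<io_service>;
    };

    class io_service_mgr_model
    {
    public:
        // Reuse the I/O service already created for this streamer, or
        // instantiate a new one on first use. Keying by streamer ID is
        // what lets all transports of one streamer share one offload
        // thread.
        io_service::sptr connect_link(const std::string& streamer_id)
        {
            auto it = _services.find(streamer_id);
            if (it != _services.end()) {
                return it->second; // existing service: attach to it
            }
            auto io_srv            = std::make_shared<io_service>();
            _services[streamer_id] = io_srv;
            return io_srv; // new service for a new streamer
        }

    private:
        std::map<std::string, io_service::sptr> _services;
    };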
|
Change transports to reserve the number of frame buffers they actually
need from the I/O service. Previously, some I/O service clients reserved
0 buffers and shared frame buffers with another client, relying on the
fact that the two clients never use the link simultaneously. That works
with the inline_io_service, but not with a multithreaded I/O service,
which queues buffers for clients before they are requested.
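
As a toy illustration of that constraint (illustrative names, not UHD
code), a multithreaded service distributes a link's buffers strictly by
reservation, so a client that still reserved 0 frames would starve:

    #include <cstdio>
    #include <queue>
    #include <vector>

    struct client_model
    {
        std::size_t reserved_frames;
        std::queue<int> prequeued; // buffers queued before recv() runs
    };

    int main()
    {
        std::vector<int> frame_pool(32); // the link's buffer pool

        client_model data_client{24, {}}; // reserves what it really uses
        client_model ctrl_client{8, {}};  // must no longer reserve 0

        // Buffers are handed out up front, strictly by reservation:
        // 0 frames reserved would mean 0 frames delivered.
        std::size_t next = 0;
        for (client_model* c : {&data_client, &ctrl_client}) {
            for (std::size_t i = 0; i < c->reserved_frames; i++) {
                c->prequeued.push(frame_pool.at(next++));
            }
        }
        std::printf("data=%zu ctrl=%zu\n",
            data_client.prequeued.size(), ctrl_client.prequeued.size());
        return 0;
    }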
|
chdr_ctrl_xport is a dumb-pipe transport for RFNoC control transactions
and management frames.
Also remove the I/O service's check on num_recv_frames and
num_send_frames: the transports may request additional virtual channels,
so the send_io_if and recv_io_if may not reserve additional frames, as
they are shared with a previously-allocated instance.
Note: this uses a mutex to force sequential access to the
chdr_ctrl_xport. This is expected to go away once the multithreaded
xport is done.
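
The stop-gap serialization can be pictured as in the sketch below; the
class and method names are illustrative stand-ins, not the actual
chdr_ctrl_xport interface.

    #include <mutex>

    class chdr_ctrl_xport_model
    {
    public:
        void send_packet()
        {
            std::lock_guard<std::mutex> lock(_mutex);
            // ... hand the frame to the underlying link ...
        }

        void recv_packet()
        {
            std::lock_guard<std::mutex> lock(_mutex);
            // ... pull the next frame from the underlying link ...
        }

    private:
        // One mutex serializes all callers; this disappears once the
        // multithreaded xport replaces this scheme.
        std::mutex _mutex;
    };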
|
The inline_io_service connects transports to links without any worker
threads. Send operations go directly to the link, and recv performs the
I/O as part of the get_recv_buffer() call.
The inline_io_service also supports muxed links natively, and the
receive mux is likewise entirely inline, with no separate thread. A
queue is created for each client of the mux, and packets are processed
as they come in. If a packet belongs to a different client, it is queued
up for later. When that client attempts to recv(), its queue is checked
first, and an attempt to receive from the link happens ONLY if no packet
was found there.
Also add a mock transport to test the I/O service APIs. The tests cover
I/O service construction and some basic packet transmission. One case
also uses a single link that is shared between the send and recv
transports; that link is muxed between two compatible but different
transports.
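
The queue-first receive path can be sketched as follows; all types are
illustrative, and link_recv()/classify() are stand-in declarations for
the real link access and packet routing.

    #include <queue>

    struct frame_buff_model { };

    struct mux_client
    {
        std::queue<frame_buff_model*> queue; // packets parked for later
    };

    frame_buff_model* link_recv();             // next packet, or nullptr
    mux_client& classify(frame_buff_model* b); // which client owns it

    frame_buff_model* mux_recv(mux_client& me)
    {
        // Check this client's queue first ...
        if (!me.queue.empty()) {
            frame_buff_model* buff = me.queue.front();
            me.queue.pop();
            return buff;
        }
        // ... and touch the link ONLY if no packet was already queued.
        while (frame_buff_model* buff = link_recv()) {
            mux_client& dest = classify(buff);
            if (&dest == &me) {
                return buff; // ours: hand it straight up
            }
            dest.queue.push(buff); // another client's: park it for later
        }
        return nullptr; // nothing available right now
    }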