| Commit message (Collapse) | Author | Age | Files | Lines |
| |
On back-edges, no properties are forwarded, but properties must be
consistent after property resolution. This breaks when the source of
a back-edge has an edge property which the destination block does
not. Consider the following graph:
DDC -> Replay -> DDC
where both instances of 'DDC' refer to the same block. Now, assume the
first edge is declared a back-edge (in principle, it shouldn't matter).
The DDC block has an edge property `samp_rate` which the Replay block
does not. Therefore, it can't forward this edge property to the Replay
block's input edge property list.
In the consistency check code, we don't check for the existence of edge
properties, because it is assumed edge properties were either forwarded or
aligned in some other manner. This leads to a property lookup failure.
With this fix, we skip the consistency check for edge properties which
don't exist on the destination node (see the toy model below). This is
safe because the destination block cannot have a property resolver
defined for properties it does not know about. This means the
destination block can either:
- Drop the property. In this case, there is no value in checking
consistency. Even if we could forward edge properties on back-edges,
they would always have the same value.
- Forward the property. In that case, the consistency check would happen
elsewhere in the graph where there's no back-edge.
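A toy model of the skip logic described above (this is not the actual UHD
resolver code; edge property lists are represented as simple maps here,
and the function name is made up):
```
#include <map>
#include <string>

// Edge property lists modelled as name -> resolved value.
using prop_list = std::map<std::string, double>;

// Returns true if all properties that exist on *both* sides of the
// back-edge are consistent. Properties missing on the destination are
// skipped, per the reasoning above.
bool back_edge_props_consistent(const prop_list& src, const prop_list& dst)
{
    for (const auto& [name, value] : src) {
        const auto it = dst.find(name);
        if (it == dst.end()) {
            continue; // e.g. `samp_rate` on DDC -> Replay: skip, don't fail
        }
        if (it->second != value) {
            return false;
        }
    }
    return true;
}
```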
|
| |
|
| |
|
|
|
|
|
|
| |
The ops pending for each operation were stored implicitly in the data
structure. This adds them explicitly, which is useful for debugging
and packet dissection.
|
| |
|
|
|
|
| |
Support DPDK versions 19.11 and 20.11
|
|
|
|
|
|
|
|
|
|
|
|
| |
This class has a member _num_drops, which can be read out using the
get_num_drops() API call. However, when dropping packets, this counter
was not incremented, which is fixed now.
This also includes a very minor optimization, going from 2 map<> lookups
to 1 lookup (each is O(log N)). Since there are usually only a small
two-digit number of endpoints connected to the async message receiver,
this change is not expected to yield major improvements, but the lookup
*is* in a hot loop.
|
|
|
|
|
|
| |
The keys for the table of frequency ranges for each VCO value count up
consecutively, but key "1" appeared twice while "2" was missing. This
is fixed here.
|
|
|
|
|
| |
Newer versions of Boost emit deprecation warnings for these includes.
However, they're not actually used any more, so they can go.
|
| |
The default block controller is used whenever no other block controller
is registered. It previously defaulted to dropping both property
propagation and actions.
When a custom block is injected into a graph, for example:
Radio -> DDC -> Custom Block -> Rx Streamer
this default behaviour prevents the Rx Streamer from sending actions
(like stream commands) upstream, and it also blocks MTU propagation (or
any other property's propagation).
The new default behaviour is ONE_TO_ONE, meaning that actions and
properties on input channel N get forwarded to output channel N. In the
absence of an actual block controller, this is a more useful default than
setting the propagation to DROP for both actions and properties. Most
blocks that pass through data, or do some simple processing, will now
work in the absence of a block controller.
The new disadvantage is that blocks which would modify properties such as
sampling rate, scaling, or MTU will no longer work properly in the
absence of a block controller.
However, the recommended practice is not to operate without a block
controller anyway. For the cases where no block controller is present,
ONE_TO_ONE is considered the generally more useful default.
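For comparison, a block that does ship its own controller can still pick
these policies explicitly. A minimal sketch assuming the UHD 4.x
noc_block_base/node_t API (the class name is made up):
```
#include <uhd/rfnoc/noc_block_base.hpp>
#include <utility>

class my_block_control : public uhd::rfnoc::noc_block_base
{
public:
    my_block_control(make_args_ptr make_args)
        : noc_block_base(std::move(make_args))
    {
        // Actions and properties arriving on input port N go to output port N,
        // i.e., the same behaviour the default block controller now uses.
        set_action_forwarding_policy(forwarding_policy_t::ONE_TO_ONE);
        set_prop_forwarding_policy(forwarding_policy_t::ONE_TO_ONE);
    }
};
```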
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We have noticed that on 1 GbE connections, MTU discovery can become
unreliable. Since we now use the MTU directly for deriving spp and other
values, a correct MTU is important.
Because we don't have a way of knowing if MTU discovery worked or not,
we add some heuristics in the form of a plausibility check. For now, the
only rule in this check is: if the detected MTU is a bit larger than
1472 bytes, we coerce it down to 1472, because this is such a standard
value (most 1 GbE interfaces default to an IPv4 MTU of 1500 bytes).
For the cases where the interface MTU is set to be between 1500 and 1528
bytes, this would cause a very minor performance loss. We accept this
performance loss as it is small, and those cases are very rare. MTUs are
usually 1500 bytes, or >= 8000 bytes for high-speed links using jumbo
frames.
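A minimal sketch of that rule (constant and function names are
illustrative, not the actual UHD symbols); the 28-byte window corresponds
to the interface MTUs between 1500 and 1528 bytes mentioned above:
```
#include <cstddef>

// 1500-byte IPv4 MTU minus 20 bytes IP header and 8 bytes UDP header:
constexpr size_t COMMON_1GBE_PAYLOAD = 1472;
// Detected values up to this far above 1472 get coerced down:
constexpr size_t COERCE_WINDOW = 28;

size_t plausibility_check(const size_t detected_mtu)
{
    if (detected_mtu > COMMON_1GBE_PAYLOAD
        && detected_mtu <= COMMON_1GBE_PAYLOAD + COERCE_WINDOW) {
        return COMMON_1GBE_PAYLOAD; // assume discovery overshot on a 1 GbE link
    }
    return detected_mtu;
}
```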
|
|
|
|
|
|
|
| |
This constant was generally harmful, since it was only correct under
certain circumstances (64-bit CHDR with timestamps). The X3x0 code was
the last place it was being used, and we remove it without a substitute
because it was not doing anything useful there.
|
| |
These two values were being mixed up in the code. To summarize:
- The MTU is the max CHDR packet size, including header & timestamp.
- The max payload is the total number of bytes of regular payload plus
metadata that can fit into a CHDR packet. It is strictly
smaller than the MTU. For example, for 64-bit CHDR widths, if
a timestamp is desired, the max payload is 16 bytes smaller than
the MTU.
The other issue was that we were using a magic constant (DEFAULT_SPP)
which was causing conflicts with MTUs and max payloads.
This constant was harmful in multiple ways:
- The explanatory comment was incorrect (it stated it would cap packets
to 1500 bytes, which it didn't)
- It imposed arbitrary, hardcoded values that interfered with 'spp
discovery', i.e., the ability to derive a good spp value from MTUs
- The current value capped packet sizes to 8000-byte CHDR packets, even
when we wanted to use bigger ones
This patch changes the following:
- noc_block_base now has improved docs for MTU, and additional APIs
(get_max_payload_size(), get_chdr_hdr_len()) which return the
current max payload size given MTU and CHDR width, and the CHDR
header length, respectively (see the sketch after this list).
- The internally used graph nodes for TX and RX streamers are also
equipped with the same two new API calls.
- The radio, siggen, and replay blocks were all doing different
calculations for their spp/ipp values. Now, they all use the max
payload value to calculate spp/ipp. Unit tests were adapted
accordingly. Usage of DEFAULT_SPP was removed.
- The replay block used a hardcoded 16 bytes for header lengths, which
was replaced by get_chdr_hdr_len().
- The TX and RX streamers were discarding the MTU value and using the
max payload size as the MTU, which then propagated throughout the
graph. Now, both values are stored and can be used where appropriate.
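A toy illustration of the relationship between MTU, CHDR header length,
and max payload as described above (not the actual UHD implementation;
the 64-bit CHDR case from the summary is assumed):
```
#include <cstddef>

// For a 64-bit CHDR width, header and timestamp are one 8-byte word each.
size_t max_payload_size(const size_t mtu, const bool has_timestamp)
{
    const size_t chdr_hdr_len = 8;
    const size_t ts_len       = has_timestamp ? 8 : 0;
    return mtu - chdr_hdr_len - ts_len; // 16 bytes below the MTU with timestamp
}

// spp for an sc16 (4 bytes/sample) stream then derives directly from it:
// spp = max_payload_size(mtu, true) / 4;
```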
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The replay block is more like the radio block than like a FIFO. In
particular, consider this flow graph:
Replay -> DDC -> Replay
Imagine you're using the replay block to test the DDC block with
prerecorded data. If we treated the Replay block like a FIFO, then we'd
have a loop in the graph (which is already wrong). And if we used the DDC
to resample, then the input and output sample rates of the Replay block
would mismatch. That is a legal way to use the Replay block, but not
possible if we treat the graph like a loop.
|
| |
When connect_through_blocks() was called on blocks within a single
chain, there was a bug where the chain was incorrectly cropped. In
a standard FPGA image, say one were to use this API call to connect the
radio to the DDC. It would generate a chain of blocks hanging off the
radio like this:
Radio -> DDC -> SEP
What the code should do, and what this fix provides, is that the chain
gets cropped after the DDC, to look like this:
Radio -> DDC
With the previous bug, the code would assume the chain had a dangling
edge and incorrectly throw an exception.
Note that this bug would not appear when the source and destination
blocks are on separate chains (i.e., both have an SEP in their chain).
This patch includes minor logging and comment improvements around the
offending lines of code.
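For reference, a minimal usage sketch of the API in question (device args
and block IDs are examples for a typical image, not mandated values):
```
#include <uhd/rfnoc_graph.hpp>

auto graph = uhd::rfnoc::rfnoc_graph::make("addr=192.168.10.2");
// Connect Radio#0:0 to DDC#0:0 through any blocks in between; with this fix,
// the discovered chain is cropped after the DDC instead of throwing.
graph->connect_through_blocks(
    uhd::rfnoc::block_id_t("0/Radio#0"), 0,
    uhd::rfnoc::block_id_t("0/DDC#0"), 0);
```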
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The async message handler and the async message validator would
erroneously compare channel numbers for RX async messages with the
number of valid TX channels. On TwinRX, where there are zero TX
channels, this would always fail. Elsewhere in the code, the comparisons
for TX and RX channels mixed up input and output ports.
The second issue is that the comparison used was a "greater than" rather
than a "greater than or equal".
The effect of these two bugs was that, potentially, we could have
accepted async messages for an invalid port N, where N is the number of
valid ports of this block, and that for TwinRX/X300 users, async
messages on channel 1 would not get accepted (they would, however, get
accepted for channel 0 because of the second issue). This includes
overrun handling, which was broken for channels 1 and 3 on an X300.
Another effect of the bug was that EPIDs for async messages weren't
always programmed correctly.
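A toy version of the corrected bounds check (names are illustrative, not
the actual UHD code): an RX async message on channel `chan` is validated
against the number of RX (output) ports, and ports are zero-indexed, so
rejection must use >=, not >.
```
#include <cstddef>

bool accept_rx_async_message(const size_t chan, const size_t num_rx_ports)
{
    // With the old '>' comparison, chan == num_rx_ports was wrongly accepted.
    return chan < num_rx_ports;
}
```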
|
|
|
|
|
|
|
| |
Getting the time from the mb_controller is slow, so try to get the time
from the Radio on the fast path first.
Signed-off-by: michael-west <michael.west@ettus.com>
|
|
|
|
|
|
| |
Add API calls to Radio control to get ticks and time.
Signed-off-by: michael-west <michael.west@ettus.com>
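A usage sketch of the new calls (the block ID is an example, and the exact
method names get_ticks_now()/get_time_now() are assumed from the commit
description):
```
#include <uhd/rfnoc/radio_control.hpp>
#include <uhd/rfnoc_graph.hpp>

// `graph` is an existing uhd::rfnoc::rfnoc_graph::sptr
auto radio = graph->get_block<uhd::rfnoc::radio_control>(
    uhd::rfnoc::block_id_t("0/Radio#0"));
const auto ticks = radio->get_ticks_now(); // tick count, read via the radio
const auto now   = radio->get_time_now();  // same time as a uhd::time_spec_t
```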
|
|
|
|
|
|
|
|
|
|
|
| |
The checks must happen in this order:
- Check the transaction has the right number of hops, then read the hop
- Check the hop has the right number of operations (at least 2), then read
those ops
- Check the ops have the correct opcodes
The code was doing these checks in the wrong order; a sketch of the
corrected order follows. Thanks to GitHub user johnwstanford for pointing
this out.
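A toy model of the corrected order (types, names, and the expected opcodes
are illustrative): every container is checked *before* it is indexed into.
```
#include <uhd/exception.hpp>
#include <cstddef>
#include <vector>

struct op_t { int opcode; };
struct hop_t { std::vector<op_t> ops; };
struct transaction_t { std::vector<hop_t> hops; };

void validate(const transaction_t& xact, const size_t expected_hops)
{
    if (xact.hops.size() != expected_hops) {
        throw uhd::value_error("unexpected number of hops");
    }
    const hop_t& hop = xact.hops.front();
    if (hop.ops.size() < 2) {
        throw uhd::value_error("hop has too few operations");
    }
    if (hop.ops[0].opcode != 0 || hop.ops[1].opcode != 1) { // expected opcodes
        throw uhd::value_error("unexpected opcodes");
    }
}
```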
|
|
|
|
|
|
|
| |
This provides every block controller with a copy of its CHDR width.
Note: mock blocks always get configured with a 64-bit CHDR width, to
retain API compatibility.
|
|
|
|
|
|
|
|
|
|
| |
The times on the device can glitch if either the tick rate changes or
the number of active chains changes. This throws off the time if the
user gets streamers, changes the sample rate, or changes the tick rate
after synchronizing the time. This change re-synchronizes the times
automatically in those cases.
Signed-off-by: michael-west <michael.west@ettus.com>
|
| |
|
|
|
|
|
|
|
|
|
| |
This gets closer to what our hardware can actually support. See the
comments for further explanations.
This has the side-effect of patching an issue on X410 (using 200 MHz
images) where garbage samples would get injected (one per packet). It
is not, however, the final fix for that problem.
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
| |
This implements the GPIO API for X410 through get_gpio_attr and
set_gpio_attr. In ATR mode, which channel's ATR state is used is chosen
by the set_gpio_src call, setting e.g. DB0_RF0 for channel 0 or DB0_RF1
for channel 1. In manual mode, all 24 bits (for both ports) are set in
a single register write.
Although the front panel of the device has two ports, labelled GPIO0 and
GPIO1, this API exposes them as though they were a single 24-bit GPIO
port.
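A usage sketch via the multi_usrp API. The bank name "GPIO0", the pin
count, and the attribute values below are assumptions for illustration;
"DB0_RF0" is the source string mentioned above. Consult the X410 GPIO
documentation for the exact strings your image supports.
```
#include <uhd/usrp/multi_usrp.hpp>
#include <string>
#include <vector>

auto usrp = uhd::usrp::multi_usrp::make("type=x4xx");

// ATR mode: let daughterboard 0, channel 0 drive all pins of port GPIO0.
usrp->set_gpio_src("GPIO0", std::vector<std::string>(12, "DB0_RF0"));

// Manual mode: claim all 24 bits (both ports) and drive them directly.
usrp->set_gpio_attr("GPIO0", "CTRL", 0x000000, 0xFFFFFF); // 0 = manual mode
usrp->set_gpio_attr("GPIO0", "DDR", 0xFFFFFF, 0xFFFFFF);  // all outputs
usrp->set_gpio_attr("GPIO0", "OUT", 0xA5A5A5, 0xFFFFFF);  // drive a pattern
```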
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This removes some constants from UHD that were left over from RFNoC/UHD
3.x. They are unused.
rfnoc_rx_to_file had a commented-out section that was also UHD-3 only.
Note that rfnoc/constants.hpp is pretty bare now, and could be removed.
However, it is in the public header section, so we shall leave the used
constants where they are.
This requires fixing includes in mgmt_portal.cpp.
|
| |
|
|
|
|
|
|
|
| |
Refactors the GPIO ATR register addresses into a gpio_atr_offsets
structure. This makes it easier to support other devices with different
GPIO register layouts, and eliminates the use of macros (yay!)
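A toy illustration of the idea (field names and offsets are placeholders,
not the exact UHD layout): one struct per device describes where its
GPIO/ATR registers live, so no per-device macros are needed.
```
#include <cstdint>

struct gpio_atr_offsets
{
    uint32_t idle;        // ATR idle-state register
    uint32_t rx_only;     // ATR receive-only state register
    uint32_t tx_only;     // ATR transmit-only state register
    uint32_t full_duplex; // ATR full-duplex state register
    uint32_t ddr;         // data direction register
    uint32_t disabled;    // ATR-disable (manual GPIO) register
};

// A device hands its own layout to the common GPIO/ATR driver:
constexpr gpio_atr_offsets some_device_regs{0x00, 0x04, 0x08, 0x0C, 0x10, 0x14};
```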
|
|
|
|
|
|
|
|
|
| |
The I and Q were swapped in sine_tone, which caused confusion and made
the rotation of REG_CARTESIAN clockwise by default. This effectively
made the resulting frequency negative. This PR makes the I and Q order
consistent with RFNoC and fixes the direction of rotation so that a
positive value for REG_PHASE_INC (phase increment) results in a
counter-clockwise rotation, which yields a positive frequency.
|
|
|
|
| |
Thanks to Mait for pointing these out!
|
|
|
|
|
|
|
|
|
| |
In multiple places in the UHD code, we were doing the same calculation
for a wrapped frequency (wrapping it into the first Nyquist zone). This
math was using boost::math, too. Instead of editing every instance, we
create a new function, uhd::math::wrap_frequency(), and replace all of
the separate implementations with this function. The new function also
no longer relies on boost::math::sign.
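A sketch of the wrapping math (the real uhd::math::wrap_frequency() may
differ in details): fold `freq` into the first Nyquist zone of
`folding_freq`, without using boost::math::sign.
```
#include <cmath>

double wrap_frequency(double freq, const double folding_freq)
{
    freq = std::fmod(freq, folding_freq);
    if (std::abs(freq) > folding_freq / 2.0) {
        freq -= std::copysign(folding_freq, freq);
    }
    return freq;
}
// Example: wrap_frequency(80e6, 100e6) yields -20e6.
```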
|
|
|
|
| |
Replaced by std::numeric_limits<>.
|
| |
This is a very mechanical task that could almost have been done with
sed. Boost versions of mutexes and locks were removed and replaced with
std:: versions. The replacement tables are as follows:
== Mutexes ==
- boost::mutex -> std::mutex
- boost::recursive_mutex -> std::recursive_mutex
Mutexes behave identically between Boost and std:: and have the same
API.
== Locks ==
C++11 has only two types of lock that we use/need in UHD:
- std::lock_guard: Identical to boost::lock_guard
- std::unique_lock: Identical to boost::unique_lock
Boost also has boost::mutex::scoped_lock, which is a typedef for
boost::unique_lock<>. However, we often used scoped_lock where we
meant to use lock_guard<>. The name is a bit misleading, "scoped lock"
sounding a bit like an RAII mechanism. Therefore, some previous
boost::mutex::scoped_lock are now std::lock_guard<>.
std::unique_lock is required when doing more than RAII locking (i.e.,
unlocking, relocking, usage with condition variables, etc.).
== Condition Variables ==
Condition variables were out of the scope of this lock/mutex change, but
in UHD, we inconsistently use boost::condition vs.
boost::condition_variable. The former is a templated version of the
latter, and thus works fine with std::mutexes. Therefore, some
boost::condition_variable were changed to boost::condition.
All locks and mutexes use `#include <mutex>`. The corresponding Boost
includes were removed. In some cases, this exposed issues with implicit
Boost includes elsewhere. The missing explicit includes were added.
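An illustration of the replacement pattern described above (generic
example, not a specific UHD file):
```
#include <condition_variable>
#include <mutex>

std::mutex mtx;                // was: boost::mutex
std::condition_variable cv;
bool ready = false;

void plain_raii_section()
{
    std::lock_guard<std::mutex> lock(mtx); // was: boost::mutex::scoped_lock
    ready = true;
    cv.notify_one();
}

void wait_for_ready()
{
    std::unique_lock<std::mutex> lock(mtx); // unique_lock: handed to the CV
    cv.wait(lock, [] { return ready; });
}
```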
|
|
|
|
| |
Thanks to GitHub user johnwstanford for pointing those out.
|
|
|
|
|
|
|
|
|
|
|
| |
The USB managed buffer implementation acquired a context every time
a managed buffer was created.
The additional load is not very high, because the global session is a
singleton and simply returns the same context again with only a few
branches. Also, managed buffers persist for the entire session.
However, the context is never used in the managed buffer.
This is confusing for readers of the code, so we remove the extraneous,
unused context variable.
|
|
|
|
|
|
|
|
|
|
| |
As GitHub user marcosino points out, we're running the AD9361 in
overclocked mode. This is because the driver was written against
recommendations that are no longer valid.
We add a comment and some debug messages to clarify this. Should there
be RF impairments (signal integrity or other) because of overclocking,
users will be able to check DEBUG log statements to correlate them with
overclocked configurations.
|
|
|
|
|
|
|
|
|
| |
The variable max_size_bytes has a different name in the source than in
the header, and it is not self-explanatory in either place. When
comparing against it in the assertion in line 142, one could therefore
assume that a number of bytes needs to be compared with a byte value.
Change the variable to `buff_size` in the source and header file to
avoid confusion, and add documentation.
|
| |
The previous behaviour of UHD for setting gain was:
1. Set "Mask Clr Atten Update". This prevents "Immediately Update TPC
Atten" from being cleared.
2. Then, assert "Immediately Update TPC Atten".
3. Poke the LSBs of the attenuation value.
4. Poke the MSB of the attenuation value.
This order of operations has the downside of causing large Tx power
spikes when setting the attenuation, because both registers are needed
to properly set the attenuation, but the gain is updated immediately,
even between the two attenuation register writes.
Moreover, the upstream Linux driver for the AD9361 by ADI does not
do this. We therefore change the procedure to match the kernel driver
behaviour, which is:
0. During initialization: Clear "Mask Clr Atten Update".
1. Poke the attenuation registers.
2. Then, assert "Immediately Update TPC Atten".
This avoids Tx power spikes. It also reduces the Tx-gain procedure to
3 pokes instead of 4 (see the sketch below).
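A sketch of the new write sequence. Register names follow the description
above, but the addresses, bit position, and poke8() helper are placeholders,
not the real AD9361 driver code.
```
#include <cstdint>

constexpr uint16_t REG_TX_ATTEN_LSB    = 0x73; // placeholder address
constexpr uint16_t REG_TX_ATTEN_MSB    = 0x74; // placeholder address
constexpr uint16_t REG_TX_ATTEN_UPDATE = 0x77; // placeholder address
void poke8(uint16_t reg, uint8_t val); // SPI register write, defined elsewhere

void set_tx_attenuation(const uint16_t atten_code)
{
    // "Mask Clr Atten Update" was already cleared once during initialization.
    // 1) Write both attenuation registers first...
    poke8(REG_TX_ATTEN_LSB, atten_code & 0xFF);
    poke8(REG_TX_ATTEN_MSB, (atten_code >> 8) & 0x01);
    // 2) ...then assert "Immediately Update TPC Atten" so both take effect at
    // once. No intermediate attenuation is applied, hence no Tx power spike.
    poke8(REG_TX_ATTEN_UPDATE, 1 << 6); // bit position is a placeholder
}
```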
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Both the set_tx_antenna_switches and set_rx_antenna_switches functions
configure the TX0_ANT_11 register (which controls the final switch before
the TX/RX port, switching it between the three TX paths and the RX path).
The RX antenna configuration code will, if the RX antenna is set to TX/RX,
configure that switch to the TX/RX->RX path when the ATR is set to RX.
However, the TX antenna config code would always configure that switch to
the "bypass" path, for both the 0X and RX ATR modes, regardless of whether
the RX side actually needs that path.
Therefore, this change makes set_tx_antenna_switches only configure that
switch when it is configuring the XX or TX modes.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Adds uhd_rc as a link target to static builds of libuhd. This fixes
build errors like this:
```
uhd/lib/cal/database.cpp:12:10: fatal error: cmrc/cmrc.hpp: No such file
or directory
#include <cmrc/cmrc.hpp>
```
This also adds uhd_rc to $libuhd_libs instead of just listing it
separately, and adds the target objects from uhd_rc to $libuhd_sources.
|
|
|
|
|
| |
The error message was not adapted when support for 11.52 MHz and
23.04 MHz references was added. Fixing this.
|
| |
| |
See the CMake 3.8 documentation on these two variables:
https://cmake.org/cmake/help/v3.8/variable/PROJECT-NAME_SOURCE_DIR.html
https://cmake.org/cmake/help/v3.8/variable/CMAKE_SOURCE_DIR.html
Under normal circumstances, these two are identical. For sub-projects
(i.e., when building UHD as part of something else that is also a CMake
project), only the former is useful. There is no discernible downside
to using UHD_SOURCE_DIR over CMAKE_SOURCE_DIR.
This was changed using sed:
$ sed -i "s/CMAKE_SOURCE_DIR/UHD_SOURCE_DIR/g" \
`ag -l CMAKE_SOURCE_DIR **/{CMakeLists.txt,*.cmake}`
$ sed -i "s/CMAKE_BINARY_DIR/UHD_BINARY_DIR/g" \
`ag -l CMAKE_BINARY_DIR **/{CMakeLists.txt,*.cmake}`
At the same time, we also replace the CMake variable UHD_HOST_ROOT (used
in MPM) with UHD_SOURCE_DIR. There's no reason to have two variables
with the same meaning and different names, but more importantly, this
means that UHD_SOURCE_DIR is defined even in those cases where MPM calls
into CMake files from UHD without any additional patches.
Shoutout to GitHub user marcobergamin for bringing this up.
|
|
|
|
|
| |
This allows UHD clients to determine, for example, whether the currently
loaded filesystem is up-to-date.
|
|
|
|
|
| |
Fix the function definition of set_rx_iq_balance so that Python can
reach the overloaded C++ function. There was a copy & paste error in
there.
|