Object "merging" in C++
At my previous job I worked on video processing hardware - like a video switcher, but for LED displays. This processor was scriptable - you could trigger it from the control system and it would change the layout on the fly. The simplest thing this could do was “fade in” a video zone over some configurable time. It could get crazy complex with multiple actions happening to multiple zones all at different times - all perfectly synchronized with each other.
For reasons that aren’t important for this post, I determined the best way this software would be written is to have a “state” object that represents all of the hardware settings at any point in time (a single video frame). There would be a sparse list of these “state” objects, representing what the settings should be on different frames.
So here’s an example. Let’s say we trigger zone 1 to fade in over 4 frames (starting half a second later), and halfway through that, we trigger zone 2 to fade out over 6 frames.
Frame # | Zone 1 Alpha | Zone 2 Alpha |
---|---|---|
30 | 0.0 | |
31 | 0.25 | |
32 | 0.5 | 0.0 |
33 | 0.75 | 0.2 |
34 | 1.0 | 0.4 |
35 | | 0.6 |
36 | | 0.8 |
37 | | 1.0 |
The data structure that would result from this would look something like this:
```
frame: 30
  zone_0:
    alpha: 0.0
frame: 31
  zone_0:
    alpha: 0.25
frame: 32
  zone_0:
    alpha: 0.5
  zone_1:
    alpha: 0.0
frame: 33
  zone_0:
    alpha: 0.75
  zone_1:
    alpha: 0.2
frame: 34
  zone_0:
    alpha: 1.0
  zone_1:
    alpha: 0.4
frame: 35
  zone_1:
    alpha: 0.6
frame: 36
  zone_1:
    alpha: 0.8
frame: 37
  zone_1:
    alpha: 1.0
```
Notice that this data structure only contains what changed each frame. The alpha value for each zone before frames 30 and 32 (respectively) is whatever it happens to be at that time. The value of zone_0’s alpha after frame 34 stays at 1.0 (even though it’s not defined in subsequent frames).
The way this data structure was coded looked something like this:
```cpp
struct Rectangle {
    int32_t x, y, w, h;
};

struct ChannelSettings {
    std::optional<double> alpha;
    std::optional<Rectangle> src_area;
    std::optional<Rectangle> dst_area;
    // Other channel-specific settings
};

struct CanvasSettings {
    std::map<uint8_t, ChannelSettings> channels;
    // Other hardware entities like keyers or pattern generators
};
```
There are many more properties in `ChannelSettings` (50 or so), and there was more structure leading up to this, but this is enough to show the point.
Within the engine, there is a “LookQueue”, which is essentially a `std::list<std::pair<uint64_t, CanvasSettings>>`: a list of pairs of a frame number and the settings that change on that frame. The list could be sparse - if nothing changes on a frame, then there is no entry for that frame in the list.
Inside the engine is a data structure representing the current state of the system “right now” (the current frame number). Every 16ms, the engine would do this sequence of actions:

1. Grab a `CanvasSettings` from the LookQueue (if there is one, otherwise a default-constructed one)
2. Merge the changed properties from that object into the current state object
3. Use the current state object to make the hardware do those things 1
#2 is where the interesting magic happens.
The current state object is actually another `CanvasSettings` object. The magic is `current_state = merge(current_state, settings_to_change);`
The basic idea of the `merge` function is “start with the lhs, overwrite with the settings from the rhs”. There are a number of overloads for this function; let’s start with the simplest ones:
```cpp
Rectangle merge(Rectangle const &lhs, Rectangle const &rhs)
{
    return rhs;
}

double merge(double const &lhs, double const &rhs)
{
    return rhs;
}
```
These are the “scalar” merge functions - they simply use the rhs (the incoming changes) and ignore the lhs.
There are also overloads for the `ChannelSettings` and `CanvasSettings` types:
```cpp
ChannelSettings merge(ChannelSettings const &lhs, ChannelSettings const &rhs)
{
    return {
        merge(lhs.alpha, rhs.alpha),
        merge(lhs.src_area, rhs.src_area),
        merge(lhs.dst_area, rhs.dst_area),
    };
}

CanvasSettings merge(CanvasSettings const &lhs, CanvasSettings const &rhs)
{
    return {
        merge(lhs.channels, rhs.channels),
    };
}
```
These are “composite”2 types - each is just a bunch of member variables (usually optional, more on that in a second). So merging two of them is simply calling merge on each member and using the results to construct the merged object. This step is trivial, but it’s a decent amount of typing. Once reflection becomes a thing, we can replace these definitions with completely synthesized ones.
Hopefully, you noticed that I’m missing an overload - namely one that merges `std::optional`s. So here it is:
```cpp
template <typename T>
std::optional<T> merge(std::optional<T> const &lhs, std::optional<T> const &rhs)
{
    if (!lhs && !rhs) return std::nullopt; // If both objects are empty, return empty
    if (!lhs && rhs) return rhs;           // If we only have one, take that one
    if (lhs && !rhs) return lhs;           // If we only have one, take that one
    return rhs;                            // If we have both, take the 2nd, because that
                                           // is the "incoming" one (this is like how the
                                           // "scalar" merge functions work)
}
```
It’s pretty simple to construct overloads for optional + non-optional merges:
```cpp
template <typename T>
std::optional<T> merge(T const &lhs, std::optional<T> const &rhs)
{
    if (!rhs) return lhs; // If the incoming one is empty, keep the current setting
    else return rhs;      // Otherwise use the new setting
}

template <typename T>
std::optional<T> merge(std::optional<T> const &lhs, T const &rhs)
{
    return rhs; // Incoming setting is not optional, so it always overwrites
}
```
(Both of these functions are logically exactly the same as the optional + optional one, just with some of the cases hardcoded and combined.)
There’s one I left out still, and that’s the `merge` for `std::map<T,U>`. That one was really complicated and I don’t have access to the code anymore, but the algorithm was similar to `std::map::merge()`: iterate over both maps in parallel; where a key exists in one map but not the other, take that value; where a key exists in both, run `merge()` on the two values and take the result.
Similarly, there were overloads for `std::array` and `std::vector`.
Their implementation is left as an exercise to the reader.
The corner case with vector is what to do when the two vectors are not the same length.
How you handle that depends on what the data represents and what makes sense.
The real object was a tree with over 100 leaves and a depth of ~4. It worked really well, and it was pretty easy to add new settings to things like `ChannelSettings` (when the hardware gained new functionality) and to layers like `CanvasSettings` (when the hardware learned how to do keying or wipe pattern generation).
1. I don’t go into detail about what this step does, but suffice it to say it takes the current state object (which is heavily abstracted) and figures out the registers/commands needed to make the hardware Do The Thing. And it does this every single frame, with very little “state” saved between frames. I’ll probably make a post about just this step in the future. ↩
2. “Composite” is probably the wrong term for that - I’ve never tried to come up with a name for it until just now. ↩