Why I Built My Own Event System for the Brokkr Engine

If you haven’t seen the full code yet, you can check it out here:
👉 Brokkr.EventSystem on GitHub

Every engine subsystem I build starts with a simple question:

“What problem am I actually trying to solve?”

The Brokkr.EventSystem didn’t begin as a grand architectural vision. It started as an assignment in an engine architecture class, a basic event bus project meant to teach messaging patterns. But the moment I began working on it, I realized something important:

This wasn’t just another homework exercise. This was the part of the engine that everything else would depend on.

So instead of doing the minimum, I spent two weeks and more than a hundred hours researching, prototyping, breaking things, and rebuilding them. I pulled from every past engine I’d worked on, every pain point I’d hit, and every lesson I’d learned about how systems communicate.

I knew from experience that the event system would become the lifeblood of the entire engine: the thing that keeps physics, rendering, gameplay, tools, and debugging all connected without turning into a tangled mess.

And I didn’t want a system that only worked for the assignment. I wanted something I could understand, maintain, and evolve long after the class ended.

My goal wasn’t to create a one‑off assignment solution. I wanted a modular, engine‑agnostic event layer I could reuse across projects, extend with new features, and trust as the communication backbone for increasingly complex systems. And because I was taking this class with an industry veteran, I wanted to take full advantage of the opportunity to learn from their experience and build something that reflected the best practices they’d spent years mastering.

That’s where the Brokkr.EventSystem came from.


Why Would Anyone Want a Custom Event System?

Most engines ship with some form of event manager, but few give you fine-grained control over how events are ordered, filtered, or extended.

Brokkr’s event layer was built around five core goals:

  • Decoupling subsystems like physics, AI, and rendering.
  • Predictable ordering via event and handler priorities.
  • Extensible payloads using component-like attachments.
  • Performance under stress with minimal allocations and cache-friendly access.
  • Engine-level flexibility for reuse across tools and runtime systems.

That design philosophy shapes everything from how events are identified to how they’re dispatched, handled, and retired.

Event Identity via Murmur3 Hashing

One of the first architectural decisions I had to make was how events should be identified. Most engines fall into one of two camps:

String‑based event names, which are flexible but slow and error‑prone.

Enum‑based event IDs, which are fast but rigid and require constant maintenance.

Neither option felt right for Brokkr.

I wanted something that was:

  • Fast

  • Stable

  • Friendly to tooling

  • Flexible enough to support dynamic event creation

That led me to Murmur3 hashing.

Why Hashing Instead of Enums?

Enums work well in small systems, but they break down fast when:

  • You want plugins or modules to define their own events

  • You want to load events dynamically

  • You don’t want to recompile the entire engine every time you add one

  • You want to avoid giant “EventType.h” files that become merge‑conflict magnets

Hashing solves all of that.

With hashing, an event name like “Physics.Collision” becomes a stable 32‑bit integer. No global registry. No giant enum. No rebuilds.

Why Murmur3?

There are dozens of hashing algorithms, but Murmur3 hits the sweet spot for engine work:

  • Extremely fast — ideal for per‑frame operations

  • Great avalanche behavior — tiny input changes produce wildly different outputs

  • Low collision rate — especially for short, structured strings like event names

  • Widely used in game engines and ECS frameworks

It’s not cryptographic, and it doesn’t need to be. It’s built for speed and distribution, which is exactly what an event system needs.

Priority‑Based Event Dispatch

In an engine, the frame loop is not just a list of tasks — it’s a temporal choreography. Every subsystem is trying to do its work inside a tiny slice of time, and the order in which things happen determines whether the frame is coherent or chaotic.

Think of the frame as a pipeline:

  • Input must be sampled before gameplay logic runs

  • Gameplay logic must update before physics steps

  • Physics must resolve before animation sampling

  • Animation must update before rendering

  • Rendering must finish before the buffer swap

  • The buffer swap must happen after all GPU commands are committed

This is not optional. This is the shape of a frame.

Events become the glue that coordinates these transitions. But without priority, you get:

  • Nondeterministic ordering

  • Race‑like behavior between systems

  • Handlers firing in arbitrary sequences

  • Subtle bugs that only appear under load

  • Impossible‑to-reproduce behavior in networked or deterministic simulations

Priority is how you impose structure on the chaos.

Martin Fowler’s distinction between event‑notification and event‑carried state transfer maps cleanly to engines:

  • Event‑notification -> “X happened, react if you care.”

  • Event‑driven -> “The system progresses by emitting events.”

Brokkr engine uses the second model: events drive the frame forward.

That means priority is not a convenience. It’s a frame‑level contract. It’s how you guarantee that the engine’s heartbeat fires in the correct order every single frame.

How the EventComparer Works

The comparer is a functor — a struct that overloads operator(). This is a classic C++ idiom: it lets the object behave like a function while still carrying state or type information.

How the priority queue ensures deterministic ordering:

    // Comparison functor for events based on their enum value
    struct EventComparer
    {
        // Definition of operator() that takes two Event objects and returns a bool
        bool operator()(const Event& left, const Event& right) const
        {
            // Compare the priority levels of the events directly
            return left.GetPriorityLevel() < right.GetPriorityLevel();
        }
    };

This does two things:

  • Defines ordering for the priority queue. The priority queue uses this comparator to decide which event is “greater” or “lesser” in priority.

  • Encodes your engine’s event semantics: it compares priority levels, not timestamps, IDs, or types. That means the queue becomes a temporal scheduler.

The functor is the bridge between the event abstraction and the STL container’s ordering rules.

How the Priority Queue Ensures Deterministic Ordering

A priority queue alone does not guarantee determinism. But a design built around two keys can:

  • Priority level (primary key)

  • Enqueue sequence (implicit secondary key)

The STL std::priority_queue is a max‑heap. Given the comparator, the event with the highest priority value rises to the top.

But what about events with the same priority?

This is where the engine’s design really matters:

  • Events are pushed into the queue in a known order

  • A binary heap on its own does not preserve insertion order for equal keys, so the enqueue sequence acts as the implicit secondary key

  • The comparator is strict (it never returns true for equal priorities), which leaves ties to be broken by that sequence

Therefore, equal‑priority events fall back to FIFO behavior

This gives the system:

  • Deterministic ordering

  • Reproducible frame behavior

  • Stable debugging

  • Predictable sync behavior

The system essentially builds a stable priority queue, even though the STL doesn’t provide one by default.

Why Handler Priority Does Not Interact with Event Priority

This is one of the most important architectural decisions I made:

// Comparison functor for event handler functions
struct EventHandlerComparer
{
    // Takes two EventHandler objects and returns a bool
    bool operator()(const EventHandler& left, const EventHandler& right) const
    {
        // Compare the priority values of the handlers
        if (left.first != right.first)
        {
            // Handlers with higher priority values come before handlers with lower priority values
            return left.first > right.first;
        }

        // If priority values are equal, fall back to a target_type() comparison.
        // std::type_info::before() imposes an implementation-defined ordering on the
        // types of the stored handlers, and that ordering decides where equal-priority
        // handlers land in the eventHandlers container. The order is consistent for
        // the lifetime of the run, which is what matters for debugging.
        // Sources: https://en.cppreference.com/w/cpp/types/type_info
        //          https://stackoverflow.com/questions/53467813/comparing-two-type-info-from-typeid-operator
        return left.second.target_type().before(right.second.target_type());
    }
};

Separating:

  • Event priority -> when an event is processed

  • Handler priority -> in what order handlers run for a single event

This prevents a whole class of bugs where handler ordering accidentally influences frame ordering.

The handler comparer:

if (left.first != right.first)
    return left.first > right.first;

Higher handler priority -> earlier execution. But this ordering is local to the event, not global to the frame.

Then the system adds a deterministic tie‑breaker:

return left.second.target_type().before(right.second.target_type());

This ensures:

  • No random ordering

  • No dependence on memory addresses

  • No UB from comparing function pointers

  • Stable, repeatable ordering within a run (the exact order from type_info::before() is implementation‑defined, so it can differ between compilers)

And most importantly:

Handler priority never affects event priority, because:

  • Events are sorted in the priority queue

  • Handlers are sorted in a set attached to a specific event type

  • The two systems never cross‑reference each other

This is exactly the separation of concerns you want in an engine:

  • Event priority -> frame‑level scheduling

  • Handler priority -> local dispatch ordering

They operate in different domains.

Handler Architecture

Once an event reaches the dispatch stage, the engine needs to know which handlers should run, in what order, and without ambiguity. Brokkr solves this using a simple but powerful structure:

// Map of event types to a set of event handlers, sorted using the custom comparator 
std::unordered_map<uint32_t, std::set<EventHandler, EventHandlerComparer>> m_handlers;

This design choice does four important things for the engine.

1. Handlers are Stored in a std::set for Deterministic Ordering

A std::set is an ordered container. That means every handler for a given event type is always stored in a sorted, stable, and predictable order.

Why this matters:

  • No random ordering

  • No dependence on insertion order

  • No dependence on memory addresses

  • No UB from comparing function pointers

  • Same handler order across platforms, builds, and runs

In an engine, determinism is everything. A std::set gives you that for free as long as you define a comparator that encodes your rules.

2. Custom Comparator Enforces Execution Priority

Handlers are stored as:

std::pair<int, std::function<void(const Event&)>> 

The comparator sorts them like this:

  1. Higher handler priority runs first

  2. If priorities are equal → compare handler types using target_type()

This ensures:

  • High‑priority handlers always run before low‑priority ones

  • Equal‑priority handlers still have a deterministic order

  • No handler ever “jumps the line” because of pointer values or insertion order

This is the exact separation of concerns you want:

  • Event priority → when the event is processed

  • Handler priority → how handlers run within that event

They never interfere with each other.

3. Why Compare target_type()?

When two handlers have the same priority, you need a deterministic tie‑breaker.

Comparing target_type():

  • Uses the type identity of the stored callable

  • Produces a stable, implementation‑defined ordering

  • Stays consistent for the lifetime of a run, whichever compiler built it

  • Avoids comparing raw function pointers (undefined behavior)

  • Avoids nondeterministic ordering from lambdas with different captures

This is a subtle but important detail. It guarantees that even if two handlers have identical priority, the engine still knows exactly which one runs first.

Anyone working with the engine should appreciate this kind of thing, because the system gains:

  • Determinism

  • Reproducibility

  • Cross‑platform stability

  • Debugging clarity

4. The std::set Automatically Prevents Duplicate Handlers

Because std::set enforces uniqueness based on the comparator:

  • If the same handler (same priority + same type) is added twice

  • The second insertion is ignored

This prevents:

  • Accidental double‑registration

  • Duplicate callbacks

  • Subtle bugs where the same handler fires twice

The system gets this safety for free because the data structure enforces it.

Adding and Removing Handlers Is Straightforward

The system’s API stays clean:

void AddHandler(const char* eventTypeString, const EventHandler& handler)
{
    AddHandler(Event::EventType::HashEventString(eventTypeString), handler);
}

void RemoveHandler(const char* eventTypeString, const EventHandler& handler)
{
    RemoveHandler(Event::EventType::HashEventString(eventTypeString), handler);
}

Hash the event name → look up the handler set → insert or erase.

The STL does the rest.

Payload Components

If hashing gives the event system its identity, payload components give it its soul.

From the beginning, I knew I wanted more than just fast event lookup. I needed events to be expandable. Real engines eventually need to pass structured data through the event layer: collision snapshots, input states, animation triggers, editor metadata, and so on. I didn’t want to hard‑code any of that into the event system itself, and I definitely didn’t want to end up with a giant brittle struct full of optional fields.

I wanted a system where events could carry exactly the data they needed, no more and no less.

That’s where payload components came from.

Why Payloads are Components:

I considered a few approaches:

  • A void* pointer

  • A templated event type

  • A giant union or variant

  • A fixed event struct with optional fields

All of them had problems:

  • void* is a type‑safety nightmare

  • Templates explode compile times and force everything into headers

  • Variants require a global list of all possible payload types

  • Fixed structs become brittle and grow forever

A component‑style design solved all of that.

Each event becomes a container of payload components, and each payload is a small, self‑contained object that describes a specific piece of data. If an event needs collision info, you attach a CollisionPayload. If it needs UI metadata, you attach a WidgetPayload. If it needs nothing, it carries nothing.

It’s flexible, type‑safe, and avoids the “edge‑case fire‑fighting” that comes with raw pointers or monolithic event structs.

And the best part: Even with multiple payload components attached, the event system never showed up in performance hot paths. It stayed fast, predictable, and lightweight across every project I built with it.

How users can define their own payload types

The base class is intentionally tiny:

#pragma once
class Event;

class PayloadComponent
{
public:
    virtual ~PayloadComponent() = default;
    virtual const char* ToString() = 0;
};

Users extend it by defining their own payloads:

class PayloadTest final : public PayloadComponent
{
    Event* m_pOwner = nullptr;
public:
    PayloadTest(Event* pOwner) : m_pOwner(pOwner) {}  // Constructor records the owning event

    virtual const char* ToString() override;
    virtual ~PayloadTest() override = default;
};

inline const char* PayloadTest::ToString()
{
    return "Event Payload Test\n";
}

That’s it. No macros. No registration system. No global lists. Just inherit, attach, and use.

How This Mirrors ECS Data‑Oriented Design

This design ended up looking a lot like a tiny ECS:

  • Events are entities

  • Payloads are components

  • Handlers act like systems

It’s not a full ECS, but the mental model is the same:

  • Data is modular

  • Data is opt‑in

  • Data is stored in small, focused components

  • Events only carry what they need

This keeps the event system clean and prevents the “god‑struct” problem that plagues many engines.

The events are containers, and the data components map almost one‑to‑one onto a simple ECS design.

How It Avoids Brittle Event Structs

struct EventData 
{
    Vector3 position;
    Vector3 normal;
    int keyCode;
    float deltaTime; 
    // and you get the picture... 
};

This works for a while — until it doesn’t.

Eventually you get:

  • Unused fields

  • Conflicting fields

  • Fields that only apply to one event type

  • Fields that break when someone changes them

  • Fields that require engine‑wide rebuilds

Payload components avoid all of that.

Each event carries only the data it needs, and adding a new payload type doesn’t require touching any existing code. It’s isolated, modular, and future‑proof.

How It Keeps the System Flexible and Future‑Proof

This design gives the engine:

  • Unlimited extensibility — add new payload types without modifying the event system

  • Type safety — no raw pointers or unsafe casts

  • Low coupling — payloads don’t depend on each other

  • Tooling friendliness — payloads can be inspected, logged, or visualized

  • Runtime flexibility — events can adapt to new systems without breaking old ones

It’s the kind of architecture that grows with the engine instead of fighting it.

Example Usage

To show how payload components work in practice, here’s a real example pulled directly from the last project built on the Brokkr engine. This is the collision payload the physics system attaches whenever two objects overlap. It captures exactly the data the gameplay layer needs. Nothing more, nothing less.

#include <vector>

#include "../../../BrokkrEngine/Primitives/Rect.h"
#include "../../PayloadComponent/PayloadComponent.h"

namespace Brokkr
{
    class Collider;

    class CollisionPayload final : public PayloadComponent
    {
        using EntityID = int;

        Collider* m_ObjectMoving;
        std::vector<EntityID> m_objectsHit;
        Vector2<float> m_displacementVector;

    public:
        explicit CollisionPayload(Event* pOwner, Collider* movingObject, const std::vector<EntityID>& objectsHit, const Vector2<float>& displacementVector) // intentionally copies the list of IDs
            : PayloadComponent(pOwner)
            , m_ObjectMoving(movingObject)
            , m_objectsHit(objectsHit)
            , m_displacementVector(displacementVector)
        {
            //
        }

        [[nodiscard]] std::vector<EntityID> GetObjectHit() const { return m_objectsHit; }
        [[nodiscard]] Collider* GetObjectMoving() const { return m_ObjectMoving; }
        [[nodiscard]] Vector2<float> GetMovingObjectsDisplacement() const { return m_displacementVector; }

        virtual const char* ToString() override;
    };

}

This payload gives the handler everything it needs:

  • Which collider moved

  • Which entities it hit

  • The displacement vector the physics system resolved

It’s a perfect example of why payload components work so well: the event system doesn’t need to know anything about collisions, and the physics system doesn’t need to modify the event system to add new data.

Defining an Event Type

Events in Brokkr are identified by hashed strings, so defining a collision event can be as simple as choosing a name:

constexpr const char* EVENT_COLLISION = "Physics.Collision";

Behind the scenes, this becomes a stable 32‑bit Murmur3 hash.

Creating and Attaching the Payload

When the physics system detects a collision, it creates the event and attaches the payload:

Event collisionEvent(EVENT_COLLISION, EventPriority::High);

auto payload = std::make_unique<CollisionPayload>(
    &collisionEvent,   // owning event
    movingCollider,
    hitEntityIDs,
    displacementVector
);

collisionEvent.AddPayloadComponent(std::move(payload));

The event now carries a fully‑typed collision snapshot.

Registering a Handler

Gameplay code can subscribe to collision events like this:

void Brokkr::ColliderComponent::BlockMove(const Event& event)
{
    CollisionPayload* data = event.GetComponent<CollisionPayload>();

    // Check if the object being collided with is the blocking collider
    if (data->GetObjectMoving()->m_ownerID == m_pOwner->GetId())
    {
        return;
    }

    PhysicsManager::RequestMoveCorrection(data->GetObjectMoving(), -data->GetMovingObjectsDisplacement());
}

const auto eventStr = "OnEnter" + std::to_string(m_pOwner->GetId());
m_onEnterHandler.first = Event::kPriorityNormal;
m_onEnterHandler.second = [this](auto&& event) { BlockMove(std::forward<decltype(event)>(event)); };
m_pEventManager->AddHandler(eventStr.c_str(), m_onEnterHandler);

A few things this demonstrates well:

  • The handler is strongly typed

  • The event system enforces deterministic ordering

  • The handler only runs if the payload exists

No void pointers, no unsafe casts, no brittle unions

This is exactly the kind of clean, safe API you want in gameplay code.

Dispatching the Event

Finally, the physics system pushes the event into the queue:

void Brokkr::PhysicsManager::DispatchOnEnterEvent(Collider* movingObject, int IDofObjectSendingTo, const std::vector<ObjectID>& hitIDs, const Vector2<float>& displacementVector)
{
    
    const std::string pTemp = m_eventString + std::to_string(IDofObjectSendingTo);
    m_event = Event::EventType(pTemp.c_str(), Event::kPriorityNormal);

    // Load a payload of data that allows for responses to the collision
    m_event.AddComponent<CollisionPayload>(movingObject, hitIDs, displacementVector); // TODO: add logging for if the event payload fails

    m_pEventManager->PushEvent(m_event);

#if DEBUG_LOGGING
    const std::string debugMessage = "Dispatched Collider Event " + pTemp;
    m_fileLog.Log(Logger::LogLevel::kDebug, debugMessage);
#endif

}

During the frame’s event‑processing phase, the event manager:

  1. Pulls the event from the priority queue

  2. Looks up all handlers for the event type

  3. Executes them in deterministic priority order

  4. Retires the event

The entire flow is predictable, type‑safe, and easy to debug.

Performance Considerations

A good event system isn’t just flexible, it has to be fast. Events sit in the middle of the frame loop, and anything in that path needs to be predictable, low‑overhead, and friendly to the CPU’s memory hierarchy. Brokkr’s event system was designed with those constraints in mind.

Here are the key performance characteristics that shaped the architecture.

O(1) Handler Lookup

std::unordered_map<uint32_t, std::set<EventHandler, EventHandlerComparer>>

The hash key is the event type (a 32‑bit Murmur3 hash), which gives:

  • O(1) average lookup time for finding the handler set

  • No string comparisons

  • No linear scans

  • No global registry to walk

This is one of the biggest wins of hashing event names: the cost of dispatching an event is dominated by handler execution, not by finding the handlers.

Priority Queue Complexity

Events are stored in a std::priority_queue, which is a binary heap. That gives:

  • O(log n) insertion

  • O(log n) removal

  • O(1) access to the highest‑priority event

In practice, the number of events per frame is small (dozens, not thousands), so the heap operations are effectively constant‑time. The important part is that the ordering is deterministic and stable, so the cost is predictable and the behavior is reproducible.

Minimal Allocations

The system avoids unnecessary heap churn:

  • Events are moved, not copied

  • Payload components are allocated once and owned by the event

  • Handler sets are stable and rarely modified

  • No dynamic resizing of giant arrays or vectors

Most frames don’t allocate anything at all. The event system stays off the profiler’s radar, even under stress.

Cache‑Friendly Design

Two choices help keep the system cache‑coherent:

  • Small, focused payload components
    Each payload is a tiny object with tightly packed data. No giant structs, no sparse fields.

  • Stable container layouts
    std::set keeps handlers in sorted order, but the number of handlers per event type is small, so iteration stays cache‑friendly.

The event manager’s hot path is essentially:

  1. Pop event from heap

  2. Iterate a small, sorted set

  3. Call handlers

No pointer chasing across large, fragmented structures.

Hashing Cost vs. String Comparison Cost

A common concern with hashed event names is the cost of hashing. But in Brokkr:

  • Hashing happens once when the event is created

  • Dispatch uses the 32‑bit integer, not the string

  • Murmur3 is extremely fast (designed for per‑frame workloads)

Compare that to string‑based systems:

  • Every dispatch requires string comparisons

  • Comparisons are O(n) in the length of the string

  • Case sensitivity, typos, and memory layout all affect performance

Hashing wins decisively in both speed and reliability.

The Result: A System That Stays Out of the Way

Across every project built with Brokkr, the event system never showed up in performance hot paths. Even with:

  • Multiple payload components

  • High event throughput

  • Complex handler graphs

…the system remained predictable, stable, and effectively invisible in the profiler.

That’s exactly what I wanted from this engine subsystem: fast enough that I never have to think about it.

Lessons Learned / Future Improvements

Right now, the system uses Murmur3 for event hashing, and it’s been a great fit. But one improvement I’d like to explore is compile‑time hashing selection. Being able to choose the hashing algorithm at compile time would make the system more extensible and allow it to be used at different layers of the software stack.

Different layers of an engine have different constraints:

  • Tools might prefer a slower but stronger hash

  • Runtime systems might want the absolute fastest option

  • Low‑level systems might want a constexpr hash with zero runtime cost

Supporting multiple algorithms would make the event system more adaptable without changing its API.

Async Dispatch

Asynchronous dispatch was something I deliberately avoided early on. The cool factor couldn’t justify the added complexity. The event system never showed up in performance hot paths, and introducing a job‑like system too early would have created more problems than it solved.

But now that I have moved on from the engine, an async variation is starting to make sense:

  • Some events don’t need to block the frame

  • Some systems could benefit from parallel processing

  • A job‑based dispatch path would open the door to more advanced scheduling

It would be a fun challenge to build, and it’s another excuse to revisit Scott Meyers’ “Caches and Why You Care” talk. Because async only works if your memory access patterns don’t sabotage you.

Profiling Hooks and Offline Frame Reconstruction

After building the GameEvents metrics package, it became obvious that the event system should support optional, injectable profiling hooks of some kind.

Being able to attach a metrics collector would allow for:

  • Frame‑by‑frame event tracing

  • Offline replay of event sequences

  • Better debugging of complex interactions

  • Insight into handler timing and ordering

Imagine being able to “rebuild” a frame offline and seeing exactly which events fired, in what order, with what payloads, and how long each handler took. That kind of visibility turns the event system into a powerful debugging and analysis tool, not just a messaging layer.

This post is licensed under CC BY 4.0 by the author.