# Coders
A Coder is the component that decides what gets cached during fixture generation and how to replay it during mount. FixtureKit ships with FixtureKit::ActiveRecordCoder, which captures every model written via ActiveRecord during FixtureKit.define { ... } and replays the resulting INSERTs on mount. You can register additional coders to capture state outside ActiveRecord — Redis, Rails.cache, ActiveStorage blobs, file system fixtures, etc.
Most users never write a coder. This page is for the cases where you have meaningful test setup that lives outside the database and you want fixture_kit to cache it the same way it caches AR data.
FixtureKit.runner.coders is a list of coder instances. Each one participates in two phases:
- Generate — when the cache is being built, every coder wraps the user's `FixtureKit.define` block. They run in chain order: the first coder's `generate` block is called, which calls the second coder's `generate` block, and so on. The innermost block is the user's fixture body. Each coder returns whatever it wants to cache.
- Mount — when the cache is being replayed, each coder receives back the data it produced and replays it.
The cache file on disk looks like:
```json
{
  "data": {
    "FixtureKit::ActiveRecordCoder": "...",
    "MyApp::RedisCoder": "..."
  },
  "exposed": { ... exposed records ... }
}
```

One key per registered coder class. Each coder owns its slice of the data.
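The per-coder keying can be sketched in a few lines of plain Ruby. The coder names and values below are invented for illustration; the point is only that the cache is a JSON document in which each coder reads back exactly the slice it wrote:

```ruby
require "json"

# Hypothetical per-coder payloads, keyed by coder class name.
cache = {
  "data" => {
    "FixtureKit::ActiveRecordCoder" => "INSERT INTO ...",
    "MyApp::RedisCoder"             => { "session:1" => "abc" },
  },
  "exposed" => { "user" => { "id" => 1 } },
}

restored = JSON.parse(JSON.dump(cache))

# On mount, each coder is handed only its own slice:
redis_slice = restored["data"]["MyApp::RedisCoder"]
redis_slice # => { "session:1" => "abc" }
```

Because the slices never overlap, coders don't need to know about each other; adding a new coder just adds a key.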
Subclass FixtureKit::Coder and implement four methods:
```ruby
class FixtureKit::Coder
  def generate(parent_data: nil, &block)
    raise NotImplementedError
  end

  def mount(data)
    raise NotImplementedError
  end

  def encode(data)
    data # default: pass through
  end

  def decode(data)
    data # default: pass through
  end
end
```

`generate` is called once when the fixture cache is being built. It must:
- Set up whatever observation you need (subscribe to notifications, wrap a service, snapshot state).
- Call `block.call` (or `yield`). This runs the user's fixture definition — and all later coders in the chain.
- Return the data you want cached for this coder.
`parent_data` is the cached data from the same coder on the parent fixture, if `extends:` was used. Coders that need to compose with parent fixtures use this to merge data; coders that don't can ignore it.
`mount` is called once per test, with the data your `generate` returned. Re-create the state on the test database.
`encode` and `decode` convert between the in-memory representation your `generate` produces and the JSON-serializable form that goes into the cache file. The defaults are identity, which is fine when your `generate` already returns JSON-friendly data (strings, hashes, arrays).
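To make the four-method lifecycle concrete, here is a toy coder run through the full generate, encode, JSON round-trip, decode, mount sequence. Everything here is invented for illustration (`SnapshotCoder`, the `STORE` hash standing in for state outside the database); only the method contract matches the interface above, and `FixtureKit::Coder` itself isn't needed for the sketch:

```ruby
require "json"
require "base64"

# Hypothetical stand-in for state that lives outside the database.
STORE = {}

# A minimal coder-shaped class following the four-method contract.
class SnapshotCoder
  def generate(parent_data: nil, &block)
    block.call                       # run the user's fixture body
    (parent_data || {}).merge(STORE) # return what we want cached
  end

  def mount(data)
    STORE.replace(data)              # replay cached state per test
  end

  def encode(data)
    data.transform_values { |v| Base64.strict_encode64(v) }
  end

  def decode(data)
    data.transform_values { |v| Base64.strict_decode64(v) }
  end
end

coder = SnapshotCoder.new

# Generate phase: the coder wraps the fixture body and captures state.
captured = coder.generate { STORE["greeting"] = "hello" }

# Cache write/read: encode, then JSON round-trip, then decode.
on_disk  = JSON.dump(coder.encode(captured))
restored = coder.decode(JSON.parse(on_disk))

# Mount phase: replay into a clean store.
STORE.clear
coder.mount(restored)
STORE["greeting"] # => "hello"
```

The Base64 step is only there to show why `encode`/`decode` exist: `JSON.dump` would choke on binary values, so the coder converts to a JSON-friendly form on the way out and back on the way in.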
A coder that captures Rails.cache writes during fixture setup and replays them on mount:
```ruby
class RailsCacheCoder < FixtureKit::Coder
  def generate(parent_data: nil, &block)
    captured = parent_data&.dup || {}
    subscriber = lambda do |_name, _start, _finish, _id, payload|
      next unless payload[:key]
      captured[payload[:key]] = Rails.cache.read(payload[:key])
    end
    ActiveSupport::Notifications.subscribed(subscriber, "cache_write.active_support", &block)
    captured
  end

  def mount(data)
    data.each { |key, value| Rails.cache.write(key, value) }
  end
end
```

Register it:
```ruby
FixtureKit.configure do |config|
  config.register(RailsCacheCoder)
end
```

Now any `Rails.cache.write(...)` calls inside `FixtureKit.define { ... }` are cached and replayed before each test, alongside the AR data.
When your captured data isn't directly JSON-serializable, override encode and decode:
```ruby
class ActiveStorageBlobCoder < FixtureKit::Coder
  def generate(parent_data: nil, &block)
    captured = []
    subscriber = lambda do |_name, _start, _finish, _id, payload|
      blob = payload[:blob]
      captured << { key: blob.key, content: blob.download }
    end
    ActiveSupport::Notifications.subscribed(subscriber, "service_upload.active_storage", &block)
    captured
  end

  def encode(data)
    data.map { |entry| entry.merge(content: Base64.strict_encode64(entry[:content])) }
  end

  def decode(data)
    data.map { |entry| entry.transform_keys(&:to_sym).merge(content: Base64.strict_decode64(entry["content"])) }
  end

  def mount(data)
    data.each do |entry|
      ActiveStorage::Blob.service.upload(entry[:key], StringIO.new(entry[:content]))
    end
  end
end
```

`encode` runs once when the cache is written; `decode` runs when it's read back. Between those, data is passed through `JSON.dump`/`JSON.parse`.
When multiple coders are registered, they form a chain, not a sequence. With coders [A, B, C]:
```
A.generate { B.generate { C.generate { user's fixture block } } }
```
Each coder's generate block call drills one level deeper. This means:
- The innermost coder runs closest to the user's fixture body. Anything it observes happens directly inside the fixture definition.
- Outer coders observe everything that happens inside their wrapped block, including inner coders' setup.
- Order matters if your coders interact (rare).
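The nesting above can be sketched in plain Ruby without FixtureKit. The `LoggingCoder` class and the `reduce` that builds the chain are invented for illustration, but the log shows the property that matters: outer coders enter first and exit last, so they observe everything inside their wrapped block:

```ruby
LOG = []

# Hypothetical coder that only records when its generate wraps the chain.
class LoggingCoder
  def initialize(name)
    @name = name
  end

  def generate(&block)
    LOG << "#{@name} before"
    block.call
    LOG << "#{@name} after"
  end
end

coders = [LoggingCoder.new("A"), LoggingCoder.new("B")]
fixture_body = -> { LOG << "fixture body" }

# Build A.generate { B.generate { fixture body } } from the list:
chain = coders.reverse.reduce(fixture_body) do |inner, coder|
  -> { coder.generate(&inner) }
end
chain.call

LOG
# => ["A before", "B before", "fixture body", "B after", "A after"]
```

Folding from the reversed list is one way to get the first registered coder outermost; the real runner may build the chain differently, but the observed ordering is the same.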
The default ActiveRecordCoder is currently the only one in the chain unless you register more. If you add yours, it goes after ActiveRecordCoder (registration is Set-ordered by insertion).
If a fixture uses `extends:`, each coder receives the parent fixture's data for that coder via `parent_data:`. The coder decides how to merge.
ActiveRecordCoder uses parent_data to ensure parent models that the child doesn't directly write are still included in the child's cache (so mount is self-contained). Custom coders can:
- Ignore `parent_data` (parent state will be replayed by the parent's own coder when it mounts — but `extends:` rolls all data into one cache, so usually you do want to merge).
- Merge as appropriate for your data shape.
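For hash-shaped coder data, a merge where the child's entries win over the parent's is the usual choice. This is a sketch, not FixtureKit API; the helper name and the key/value shapes are hypothetical:

```ruby
# Hypothetical merge strategy: child entries override parent entries.
def merge_with_parent(child_data, parent_data)
  (parent_data || {}).merge(child_data)
end

parent = { "feature:flags" => "v1", "session:1" => "parent" }
child  = { "session:1" => "child" }

merge_with_parent(child, parent)
# => { "feature:flags" => "v1", "session:1" => "child" }

merge_with_parent(child, nil) # fixture without extends:
# => { "session:1" => "child" }
```

Array-shaped data (like the blob list in the ActiveStorage example) would concatenate instead; the right strategy depends entirely on what your `generate` returns.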
ActiveRecordCoder is registered automatically. If your test setup doesn't need ActiveRecord caching at all (rare), you can clear the coders set:
```ruby
FixtureKit.configure do |config|
  config.coders.clear
  config.register(MyCustomCoder)
end
```

For exact method signatures, the in-repo reference is canonical: docs/reference.md.