/*
 * Glossary:
 *
 * oblock: index of an origin block
 * cblock: index of a cache block
 * promotion: movement of a block from origin to cache
 * demotion: movement of a block from cache to origin
 * migration: movement of a block between the origin and cache device,
 *	      either direction
 */
/*
 * The batcher collects together pieces of work that need a particular
 * operation to occur before they can proceed (typically a commit).
 */
struct batcher {
	/*
	 * The operation that everyone is waiting for.
	 */
	blk_status_t (*commit_op)(void *context);
	void *commit_context;

	/*
	 * This is how bios should be issued once the commit op is complete
	 * (accounted_request).
	 */
	void (*issue_op)(struct bio *bio, void *context);
	void *issue_context;

	/*
	 * Queued work gets put on here after commit.
	 */
	struct workqueue_struct *wq;
	/*
	 * We have to grab these before the commit_op to avoid a race
	 * condition.
	 */
spin_lock_irq(&b->lock);
list_splice_init(&b->work_items, &work_items);
bio_list_merge_init(&bios, &b->bios);
b->commit_scheduled = false;
spin_unlock_irq(&b->lock);
r = b->commit_op(b->commit_context);
list_for_each_entry_safe(ws, tmp, &work_items, entry) {
k = container_of(ws, struct continuation, ws);
k->input = r;
INIT_LIST_HEAD(&ws->entry); /* to avoid a WARN_ON */
queue_work(b->wq, ws);
}
	while ((bio = bio_list_pop(&bios))) {
		if (r) {
			bio->bi_status = r;
			bio_endio(bio);
		} else
			b->issue_op(bio, b->issue_context);
	}
}
/*
 * There are a couple of places where we let a bio run, but want to do some
 * work before calling its endio function.  We do this by temporarily
 * changing the endio fn.
 */
struct dm_hook_info {
	bio_end_io_t *bi_end_io;
};
/*
 * The block size of the device holding cache data must be
 * between 32KB and 1GB.
 */
#define DATA_DEV_BLOCK_SIZE_MIN_SECTORS (32 * 1024 >> SECTOR_SHIFT)
#define DATA_DEV_BLOCK_SIZE_MAX_SECTORS (1024 * 1024 * 1024 >> SECTOR_SHIFT)
enum cache_metadata_mode {
CM_WRITE, /* metadata may be changed */
CM_READ_ONLY, /* metadata may not be changed */
CM_FAIL
};
enum cache_io_mode {
	/*
	 * Data is written to cached blocks only.  These blocks are marked
	 * dirty.  If you lose the cache device you will lose data.
	 * Potential performance increase for both reads and writes.
	 */
CM_IO_WRITEBACK,
	/*
	 * Data is written to both cache and origin.  Blocks are never
	 * dirty.  Potential performance benefit for reads only.
	 */
CM_IO_WRITETHROUGH,
	/*
	 * A degraded mode useful for various cache coherency situations
	 * (eg, rolling back snapshots).  Reads and writes always go to the
	 * origin.  If a write goes to a cached oblock, then the cache
	 * block is invalidated.
	 */
CM_IO_PASSTHROUGH
};
	/*
	 * The number of in flight migrations that are performing
	 * background io, eg, promotion, writeback.
	 */
atomic_t nr_io_migrations;
struct bio_list deferred_bios;
struct rw_semaphore quiesce_lock;
	/*
	 * origin_blocks entries, discarded if set.
	 */
	dm_dblock_t discard_nr_blocks;
	unsigned long *discard_bitset;
uint32_t discard_block_size; /* a power of 2 times sectors per block */
	/*
	 * Rather than reconstructing the table line for the status we just
	 * save it and regurgitate.
	 */
	unsigned int nr_ctr_args;
	const char **ctr_args;
	/*
	 * Cache_size entries.  Set bits indicate blocks mapped beyond the
	 * target length, which are marked for invalidation.
	 */
	unsigned long *invalid_bitset;
};
/*
 * We have two lock levels: level 0 prevents WRITEs, and level 1 prevents
 * *both* READs and WRITEs.
 */
#define WRITE_LOCK_LEVEL 0
#define READ_WRITE_LOCK_LEVEL 1
/*
 * These two are called after migrations to force the policy
 * and dirty bitset to be in sync.
 */
static void force_set_dirty(struct cache *cache, dm_cblock_t cblock)
{
	if (!test_and_set_bit(from_cblock(cblock), cache->dirty_bitset))
		atomic_inc(&cache->nr_dirty);

	policy_set_dirty(cache->policy, cblock);
}
static void force_clear_dirty(struct cache *cache, dm_cblock_t cblock)
{
	if (test_and_clear_bit(from_cblock(cblock), cache->dirty_bitset)) {
		if (atomic_dec_return(&cache->nr_dirty) == 0)
			dm_table_event(cache->ti->table);
	}

	policy_clear_dirty(cache->policy, cblock);
}
static void remap_to_origin_clear_discard(struct cache *cache, struct bio *bio,
					  dm_oblock_t oblock)
{ // FIXME: check_if_tick_bio_needed() is called way too much through this interface
check_if_tick_bio_needed(cache, bio);
	remap_to_origin(cache, bio);
	if (bio_data_dir(bio) == WRITE)
clear_discard(cache, oblock_to_dblock(cache, oblock));
}
/*
 * When running in writethrough mode we need to send writes to clean blocks
 * to both the cache and origin devices.  Clone the bio and send them in
 * parallel.
 */
static void remap_to_origin_and_cache(struct cache *cache, struct bio *bio,
				      dm_oblock_t oblock, dm_cblock_t cblock)
{
	struct bio *origin_bio = bio_alloc_clone(cache->origin_dev->bdev, bio,
						 GFP_NOIO, &cache->bs);
BUG_ON(!origin_bio);
bio_chain(origin_bio, bio);
if (bio_data_dir(origin_bio) == WRITE)
clear_discard(cache, oblock_to_dblock(cache, oblock));
submit_bio(origin_bio);
	if (get_cache_mode(cache) >= CM_READ_ONLY)
		return;

	DMERR_LIMIT("%s: aborting current metadata transaction", dev_name);
	if (dm_cache_metadata_abort(cache->cmd)) {
DMERR("%s: failed to abort metadata transaction", dev_name);
set_cache_mode(cache, CM_FAIL);
}
if (dm_cache_metadata_set_needs_check(cache->cmd)) {
DMERR("%s: failed to set 'needs_check' flag in metadata", dev_name);
set_cache_mode(cache, CM_FAIL);
}
}
	/*
	 * The overwrite bio is part of the copy operation, as such it does
	 * not set/clear discard or dirty flags.
	 */
	if (mg->op->op == POLICY_PROMOTE)
		remap_to_cache(mg->cache, bio, mg->op->cblock);
	else
		remap_to_origin(mg->cache, bio);
	if (mg->overwrite_bio) {
		if (success)
			force_set_dirty(cache, cblock);
		else if (mg->k.input)
			mg->overwrite_bio->bi_status = mg->k.input;
		else
			mg->overwrite_bio->bi_status = BLK_STS_IOERR;

		bio_endio(mg->overwrite_bio);
	} else {
		if (success)
			force_clear_dirty(cache, cblock);

		dec_io_migrations(cache);
	}
	break;
case POLICY_DEMOTE:
	/*
	 * We clear dirty here to update the nr_dirty counter.
	 */
	if (success)
force_clear_dirty(cache, cblock);
policy_complete_background_work(cache->policy, op, success);
	dec_io_migrations(cache);
	break;

case POLICY_WRITEBACK:
	if (success)
		force_clear_dirty(cache, cblock);

	policy_complete_background_work(cache->policy, op, success);
	dec_io_migrations(cache);
	break;
}
bio_list_init(&bios);
if (mg->cell) {
	if (dm_cell_unlock_v2(cache->prison, mg->cell, &bios))
free_prison_cell(cache, mg->cell);
}
case POLICY_DEMOTE:
	r = dm_cache_remove_mapping(cache->cmd, op->cblock);
	if (r) {
DMERR_LIMIT("%s: migration failed; couldn't update on disk metadata",
cache_device_name(cache));
metadata_operation_failed(cache, "dm_cache_remove_mapping", r);
		mg_complete(mg, false);
		return;
}
	/*
	 * It would be nice if we only had to commit when a REQ_FLUSH
	 * comes through.  But there's one scenario that we have to
	 * look out for:
	 *
	 * - an oblock x is mapped to a cache block
	 * - demotion occurs
	 * - the cache block gets reallocated and overwritten
	 * - crash
	 *
	 * When we recover, because there was no commit the cache will
	 * roll back to having the data for oblock x in the cache block.
	 * But the cache block has since been overwritten, so it'll end
	 * up pointing to data that was never in 'x' during the history
	 * of the device.
	 *
	 * To avoid this issue we require a commit as part of the
	 * demotion operation.
	 */
init_continuation(&mg->k, mg_success);
continue_after_commit(&cache->committer, &mg->k);
		schedule_commit(&cache->committer);
		break;

	case POLICY_WRITEBACK:
		mg_complete(mg, true);
		break;
}
}
	/*
	 * Did the copy succeed?
	 */
	if (mg->k.input)
mg_complete(mg, false);
	else {
		/*
		 * Now we want the lock to prevent both reads and writes.
		 */
r = dm_cell_lock_promote_v2(mg->cache->prison, mg->cell,
					    READ_WRITE_LOCK_LEVEL);
		if (r < 0)
mg_complete(mg, false);
	if (mg->overwrite_bio) {
		/*
		 * No exclusive lock was held when we last checked if the bio
		 * was optimisable.  So we have to check again in case things
		 * have changed (eg, the block may no longer be discarded).
		 */
		if (!optimisable_bio(mg->cache, mg->overwrite_bio, mg->op->oblock)) {
			/*
			 * Fallback to a real full copy after doing some tidying up.
			 */
			bool rb = bio_detain_shared(mg->cache, mg->op->oblock, mg->overwrite_bio);
BUG_ON(rb); /* An exclusive lock must _not_ be held for this block */
mg->overwrite_bio = NULL;
inc_io_migrations(mg->cache);
mg_full_copy(ws); return;
}
		/*
		 * It's safe to do this here, even though it's new data
		 * because all IO has been locked out of the block.
		 *
		 * mg_lock_writes() already took READ_WRITE_LOCK_LEVEL
		 * so we're _not_ using mg_upgrade_lock() as the continuation.
		 */
overwrite(mg, mg_update_metadata_after_copy);
	/*
	 * Prevent writes to the block, but allow reads to continue.
	 * Unless we're using an overwrite bio, in which case we lock
	 * everything.
	 */
build_key(mg->op->oblock, oblock_succ(mg->op->oblock), &key);
r = dm_cell_lock_v2(cache->prison, &key,
mg->overwrite_bio ? READ_WRITE_LOCK_LEVEL : WRITE_LOCK_LEVEL,
			    prealloc, &mg->cell);
	if (r < 0) {
free_prison_cell(cache, prealloc);
		mg_complete(mg, false);
		return r;
}
if (mg->cell != prealloc)
free_prison_cell(cache, prealloc);
if (r == 0)
		mg_copy(&mg->k.ws);
	else
quiesce(mg, mg_copy);
static int invalidate_cblock(struct cache *cache, dm_cblock_t cblock)
{
	int r;
	r = policy_invalidate_mapping(cache->policy, cblock);
	if (!r) {
		r = dm_cache_remove_mapping(cache->cmd, cblock);
		if (r) {
DMERR_LIMIT("%s: invalidation failed; couldn't update on disk metadata",
cache_device_name(cache));
metadata_operation_failed(cache, "dm_cache_remove_mapping", r);
}
if (mg->cell != prealloc)
free_prison_cell(cache, prealloc);
if (r)
quiesce(mg, invalidate_remove);
	else {
		/*
		 * We can't call invalidate_remove() directly here because we
		 * might still be in request context.
		 */
*/
init_continuation(&mg->k, invalidate_remove);
queue_work(cache->wq, &mg->k.ws);
}
static int map_bio(struct cache *cache, struct bio *bio, dm_oblock_t block, bool *commit_needed)
{
	int r, data_dir;
	bool rb, background_queued;
dm_cblock_t cblock;
*commit_needed = false;
	rb = bio_detain_shared(cache, block, bio);
	if (!rb) {
		/*
		 * An exclusive lock is held for this block, so we have to
		 * wait.  We set the commit_needed flag so the current
		 * transaction will be committed asap, allowing this lock
		 * to be dropped.
		 */
		*commit_needed = true;
		return DM_MAPIO_SUBMITTED;
	}
}
data_dir = bio_data_dir(bio);
	if (optimisable_bio(cache, bio, block)) {
		struct policy_work *op = NULL;

		r = policy_lookup_with_work(cache->policy, block, &cblock, data_dir, true, &op);
		if (unlikely(r && r != -ENOENT)) {
DMERR_LIMIT("%s: policy_lookup_with_work() failed with r = %d",
cache_device_name(cache), r);
			bio_io_error(bio);
			return DM_MAPIO_SUBMITTED;
}
if (r == -ENOENT && op) {
bio_drop_shared_lock(cache, bio);
BUG_ON(op->op != POLICY_PROMOTE);
			mg_start(cache, op, bio);
			return DM_MAPIO_SUBMITTED;
}
} else {
		r = policy_lookup(cache->policy, block, &cblock, data_dir, false, &background_queued);
		if (unlikely(r && r != -ENOENT)) {
DMERR_LIMIT("%s: policy_lookup() failed with r = %d",
cache_device_name(cache), r);
			bio_io_error(bio);
			return DM_MAPIO_SUBMITTED;
}
if (background_queued)
wake_migration_worker(cache);
}
	if (r == -ENOENT) {
		struct per_bio_data *pb = get_per_bio_data(bio);

		/*
		 * Miss.
		 */
		inc_miss_counter(cache, bio);
		if (pb->req_nr == 0) {
accounted_begin(cache, bio);
remap_to_origin_clear_discard(cache, bio, block);
		} else {
			/*
			 * This is a duplicate writethrough io that is no
			 * longer needed because the block has been demoted.
			 */
			bio_endio(bio);
			return DM_MAPIO_SUBMITTED;
}
	} else {
		/*
		 * Hit.
		 */
inc_hit_counter(cache, bio);
		/*
		 * Passthrough always maps to the origin, invalidating any
		 * cache blocks that are written to.
		 */
		if (passthrough_mode(cache)) {
			if (bio_data_dir(bio) == WRITE) {
bio_drop_shared_lock(cache, bio);
atomic_inc(&cache->stats.demotion);
invalidate_start(cache, cblock, block, bio);
} else
remap_to_origin_clear_discard(cache, bio, block);
		} else {
			if (bio_data_dir(bio) == WRITE && writethrough_mode(cache) &&
!is_dirty(cache, cblock)) {
remap_to_origin_and_cache(cache, bio, block, cblock);
accounted_begin(cache, bio);
} else
remap_to_cache_dirty(cache, bio, block, cblock);
}
}
	/*
	 * dm core turns FUA requests into a separate payload and FLUSH req.
	 */
	if (bio->bi_opf & REQ_FUA) {
		/*
		 * issue_after_commit will call accounted_begin a second time.  So
		 * we call accounted_complete() to avoid double accounting.
		 */
accounted_complete(cache, bio);
issue_after_commit(&cache->committer, bio);
		*commit_needed = true;
		return DM_MAPIO_SUBMITTED;
}
return DM_MAPIO_REMAPPED;
}
static bool process_bio(struct cache *cache, struct bio *bio)
{
	bool commit_needed;
if (map_bio(cache, bio, get_bio_block(cache, bio), &commit_needed) == DM_MAPIO_REMAPPED)
dm_submit_bio_remap(bio, NULL);
return commit_needed;
}
/*
 * A non-zero return indicates read_only or fail_io mode.
 */
static int commit(struct cache *cache, bool clean_shutdown)
{
	int r;

	if (get_cache_mode(cache) >= CM_READ_ONLY)
		return -EINVAL;
atomic_inc(&cache->stats.commit_count);
	r = dm_cache_commit(cache->cmd, clean_shutdown);
	if (r)
metadata_operation_failed(cache, "dm_cache_commit", r);
return r;
}
/*
 * Used by the batcher.
 */
static blk_status_t commit_op(void *context)
{
	struct cache *cache = context;

	if (dm_cache_changed_this_transaction(cache->cmd))
		return errno_to_blk_status(commit(cache, false));

	return 0;
}
/*
 * FIXME: do we need to lock the region?  Or can we just assume the
 * user won't be so foolish as to issue discard concurrently with
 * other IO?
 */
calc_discard_block_range(cache, bio, &b, &e);
while (b != e) {
set_discard(cache, b);
b = to_dblock(from_dblock(b) + 1);
}
if (cache->features.discard_passdown) {
remap_to_origin(cache, bio);
dm_submit_bio_remap(bio, NULL);
} else
bio_endio(bio);
/*
 * We want to commit periodically so that not too much
 * unwritten metadata builds up.
 */
static void do_waker(struct work_struct *ws)
{
	struct cache *cache = container_of(to_delayed_work(ws), struct cache, waker);
/*
 * This function gets called on the error paths of the constructor, so we
 * have to cope with a partially initialised struct.
 */
static void __destroy(struct cache *cache)
{
mempool_exit(&cache->migration_pool);
if (cache->prison)
dm_bio_prison_destroy_v2(cache->prison);
if (cache->wq)
destroy_workqueue(cache->wq);
if (cache->dirty_bitset)
free_bitset(cache->dirty_bitset);
if (cache->discard_bitset)
free_bitset(cache->discard_bitset);
if (cache->invalid_bitset)
free_bitset(cache->invalid_bitset);
if (cache->copier)
dm_kcopyd_client_destroy(cache->copier);
if (cache->cmd)
dm_cache_metadata_close(cache->cmd);
if (cache->metadata_dev)
dm_put_device(cache->ti, cache->metadata_dev);
if (cache->origin_dev)
dm_put_device(cache->ti, cache->origin_dev);
if (cache->cache_dev)
dm_put_device(cache->ti, cache->cache_dev);
if (cache->policy)
dm_cache_policy_destroy(cache->policy);
/*
 * Construct a cache device mapping.
 *
 * cache <metadata dev> <cache dev> <origin dev> <block size>
 *	 <#feature args> [<feature arg>]*
 *	 <policy> <#policy args> [<policy arg>]*
 *
 * metadata dev	   : fast device holding the persistent metadata
 * cache dev	   : fast device holding cached data blocks
 * origin dev	   : slow device holding original data blocks
 * block size	   : cache unit size in sectors
 *
 * #feature args   : number of feature arguments passed
 * feature args	   : writethrough.  (The default is writeback.)
 *
 * policy	   : the replacement policy to use
 * #policy args	   : an even number of policy arguments corresponding
 *		     to key/value pairs passed to the policy
 * policy args	   : key/value pairs passed to the policy
 *		     E.g. 'sequential_threshold 1024'
 *		     See cache-policies.txt for details.
 *
 * Optional feature arguments are:
 *   writethrough  : write through caching that prohibits cache block
 *		     content from being different from origin block content.
 *		     Without this argument, the default behaviour is to write
 *		     back cache block contents later for performance reasons,
 *		     so they may differ from the corresponding origin blocks.
 */
struct cache_args {
	struct dm_target *ti;
struct dm_dev *metadata_dev;
struct dm_dev *cache_dev;
sector_t cache_sectors;
struct dm_dev *origin_dev;
uint32_t block_size;
	const char *policy_name;
	int policy_argc;
	const char **policy_argv;
struct cache_features features;
};
static void destroy_cache_args(struct cache_args *ca)
{
	if (ca->metadata_dev)
dm_put_device(ca->ti, ca->metadata_dev);
if (ca->cache_dev)
dm_put_device(ca->ti, ca->cache_dev);
if (ca->origin_dev)
dm_put_device(ca->ti, ca->origin_dev);
r = dm_get_device(ca->ti, dm_shift_arg(as),
			  BLK_OPEN_READ | BLK_OPEN_WRITE, &ca->metadata_dev);
	if (r) {
		*error = "Error opening metadata device";
		return r;
}
	metadata_dev_size = get_dev_size(ca->metadata_dev);
	if (metadata_dev_size > DM_CACHE_METADATA_MAX_SECTORS_WARNING)
		DMWARN("Metadata device %pg is larger than %u sectors: excess space will not be used.",
		       ca->metadata_dev->bdev, DM_CACHE_METADATA_MAX_SECTORS);
/*
 * We want the discard block size to be at least the size of the cache
 * block size and have no more than 2^14 discard blocks across the origin.
 */
#define MAX_DISCARD_BLOCKS (1 << 14)
if (nr_blocks > (1 << 20) && cache->cache_size != size)
	DMWARN_LIMIT("You have created a cache device with a lot of individual cache blocks (%llu)\n"
		     "All these mappings can consume a lot of kernel memory, and take some time to read/write.\n"
		     "Please consider increasing the cache block size to reduce the overall cache block count.",
		     (unsigned long long) nr_blocks);
	int r;
	bool commit_needed;
dm_oblock_t block = get_bio_block(cache, bio);
	init_per_bio_data(bio);
	if (unlikely(from_oblock(block) >= from_oblock(cache->origin_blocks))) {
		/*
		 * This can only occur if the io goes to a partial block at
		 * the end of the origin device.  We don't cache these.
		 * Just remap to the origin and carry on.
		 */
remap_to_origin(cache, bio);
		accounted_begin(cache, bio);
		return DM_MAPIO_REMAPPED;
}
if (discard_or_flush(bio)) {
		defer_bio(cache, bio);
		return DM_MAPIO_SUBMITTED;
}
	r = map_bio(cache, bio, block, &commit_needed);
	if (commit_needed)
		schedule_commit(&cache->committer);

	return r;
}
static int write_dirty_bitset(struct cache *cache)
{
	int r;

	if (get_cache_mode(cache) >= CM_READ_ONLY)
		return -EINVAL;

	r = dm_cache_set_dirty_bits(cache->cmd, from_cblock(cache->cache_size), cache->dirty_bitset);
	if (r)
metadata_operation_failed(cache, "dm_cache_set_dirty_bits", r);
return r;
}
static int write_discard_bitset(struct cache *cache)
{
	unsigned int i, r;

	if (get_cache_mode(cache) >= CM_READ_ONLY)
		return -EINVAL;

	r = dm_cache_discard_bitset_resize(cache->cmd, cache->discard_block_size,
					   cache->discard_nr_blocks);
	if (r) {
DMERR("%s: could not resize on-disk discard bitset", cache_device_name(cache));
		metadata_operation_failed(cache, "dm_cache_discard_bitset_resize", r);
		return r;
}
for (i = 0; i < from_dblock(cache->discard_nr_blocks); i++) {