
dmdedup3.19's Introduction

dm-dedup

Device-mapper's dedup target provides transparent data deduplication of block devices. Every write coming to a dm-dedup instance is deduplicated against previously written data. For datasets that contain many duplicates scattered across the disk (e.g., collections of virtual machine disk images, backups, home directory servers), deduplication provides significant space savings.

Construction Parameters

<meta_dev> <data_dev> <block_size>
<hash_algo> <backend> <flushrq>

<meta_dev> This is the device where dm-dedup's metadata resides. Metadata typically includes the hash index, block mappings, and reference counters. It should be specified as a path, like "/dev/sdaX".

<data_dev> This is the device where the actual data blocks are stored. It should be specified as a path, like "/dev/sdaX".

<block_size> This is the size of a single block on the data device, in bytes. A block is both the unit of deduplication and the unit of storage. Supported values are between 4096 and 1048576 (1MB) and must be a power of two.
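
For illustration, the stated constraint can be expressed as a small check like the following (a sketch only, not the target's actual validation code):

#include <stdbool.h>
#include <stdint.h>

/* Sketch: <block_size> must be a power of two between 4096 and 1048576. */
static bool block_size_valid(uint32_t bs)
{
    /* (bs & (bs - 1)) == 0 holds exactly for powers of two */
    return bs >= 4096 && bs <= 1048576 && (bs & (bs - 1)) == 0;
}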

<hash_algo> This specifies which hashing algorithm dm-dedup will use for detecting identical blocks, e.g., "md5" or "sha256". Any hash algorithm supported by the running kernel can be used (see "/proc/crypto" file).

<backend> This is the backend that dm-dedup will use to store metadata. Currently supported values are "cowbtree" and "inram". The cowbtree backend uses persistent Copy-on-Write (COW) B-trees to store metadata. The inram backend stores all metadata in RAM, which is lost after a system reboot; consequently, the inram backend should typically be used only for experiments. Note that although the inram backend does not use a metadata device, the <meta_dev> parameter must still be specified on the command line.

<flushrq> This parameter specifies how many writes to the target should occur before dm-dedup flushes its buffered metadata to the metadata device. In other words, in the event of a power failure, one can lose up to this number of most recent writes. Note that dm-dedup also flushes its metadata when it sees the REQ_FLUSH or REQ_FUA flags in I/O requests. In particular, these flags are set by file systems at the appropriate points in time to ensure file-system consistency.

During construction, dm-dedup checks whether the first 4096 bytes of the metadata device are equal to zero. If they are, a completely new dm-dedup instance is initialized, and the metadata and data devices are considered "empty". If, however, the first 4096 bytes are not zero, dm-dedup will try to reconstruct the target based on the information currently on the metadata and data devices.
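
As an illustration, the "is this a fresh metadata device?" test amounts to the following user-space sketch (a hypothetical helper; the target performs the equivalent check internally):

#include <stdio.h>
#include <string.h>

/* Sketch: return 1 if the first 4096 bytes of the device are zero
 * (fresh dm-dedup instance), 0 if not (reconstruction is attempted),
 * -1 on I/O error. */
static int metadata_is_empty(const char *path)
{
    unsigned char buf[4096];
    static const unsigned char zero[4096];
    FILE *f = fopen(path, "rb");

    if (!f || fread(buf, 1, sizeof(buf), f) != sizeof(buf)) {
        if (f)
            fclose(f);
        return -1;
    }
    fclose(f);
    return memcmp(buf, zero, sizeof(buf)) == 0;
}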

Theory of Operation

We provide an overview of the dm-dedup design in this section. The detailed design and a performance evaluation can be found in the following paper:

V. Tarasov, D. Jain, G. Kuenning, S. Mandal, K. Palanisami, P. Shilane, and S. Trehan. Dmdedup: Device Mapper Target for Data Deduplication. Ottawa Linux Symposium, 2014. http://www.fsl.cs.stonybrook.edu/docs/ols-dmdedup/dmdedup-ols14.pdf

To quickly identify duplicates, dm-dedup maintains an index of hashes for all written blocks. A block is the user-configurable unit of deduplication and storage. The dm-dedup index, along with other deduplication metadata, resides on a separate block device, which we refer to as the metadata device. The blocks themselves are stored on the data device. Although the metadata device can be any block device, e.g., an HDD or its partition, for higher performance we recommend using an SSD to store metadata.

For every block that is written to the target, dm-dedup computes its hash using the <hash_algo> algorithm. It then looks up the resulting hash in the hash index. If a match is found, the write is considered to be a duplicate.

Dm-dedup's hash index is essentially a mapping between a hash and the physical address of a block on the data device (PBN). In addition, dm-dedup maintains a mapping between logical block addresses on the target and physical block addresses on the data device (the LBN-PBN mapping). When a duplicate is detected, there is no need to write the actual data to disk; only the LBN-PBN mapping is updated.

When non-duplicate data is written, a new physical block on the data device is allocated and written, and the corresponding hash is added to the index.

On read, the LBN-PBN mapping allows dm-dedup to quickly locate the required block on the data device. If an LBN has never been written, a zero block is returned.
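
As a rough illustration of this write path, here is a minimal user-space simulation. All names here (toy_hash, dedup_write, the in-memory arrays) are invented for the sketch; the real target uses the kernel crypto API for <hash_algo> and a pluggable metadata backend for the two mappings:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NBLOCKS 16
#define BLKSZ 8 /* toy block size; the real target uses 4096..1048576 */

static char data_dev[NBLOCKS][BLKSZ];   /* stands in for the data device */
static uint64_t hash_index[NBLOCKS];    /* hash -> PBN ("index") */
static int index_used[NBLOCKS];
static int64_t lbn_pbn[NBLOCKS];        /* LBN -> PBN mapping */
static int next_pbn;

/* Toy stand-in for <hash_algo>; the real target uses md5, sha256, etc. */
static uint64_t toy_hash(const char *buf)
{
    uint64_t h = 1469598103934665603ULL; /* FNV-1a */
    int i;

    for (i = 0; i < BLKSZ; i++)
        h = (h ^ (unsigned char)buf[i]) * 1099511628211ULL;
    return h;
}

static void dedup_write(int lbn, const char *buf)
{
    uint64_t h = toy_hash(buf);
    int pbn;

    for (pbn = 0; pbn < next_pbn; pbn++) {
        if (index_used[pbn] && hash_index[pbn] == h) {
            lbn_pbn[lbn] = pbn; /* duplicate: update the mapping only */
            return;
        }
    }
    /* unique block: allocate a new PBN, write the data, index the hash */
    memcpy(data_dev[next_pbn], buf, BLKSZ);
    hash_index[next_pbn] = h;
    index_used[next_pbn] = 1;
    lbn_pbn[lbn] = next_pbn++;
}

int main(void)
{
    int i;

    for (i = 0; i < NBLOCKS; i++)
        lbn_pbn[i] = -1; /* unwritten LBNs read back as zero blocks */
    dedup_write(0, "AAAAAAA");
    dedup_write(1, "AAAAAAA"); /* duplicate of LBN 0 */
    dedup_write(2, "BBBBBBB");
    printf("3 writes consumed %d physical blocks\n", next_pbn); /* prints 2 */
    return 0;
}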

Target Size

When using device-mapper, one needs to specify the target size in advance. To get deduplication benefits, the target size should be larger than the data device size (otherwise one could just use the data device directly). Because a dataset's deduplication ratio is not known in advance, one has to use an estimate.

Usually, a deduplication ratio of up to 1.5 is a safe assumption for a primary dataset. For backup datasets, however, the deduplication ratio can be as high as 100. For example, assuming a ratio of 1.5, a 100GiB data device would back a 150GiB target.

Estimating the deduplication ratio of an existing dataset using the fs-hasher package from http://tracer.filesystems.org/ can provide a good starting point for a specific dataset.

If one over-estimates the deduplication ratio, the data device can run out of free space. This situation can be monitored using the dmsetup status command (described below). Once the data device is full, dm-dedup stops accepting writes until free space becomes available on the data device again.

Backends

Dm-dedup's core logic treats the hash index and the LBN-PBN mapping as plain key-value stores with an extended API, described in

drivers/md/dm-dedup-backend.h

Different backends can provide this key-value store API. We implemented a cowbtree backend that uses device-mapper's persistent metadata framework to store metadata consistently. Details on this framework and its on-disk layout can be found here:

Documentation/device-mapper/persistent-data.txt

By using persistent COW B-trees, the cowbtree backend guarantees metadata consistency in the event of a power failure.

In addition, we provide an inram backend that stores all metadata in RAM. Hash tables with linear probing are used for storing the index and the LBN-PBN mapping. The inram backend does not store metadata persistently and should usually be used only for experiments.
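
Both backends implement the same small key-value store interface. As a rough illustration (the member names below are inferred from the kvs_lookup() call quoted in the issues further down; the authoritative definition is in drivers/md/dm-dedup-backend.h):

#include <stdint.h>

/* Illustrative reduction of the backend key-value store API; the real
 * interface in dm-dedup-backend.h has more operations and kernel types. */
struct kvstore {
    int (*kvs_lookup)(struct kvstore *kvs, void *key, int32_t ksize,
                      void *value, int32_t *vsize);
    int (*kvs_insert)(struct kvstore *kvs, void *key, int32_t ksize,
                      void *value, int32_t vsize);
    int (*kvs_delete)(struct kvstore *kvs, void *key, int32_t ksize);
};

/* The core keeps two such stores: one for the hash index (hash -> PBN)
 * and one for the LBN -> PBN mapping. */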

Dmsetup Status

Dm-dedup exports various statistics via the dmsetup status command. The line returned by dmsetup status contains the following values, in this order:

<name> <start> <end> <type>
<dtotal> <dfree> <dused> <dactual> <dblock> <ddisk> <mddisk>
<writes> <uniqwrites> <dupwrites> <readonwrites> <overwrites> <newwrites>

<name>, <start>, <end>, and <type> are generic fields printed by the dmsetup tool for any target.

<dtotal> - total number of blocks on the data device
<dfree> - number of free (unallocated) blocks on the data device
<dused> - number of used (allocated) blocks on the data device
<dactual> - number of allocated logical blocks (were written at least once)
<dblock> - block size in bytes
<ddisk> - data disk's major:minor
<mddisk> - metadata disk's major:minor
<writes> - total number of writes to the target
<uniqwrites> - the number of writes that weren't duplicates (were unique)
<dupwrites> - the number of writes that were duplicates
<readonwrites> - the number of times dm-dedup had to read data from the data device because a write was misaligned (read-on-write effect)
<overwrites> - the number of writes to a logical block that was written before at least once
<newwrites> - the number of writes to a logical address that was not written before even once

To compute the deduplication ratio, one needs to divide <dactual> by <dused>.
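
For example (illustrative numbers), a status line reporting <dactual> = 1048576 and <dused> = 852809 corresponds to a deduplication ratio of 1048576 / 852809 ≈ 1.23.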

Example

Decide on metadata and data devices:

META_DEV=/dev/sdX

DATA_DEV=/dev/sdY

Compute target size assuming 1.5 dedup ratio:

DATA_DEV_SIZE=$(blockdev --getsz $DATA_DEV)

TARGET_SIZE=$(expr $DATA_DEV_SIZE \* 15 / 10)

Reset metadata device:

dd if=/dev/zero of=$META_DEV bs=4096 count=1

Set up the target:

echo "0 $TARGET_SIZE dedup $META_DEV $DATA_DEV 4096 md5 cowbtree 100" | dmsetup create mydedup
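
Once created, the deduplicated device appears as /dev/mapper/mydedup and can be used like any other block device, e.g., formatted with a file system. Its statistics can then be inspected with dmsetup status mydedup, as described above.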

Authors

dm-dedup was developed in the File systems and Storage Lab (FSL) at Stony Brook University's Computer Science Department, in collaboration with Harvey Mudd College and EMC.

Key people involved in the project were Vasily Tarasov, Geoff Kuenning, Sonam Mandal, Karthikeyani Palanisami, Philip Shilane, Sagar Trehan, and Erez Zadok.

We also acknowledge the help of several students involved in the deduplication project: Teo Asinari, Deepak Jain, Mandar Joshi, Atul Karmarkar, Meg O'Keefe, Gary Lent, Amar Mudrankit, Ujwala Tulshigiri, and Nabil Zaman.


dmdedup3.19's Issues

Dmdedup installation issue

Hi there
I'm trying to install dmdedup on an Ubuntu desktop virtual machine. I'm using VirtualBox and have tried different versions with different kernel versions.
When I try to compile dmdedup, I get errors related to syscall_32.tbl and the persistent-data header files.
I installed the md driver but am still having some issues with syscall_32.tbl.

Can anyone tell me if they have faced this issue before and, if so, how to solve it? Alternatively, could someone who has successfully compiled and run dmdedup help me with a step-by-step guide, the system requirements, and any kernel configuration that needs to be enabled or disabled to run it?

thanks

How to use and run on UBUNTU

Hi all,
I am trying to do some I/O operations on a disk that should have no duplicated data while storing to disk. Here is the info:
I am able to compile the module and insert it on an OS running a linux-4.4 kernel. Afterwards I created the target with dmsetup create mydedup, and I am able to see the /dev/mapper/mydedup and /dev/dm-0 device nodes. How do I perform data operations now? I tried mounting the mydedup block device like this:

mount /dev/mapper/mydedup /mnt

It fails with a "no filesystem" error; I tried /dev/dm-0 and /dev/sdbX as well.
I would appreciate any pointers.

Thanks in advance,
Venkatesh.

free_pages NULL pointer dereference during mkfs.xfs

mkfs.xfs crashes dmdedup, and we have been able to trace the BUG to the free_pages() call in the my_endio() function in dm-dedup-rw.c. You can see the call trace below:

[ 499.446133] Call Trace:
[ 499.446652]
[ 499.447070] [] ? free_pages.part.66+0x40/0x50
[ 499.448394] [] free_pages+0x13/0x20
[ 499.449450] [] my_endio+0x4d/0x70 [dm_dedup]
[ 499.450658] [] bio_endio+0x5b/0xa0
[ 499.451700] [] blk_update_request+0x90/0x360
[ 499.452915] [] scsi_end_request+0x34/0x1e0
[ 499.454107] [] scsi_io_completion+0x119/0x6c0
[ 499.455359] [] scsi_finish_command+0xcf/0x130
[ 499.456577] [] scsi_softirq_done+0x137/0x160
[ 499.457743] [] blk_done_softirq+0x90/0xc0
[ 499.458859] [] __do_softirq+0xf4/0x2d0
[ 499.459992] [] irq_exit+0x125/0x130
[ 499.461021] [] do_IRQ+0x5a/0xf0
[ 499.462002] [] common_interrupt+0x6d/0x6d

The following is the code corresponding to the crash:

if (rw == WRITE || rw == READ) {
    bv = bio_iovec(clone);
    if (bv.bv_page) {
        free_pages((unsigned long)page_address(bv.bv_page), 0);
        bv.bv_page = NULL;
    }
}

I tried printing the address of the clone and its corresponding page for better clarity; the clone seems to be allocated at the same address each time, and its page alternates between two addresses:

[ Time ] READ or WRITE clone clone_address page page_address

[ 584.827534] WRITE clone ffff88007b0d86c0 page ffffea0001de5f80
[ 584.830866] WRITE clone ffff88007b0d86c0 page ffffea0001dbd940
[ 584.834205] WRITE clone ffff88007b0d86c0 page ffffea0001de5f80
[ 584.837604] WRITE clone ffff88007b0d86c0 page ffffea0001dbd940
[ 584.840956] WRITE clone ffff88007b0d86c0 page ffffea0001de5f80
[ 584.844387] WRITE clone ffff880077888d80 page ffffea0001de0480
[ 584.847775] WRITE clone ffff88007b0d86c0 page ffffea0001dbd940
[ 584.851202] WRITE clone ffff88007b0d86c0 page ffffea0001de0480
[ 584.854625] WRITE clone ffff88007b0d86c0 page ffffea0001dbd940
[ 584.857990] WRITE clone ffff88007b0d86c0 page ffffea0001de0480
[ 584.862667] WRITE clone ffff88007b0d86c0 page (null)
[ 584.865625] BUG: unable to handle kernel NULL pointer dereference at 000000000000001c

This seems to be a RACE condition, and I can avoid the BUG if I comment out this part of the code (but that obviously leaks memory). Locking is another solution, but I'm trying to understand how this situation becomes possible in the first place.

We construct only one clone per bio in dm-dedup-rw.c, it has only one page (since the total size of a chunk will not exceed 4KB), and it is destroyed in the my_endio() function. How is it possible that two threads are trying to free the same page? Any help would be much appreciated.

How to build this driver?

How to build this driver?
Some errors: dmdedup3.19-master/dm-dedup-cbt.c:14:38: fatal error: persistent-data/dm-btree.h: No such file or directory

poor random write performance

I am evaluating dm-dedup on an NVMe device (on top of LVM) on kernel 3.18.25-18.el6.x86_64 (I had to fix a compilation error related to submitting bios). Both the metadata and data devices are logical volumes on the same NVMe device. I create the target as follows:

echo "0 $TARGET_SIZE dedup $META_DEV $DATA_DEV 4096 md5 cowbtree 0" | dmsetup create mydedup

Where ${TARGET_SIZE} is 150% of the size of ${DATA_DEV}. I then populate the first 4 GB of the mydedup target as follows:

fio --filename="${TARGET}" --ioengine=libaio --direct=1 --name=foo --blocksize=1m --filesize=4G --rw=write --dedupe_percentage=30

And then do a short random write test as follows:

fio --filename="${TARGET}" --ioengine=libaio --direct=1 --group_reporting --time_based=1 --name=foo --blocksize=4k --runtime=60 --filesize=4G --rw=randwrite --dedupe_percentage=30 --iodepth=64

I get 2.8K IOPS, while writing directly to ${DATA_DEV} achieves more than 42K IOPS. In the dm-dedup case, the CPU is only slightly used (15%) and the NVMe device is about 90% utilised.

Output of dmsetup status mydedup after the random write test has finished:

0 31457280 dedup 2621440 1768631 852809 1048576 4096 253:13 253:14 1215560 852809 362751 0 166984 1048576

Is this performance expected?

Question about the consistency operations in cowbtree backend

I've found that we are using two kinds of techniques to keep the on-disk data consistent in the cowbtree backend: copy-on-write and transactions. To my understanding, either of them alone could guarantee consistency, so why use both at the same time?

After reading the code under persistent-data (like dm-btree and dm-transaction-manager), I think this comes from using that code, since those pieces work as a whole: if you choose to use dm-btree, you have to use dm-block-manager and dm-transaction-manager as well, which means transactions and copy-on-write are somewhat a "side effect" of choosing dm-btree. If that is true, then I think we could use only one of these two techniques with our own code, saving the other's overhead. Is that right?

Seems like new issue or may not be...

Hi Vasily,

I have an issue with the dd command (technically with the page size).

Did you try running dmdedup with a page size > 4K?

Why is the device-mapper subsystem triggering multiple (i.e., 16) I/O requests instead of one when I use a 64K page size? If the page size is 4K, everything works as expected.
It is not from dmdedup; I saw the multiple invocations/requests coming from the device-mapper framework itself.

Thanks,
Venkatesh.

hung task timeout issue if flushrq is 1.

Hi folks,

I am consistently seeing a hung task timeout warning; have a look at the dump below.

[22734.139015] INFO: task dd:3570 blocked for more than 120 seconds.
[22734.145246] Tainted: G W O 4.9.0 #10
[22734.150879] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[22734.159088] dd D 0 3570 3350 0x00000000
[22734.159141] Call trace:
[22734.159176] [] __switch_to+0x94/0xa8
[22734.159208] [] __schedule+0x180/0x6a0
[22734.159237] [] schedule+0x54/0xc0
[22734.159266] [] schedule_timeout+0x1c4/0x368
[22734.159295] [] io_schedule_timeout+0xa8/0x118
[22734.159324] [] bit_wait_io+0x20/0x68
[22734.159353] [] __wait_on_bit+0xac/0xe8
[22734.159383] [] out_of_line_wait_on_bit+0x74/0x88
[22734.159413] [] __wait_on_buffer+0x38/0x48
[22734.159443] [] __block_write_begin_int+0x198/0x590
[22734.159471] [] block_write_begin+0x60/0xd8
[22734.159501] [] blkdev_write_begin+0x50/0x68
[22734.159530] [] generic_perform_write+0xb8/0x198
[22734.159559] [] __generic_file_write_iter+0x170/0x1d0
[22734.159588] [] blkdev_write_iter+0x78/0xe0
[22734.159618] [] __vfs_write+0xd0/0x128
[22734.159647] [] vfs_write+0xa8/0x1c0
[22734.159676] [] SyS_write+0x54/0xb0
[22734.159705] [] el0_svc_naked+0x20/0x24

Any ideas why it emits this warning, and how to resolve it without echoing 0 to /proc/sys/kernel/hung_task_timeout_secs?

Thanks in advance,
Venktesh.

how to benchmark dmdedup

Hi Team,

I would like to benchmark dmdedup as described in the documentation and the published paper.
There it is stated that a "test exercise is done with 40 linux kernels" to see the level of deduplication achieved with dmdedup.
In the process of learning, I want to reproduce the claimed numbers, and I will share the tabulated values as soon as I accomplish that.

Could you please share some info about this and shed some light?

Thanks in advance,
Venkatesh.

Deduplication chunking algorithm with dmdedup

Hi there
Currently I'm working on a deduplication project. My goal is to use the dmdedup solution to compare different chunking algorithms against the fixed-size chunking that dm-dedup uses.

To do so, I would like to integrate different chunking algorithms into the dmdedup module. I was trying to find where dmdedup does the fixed-size chunking in order to replace that part with the other algorithms, with no luck :(
Can anyone tell me which file I should look into to handle this?

Thanks

Possible memory leak

As we already discussed during our meeting, there is a possible memory leak in the following line:

https://github.com/dmdedup/dmdedup/blob/master/dm-dedup-rw.c#L213

I'm creating this issue because I'm not very confident about how to address it properly.
If merge_data() fails, I am thinking of doing the following:

free_pages((unsigned long)page_address(page), 0); /* for the page created in create_bio() */
bio_put(clone); /* to destroy the clone */
clone = NULL;

Is there anything else that I need to do, or would this sufficiently fix the memory leak?
Please advise.
Please advise.

Running FIU trace on dmdedup

Hi Vasily,

First, thanks for your answers to my previous question. I'm now able to set up dmdedup and run some tests.

You mentioned in your paper that you got a patch from Koller and Rangaswami for Linux's btreplay utility so that it generates a unique block for each unique hash value in the FIU traces. I want to repeat running the FIU traces but don't know how to do that; can you help me? Is it possible to share the patch or give some advice on how to replay the traces?

Thanks~

How to configure cache size?

Hi there. I'm interested in dmdedup and got it installed on my system. However, I have a question:

How do I configure the cache size of the cowbtree metadata backend? The only way that looks possible to me is the METADATA_CACHESIZE variable defined in dm-dedup-cbt.c, but the comment says the block manager currently ignores this value, and in persistent-data/dm-block-manager.c the function dm_block_manager_create() does ignore the "cache_size" parameter.

Maybe I should change the bufio settings directly, since the memory used by bufio is defined by the following constants:

Linux/drivers/md/dm-bufio.c:
#define DM_BUFIO_MIN_BUFFERS 8
#define DM_BUFIO_MEMORY_PERCENT 2
#define DM_BUFIO_VMALLOC_PERCENT 25
#define DM_BUFIO_WRITEBACK_PERCENT 75

Thanks~

Source code about the DTD backend.

Hi there,

There are three backends presented in the paper: inram, dtd, and cowbtree, but only two of them are here on GitHub. Is it possible to also provide the source code of the dtd backend so that we could benchmark all of them together? Even some clues about the implementation would help. Thanks.

mkfs.xfs crashes dmdedup

As suggested by the various OOPS dumps we have taken, there seems to be a race bug that crashes in many places across the code.

There was one particular oops dump,

[83874.337417] kernel BUG at block/bio.c:528!

which comes from the BIO_BUG_ON in block/bio.c:bio_put().

I tried printing the ref count of the bio and the clone at different places before the crash; all of them returned the value 1. I also tried putting bio_get() in create_bio() and bio_put() in my_endio(), but that didn't seem to have any effect on the BUG.

dmdedup is not de-duplicating data with ext4fs

Hi folks,

I found a significant issue with dmdedup.
If I do I/O on a mount point backed by an ext4 file system created on the dm-dedup target, I do not see any disk savings after copying the same data file twice.
Prerequisite: a 10GB random file (from /dev/urandom).

Steps to reproduce the issue:

$ ls -l randomfile.10G
-rw-r--r-- 1 root root 10737418240 Apr 18 10:26 randomfile.10G

$ dmsetup status
mydedup: 0 732592680 dedup 61049390 61049390 0 0 4096 8:1 1:0 0 0 0 0 0 0 0 last used pbn: 0
$ mkfs.ext4 /dev/mapper/mydedup
$ dmsetup status
mydedup: 0 732592680 dedup 61049390 61048336 1054 34362 4096 8:1 1:0 34363 1054 33309 0 1 34362 0 last used pbn: 1053
$ mount /dev/mapper/mydedup /mnt
$ df -l
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/root 203289800 169926452 23013764 89% /
devtmpfs 16688000 0 16688000 0% /dev
tmpfs 16688576 0 16688576 0% /dev/shm
tmpfs 16688576 119744 16568832 1% /run
tmpfs 5120 0 5120 0% /run/lock
tmpfs 16688576 0 16688576 0% /sys/fs/cgroup
tmpfs 3337728 0 3337728 0% /run/user/0
/dev/mapper/mydedup 360417164 68160 342017804 1% /mnt

$ cp randomfile.10G /mnt/r1
$ dmsetup status
mydedup: 0 732592680 dedup 61049390 58426253 2623137 4086923 4096 8:1 1:0 4087690 2623137 1464553 0 767 4086923 0 last used pbn: 2623136

$ df -l
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/root 203289800 169926680 23013536 89% /
devtmpfs 16688000 0 16688000 0% /dev
tmpfs 16688576 0 16688576 0% /dev/shm
tmpfs 16688576 127936 16560640 1% /run
tmpfs 5120 0 5120 0% /run/lock
tmpfs 16688576 0 16688576 0% /sys/fs/cgroup
tmpfs 3337728 0 3337728 0% /run/user/0
/dev/mapper/mydedup 360417164 10553924 331532040 4% /mnt

$ cp randomfile.10G /mnt/r2
$ dmsetup status
mydedup: 0 732592680 dedup 61049390 58426215 2623175 6708444 4096 8:1 1:0 6709338 2623175 4086163 0 894 6708444 0 last used pbn: 2623174

$ df -l
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/root 203289800 169926728 23013488 89% /
devtmpfs 16688000 0 16688000 0% /dev
tmpfs 16688576 0 16688576 0% /dev/shm
tmpfs 16688576 127936 16560640 1% /run
tmpfs 5120 0 5120 0% /run/lock
tmpfs 16688576 0 16688576 0% /sys/fs/cgroup
tmpfs 3337728 0 3337728 0% /run/user/0
/dev/mapper/mydedup 360417164 21039688 321046276 7% /mnt

$ ls -lh /mnt
total 21G
drwx------ 2 root root 16K May 14 15:58 lost+found
-rw-r--r-- 1 root root 10G May 14 16:11 r1
-rw-r--r-- 1 root root 10G May 14 17:46 r1-dup

The above dmsetup stats show 95% of the data as deduplicated, but it is not saving any disk space according to "df -l".

What am I missing???

Venkatesh B.

System crash when changing block size to a non-4096 number

Hi there,

I've encountered a problem recently: when I change the block size to a non-4096 number, say 8192, my system crashes when I run mkfs.ext4 on the mydedup device. Has anyone seen this before? Any idea why?

Yours,
Oliver

How to replay the FIU trace?

The FIU traces do not seem to be in a valid format. Once you have built the device-mapper target, how does one write a program to replay the FIU traces (home, web, email), or how can blktrace/btreplay be used to replay them?

Logical Block counter value is not incremented properly

Hi,
Recently we noticed that the logical block counter does not show the correct values in the "dmsetup status" output.

A sample execution output :

[root@dhcp156 scripts]# dmsetup status
mydedup: 0 314572800 dedup 26214400 26214180 220 0 4096 8:32 8:48 648892 220 648672 0 648892 0

Here, the physical block count is 220, but the logical block count is 0.

Looking into the code, we noticed that the kvs lookup for the lbn->pbn mapping always succeeds with a pbn value of 0. Hence, as soon as we allocate a new block and increment both counters, we go ahead and decrement the LBN counter. We added a couple of printk's to verify this:

[542629.349175] LBN number, PBN Old is: 19431685 , 0
[542629.349192] Allocating block ... 217
[542629.349340] Decrementing logical block counter .. 217
[542635.343086] LBN number, PBN Old is: 19431687 , 0
[542635.343124] Allocating block ... 218
[542635.343293] Decrementing logical block counter .. 218
[542635.356710] LBN number, PBN Old is: 19431688 , 0
[542635.356727] Allocating block ... 219
[542635.356886] Decrementing logical block counter .. 219

If I add a special condition to treat a pbn value of 0 as "no LBN->PBN mapping", the counter values are updated correctly.

FROM:

r = dc->kvs_lbn_pbn->kvs_lookup(dc->kvs_lbn_pbn, (void *)&lbn,
                                sizeof(lbn), (void *)&lbnpbn_value, &vsize);
if (r == 0) {
    /* No LBN->PBN mapping entry */

TO:

r = dc->kvs_lbn_pbn->kvs_lookup(dc->kvs_lbn_pbn, (void *)&lbn,
                                sizeof(lbn), (void *)&lbnpbn_value, &vsize);
if (r == 0 || lbnpbn_value == 0) {
    /* No LBN->PBN mapping entry */

But we couldn't figure out why the key-value store lookup succeeds in the first place.

Cannot insmod driver!

[ 3498.540131] dm_dedup: Unknown symbol dm_btree_insert (err 0)
[ 3498.540185] dm_dedup: Unknown symbol dm_block_data (err 0)
[ 3498.540222] dm_dedup: Unknown symbol dm_block_manager_create (err 0)
[ 3498.540263] dm_dedup: Unknown symbol dm_tm_pre_commit (err 0)
[ 3498.540290] dm_dedup: Unknown symbol dm_btree_insert_notify (err 0)
[ 3498.540321] dm_dedup: Unknown symbol dm_bm_read_lock (err 0)
[ 3498.540339] dm_dedup: Unknown symbol dm_tm_create_with_sm (err 0)
[ 3498.540355] dm_dedup: Unknown symbol dm_sm_disk_create (err 0)
[ 3498.540375] dm_dedup: Unknown symbol dm_bm_write_lock_zero (err 0)
[ 3498.540422] dm_dedup: Unknown symbol dm_btree_lookup (err 0)
[ 3498.540439] dm_dedup: Unknown symbol dm_tm_destroy (err 0)
[ 3498.540462] dm_dedup: Unknown symbol dm_bm_write_lock (err 0)
[ 3498.540515] dm_dedup: Unknown symbol dm_block_manager_destroy (err 0)
[ 3498.540550] dm_dedup: Unknown symbol dm_btree_empty (err 0)
[ 3498.540582] dm_dedup: Unknown symbol dm_sm_disk_open (err 0)
[ 3498.540599] dm_dedup: Unknown symbol dm_tm_commit (err 0)
[ 3498.540634] dm_dedup: Unknown symbol dm_bm_block_size (err 0)
[ 3498.540699] dm_dedup: Unknown symbol dm_btree_remove (err 0)
[ 3498.540716] dm_dedup: Unknown symbol dm_bm_unlock (err 0)
[ 3498.540735] dm_dedup: Unknown symbol dm_tm_open_with_sm (err 0)
