
cacdrive's Introduction

cacdrive

hard drive emulator using cloudatcost's "cloud storage" as a storage backend.

warning

it is NOT well tested; don't store anything unbacked-up & important with it, as it could crash/corrupt your data at any time. (but if it actually does this, i would appreciate it if you could post a console crash log and source revision at https://github.com/divinity76/cacdrive/issues/new )

also, unless the last line in the terminal is something like "upload queue emptied.", it is probably the wrong time to terminate cacdrive. it uses an internal io cache for uploads, because the uploads are too slow for the kernel: the kernel would terminate the IPC socket with a timeout error if cacdrive didn't cache the uploads and lie to the kernel about the write being complete. :( wasn't my first choice.
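to illustrate what that io upload cache does, here is a rough sketch of the concept (simplified, with hypothetical names and types; not the actual cacdrive data structures):

#include <cstddef>
#include <cstdint>
#include <deque>
#include <mutex>
#include <vector>

// one queued write: where it goes on the emulated drive, and the bytes to upload later
struct Pending_write { uint64_t offset; std::vector<uint8_t> data; };

std::mutex upload_queue_mutex;
std::deque<Pending_write> upload_queue;

// called from the NBD write handler: queue the data and acknowledge the write to the
// kernel immediately, before it has actually been uploaded to the cloud storage.
void cache_write(const uint64_t offset, const uint8_t *data, const size_t len)
{
    std::lock_guard<std::mutex> lock(upload_queue_mutex);
    upload_queue.push_back(Pending_write{offset, std::vector<uint8_t>(data, data + len)});
    // worker threads drain upload_queue in the background, which is why you should
    // wait for "upload queue emptied." before terminating cacdrive.
}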

getting started

here's how to get started, using a debian-based distro (Debian/Ubuntu/Proxmox/Devuan/whatever) (personally tested on debian 10 buster and ubuntu 18.04)

sudo apt install g++ git libcurl4-openssl-dev nano

sudo modprobe nbd
<modprobe should NOT give any output. if it does, that is probably an error.>

git clone --depth 1 https://github.com/divinity76/cacdrive.git

cd cacdrive

g++ src/main.cpp -std=c++17 -lcurl -lpthread -O2

cp config.conf.dist config.conf

nano config.conf
<insert username and password in config.conf>

dd if=/dev/zero of=sectorindex.sec bs=25 count=10000

sudo ./a.out config.conf api-tests
<make sure this does not crash horribly or something>

sudo ./a.out config.conf
<now wait for all worker threads to get ready, then open another terminal. if you have a GUI, use gparted; otherwise:>

sudo dd if=/dev/zero of=/dev/nbd1

sudo time sync
<this is just testing, it should say "out of disk space" rather quickly if everything is working correctly.>

sudo mkfs.ext4 /dev/nbd1
<you can replace ext4 with whatever filesystem you want. for example, if you want a compressing filesystem (which might be a good  idea), you can use mkfs.btrfs instead of mkfs.ext4>

mkdir mountpoint

sudo mount /dev/nbd1 mountpoint
<or if you used mkfs.btrfs instead, try `sudo mount /dev/nbd1 mountpoint -o compress=zlib` >
  • now the mountpoint folder should be cacdrive! :)

sector index file

the "sector index file" can be created with the command dd if=/dev/zero of=sectorindex.sec bs=25 count=10000, where 10000 is the number of sectors you want. each sector is 4096 bytes of cloud storage, but note that because old blocks are only deleted asynchronously/*eventually*, you should not allocate all of your cloud storage to the drive: it can temporarily use more space than you assign to it. 10 megabytes of reserved space should probably suffice. (on the other hand, because sectors containing only zeroes are never uploaded in the first place, the drive can also use significantly less space than is allocated to it.)
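for example, with count=10000 the emulated drive is 10000 × 4096 bytes ≈ 39 MiB, while the index file itself is only 10000 × 25 = 250000 bytes (~244 KiB). for a drive of roughly 100 GiB you would want count = 100 GiB / 4096 bytes = 26214400.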

configuration

you can find an example configuration in config.conf.dist. that said, the config file is line based:

  • line 1 must be format=1
  • line 2 is ignored (but must exist)
  • line 3 contains the username (usually/always? an email)
  • line 4 contains the password
  • line 5 contains the number of worker threads you want (a higher count should increase IO speed, and it does not depend on how many CPU cores you have, but try a low number the first time around)
  • line 6 contains the path to the sector index file
  • line 7 contains the nbd device to use (/dev/nbdX)
  • line 8 should not exist

a username, password, or filepath containing newlines is not supported.
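for illustration, a hypothetical config.conf following the format above (placeholder values; the config.conf.dist shipped with the repository is the authoritative template):

format=1
(this line is ignored, but must exist)
someone@example.com
hunter2
3
sectorindex.sec
/dev/nbd1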

???

by the way, cacdrive is not associated with nor endorsed by C@C the company/staff in any way (afaik)


cacdrive's Issues

Error compiling on Ubuntu 16.04

src/main.cpp:62:1: error: ‘shared_mutex’ does not name a type
shared_mutex io_cache_mutex;
^
src/main.cpp: In function ‘void print_io_cache()’:
src/main.cpp:1004:2: error: ‘io_cache_mutex’ was not declared in this scope
io_cache_mutex.lock_shared();
^
src/main.cpp: In lambda function:
src/main.cpp:1029:9: error: ‘io_cache_mutex’ was not declared in this scope
io_cache_mutex.lock_shared();
^
src/main.cpp: In function ‘void cac_upload_eventually(uint64_t, uint32_t, const char*, FILE*)’:
src/main.cpp:1152:3: error: ‘io_cache_mutex’ was not declared in this scope
io_cache_mutex.lock();
^
src/main.cpp: In function ‘void cac_get_data(Downloadcacapi&, uint64_t, uint32_t, char*, FILE*)’:
src/main.cpp:2378:3: error: ‘io_cache_mutex’ was not declared in this scope
io_cache_mutex.lock_shared();
^
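for reference, the error can be reproduced in isolation with a two-line file (hypothetical snippet, not cacdrive code): std::shared_mutex is a C++17 library type, and older toolchains such as Ubuntu 16.04's default g++ 5.x do not provide it.

#include <shared_mutex>
std::shared_mutex io_cache_mutex_example; // builds with a sufficiently new g++ in -std=c++17 mode; g++ 5.x rejects it with the error above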

Error connecting

I was able to follow all the instructions up to the point of executing the a.out binary.
I believe I was able to execute it and got the following:

sudo ./a.out config.conf api-tests

downloadcacapi tests..
logging in: 2.709s done! cookies: download.cloudatcost.com FALSE / FALSE 0 PHPSESSID pp7bvo2fs965prrfkfvb60rk86 -
uploading test.txt: 0.27s done! upload id: 54stljtuwezyr4jjdpbbri3p8
downloading what we just uploaded, using the single download api.. 0.174s done! contents: Hello world! (content is intact.)
downloading what we just uploaded, using the single download api.. 0.193s done! contents: Hello world! (content is intact.)
downloading what we just uploaded, 20 times simultaneously using the download_multi api .. 3.714s download complete! checking contents.. correct.. correct.. correct.. correct.. correct.. correct.. correct.. correct.. correct.. correct.. correct.. correct.. correct.. correct.. correct.. correct.. correct.. correct.. correct.. correct..
deleting upload using synchronous api: 0.18s done!
uploading test.txt (again): 0.248s done!
deleting upload using async api: 0.193s done! returned value: true (as expected)
deleting a bogus upload id using async api: 0.199s done! returned value: false (as expected)
uploading 20 times using multi_upload api..5.059s done! ids: 0pr7tbhqgen9uevmrkvwprvle - 5hdny7wmyg8ulvy0g9miowbae - n3k1bdlmft5u841jm7akjdxd1 - 9sazg9if4r6jltitnax0ip37e - 6feksvqqdzms2eybej4p0e09a - c75g4wc54oif5niaomiibvull - scsq5dwrf10zacxwm3cdsx6m8 - gbe9132tlq1zvbdph1q3tqbw2 - dfoas0rpfbg1lha6z5roxsxew - 7vvjdmu0okwvaip0pgkq9elwm - jrcawzcj1iwb7phei60dzu19h - dds0pglmns3j5vv8ybifcuasd - 50z28oh6wackrundtox6jslbo - lgsggel0ip3eboa7v37l9o2jv - yr6zqt7016wupe1wxazyl8twe - uyrke72lst3slid21byt5caso - g09leecf9xnm7ib1jqq2l4c8h - 1lq6o0xhl98a9a9g4a9uid4a7 - w74k0bfgd2vhbvmectz12z1c8 - 13joe7rlj06so6hxubonmuaom -
will now delete them all using the async api..deleted..deleted..deleted..deleted..deleted..deleted..deleted..deleted..deleted..deleted..deleted..deleted..deleted..deleted..deleted..deleted..deleted..deleted..deleted..deleted..
3.599s done!
logging out: 0.288s done!
shutting down, cleaning up.. thread doing the cleanup: 140274030356352

When I execute the following, this is the output:

sudo ./a.out config.conf

./a.out(+0x770a)[0x5626a3d1070a]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x76db)[0x7fdec99ab6db]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7fdec913388f]
./a.out:src/main.cpp:1759: failed to ioctl(kernelIPC.nbd_fd, NBD_SET_SOCK, kernelIPC.global_remoterequestsocket) !
: Inappropriate ioctl for device
shutting down, cleaning up.. thread doing the cleanup: 140594848769792

I'm hoping I can get some guidance as to what I'm doing wrong. What's next?

crash on big partitions (> 900GB) when sizeof(int) <8

here is a crash observed by @Ammar7347 , probably caused by the fact that sizeof(int) is 4 and the partition is > 900GB:

starting 3 worker thread(s)... done.
pausing mainthread..
nbdthread waiting for all workers (0/3) to be become ready. (this usually takes a long time - some problem @CAC login system)
upload queue emptied.
worker #2 ready.
worker #3 ready.
worker #1 ready.
all workers (3/3) ready, nbdthread starting NBD_DO_IT.
upload queue no longer empty! (167264)
./a.out(+0xdcec)[0x5575e0725cec]
./a.out(+0x126a4)[0x5575e072a6a4]
./a.out(+0x1344b)[0x5575e072b44b]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x7494)[0x7fc031af9494]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7fc030f9eacf]
./a.out:src/main.cpp:2517: failed to seek to -1734967296 in sector file sectorindex.sec: Unknown error -1
shutting down, cleaning up.. thread doing the cleanup: 140463402186496 
request socket shutting down, worker exiting.
./a.out(+0x550f)[0x5575e071d50f]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x110c0)[0x7fc031b030c0]
/lib/x86_64-linux-gnu/libpthread.so.0(__pthread_rwlock_rdlock+0x0)[0x7fc031afdf30]
/usr/lib/x86_64-linux-gnu/libcrypto.so.1.1(CRYPTO_THREAD_read_lock+0x9)[0x7fc030972ec9]
/usr/lib/x86_64-linux-gnu/libcrypto.so.1.1(+0x14f730)[0x7fc030900730]
/usr/lib/x86_64-linux-gnu/libcrypto.so.1.1(ERR_lib_error_string+0x4e)[0x7fc030900a9e]
/usr/lib/x86_64-linux-gnu/libcrypto.so.1.1(ERR_error_string_n+0x66)[0x7fc030900c86]
/usr/local/lib/libcurl.so.4(+0x5604d)[0x7fc031d6504d]
/usr/local/lib/libcurl.so.4(+0x59443)[0x7fc031d68443]
/usr/local/lib/libcurl.so.4(+0x5a15e)[0x7fc031d6915e]
/usr/local/lib/libcurl.so.4(+0x108a2)[0x7fc031d1f8a2]
/usr/local/lib/libcurl.so.4(+0x12223)[0x7fc031d21223]
/usr/local/lib/libcurl.so.4(+0x1d0d6)[0x7fc031d2c0d6]
/usr/local/lib/libcurl.so.4(+0x2fb9e)[0x7fc031d3eb9e]
/usr/local/lib/libcurl.so.4(curl_multi_perform+0x93)[0x7fc031d3fdd3]
./a.out(+0x65be)[0x5575e071e5be]
./a.out(+0xcd6d)[0x5575e0724d6d]
./a.out(+0x144a3)[0x5575e072c4a3]
./a.out(+0x14d59)[0x5575e072cd59]
/usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0xb9e6f)[0x7fc031829e6f]
./a.out:src/main.cpp:1899: received shutdown signal 11 (Segmentation fault) from PID 28 / UID 0. shutting down..
: Too many open files
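the negative seek offset is consistent with a 32-bit overflow in the sector-index offset calculation; a small illustration of the suspected bug (hypothetical variable names, not the actual cacdrive code):

#include <cstdint>
#include <cstdio>

int main()
{
    // a sector roughly 400 GB into the drive (each sector is 4096 bytes, each index entry 25 bytes)
    const uint64_t sector = 102400000;
    const int bad_offset = (int)(sector * 25);         // implementation-defined conversion; on gcc/x86-64 it wraps to -1734967296, the value in the log above
    const int64_t good_offset = (int64_t)sector * 25;  // 2560000000, fits comfortably in 64 bits
    printf("bad: %d, good: %lld\n", bad_offset, (long long)good_offset);
    // the fix: keep file offsets in int64_t/off_t and seek with fseeko()/lseek() rather than int-sized offsets.
    return 0;
}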

Error compiling on Ubuntu 18.04 and likely earlier

When compiling under Ubuntu 18.04 I received the following error:

src/main.cpp:143:110: warning: format not a string literal and no format arguments [-Wformat-security]
){macrobacktrace();error_at_line(status,errnum,__FILE__,__LINE__,__VA_ARGS__);}

Not being really familiar with C++, I did some searches and found that I needed additional command-line options. I compiled it with the following, and everything seems to work:
g++ src/main.cpp -std=c++17 -lcurl -lpthread -Wno-error=format-security -Wno-format-security

Sector File Index Bug

Hi,

I'm getting the following errors. You can find the logs below:

root@debian9pt:~/cacdrive# sudo ./a.out config.conf api-tests
downloadcacapi tests..
logging in: terminate called after throwing an instance of 'std::runtime_error'
what(): curl_easy_perform() error 1: Unsupported protocol
./a.out(+0x544f)[0x55d41efc044f]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x110c0)[0x7f8b2d90f0c0]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0xcf)[0x7f8b2ccf4fff]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x16a)[0x7f8b2ccf642a]
/usr/lib/x86_64-linux-gnu/libstdc++.so.6(_ZN9__gnu_cxx27__verbose_terminate_handlerEv+0x15d)[0x7f8b2d60d0ad]
/usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x8f066)[0x7f8b2d60b066]
/usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x8f0b1)[0x7f8b2d60b0b1]
/usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x8f2c9)[0x7f8b2d60b2c9]
./a.out(+0x8219)[0x55d41efc3219]
./a.out(+0x8c9b)[0x55d41efc3c9b]
./a.out(+0x9a26)[0x55d41efc4a26]
./a.out(+0xa514)[0x55d41efc5514]
./a.out(+0xf425)[0x55d41efca425]
./a.out(+0x4be2)[0x55d41efbfbe2]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1)[0x7f8b2cce22e1]
./a.out(+0x4d8a)[0x55d41efbfd8a]
./a.out:src/main.cpp:1891: received shutdown signal 6 (Aborted) from PID 9993 / UID 0. shutting down..

shutting down, cleaning up.. thread doing the cleanup: 140235748472640
root@debian9pt:~/cacdrive# sudo ./a.out config.conf
starting 3 worker thread(s)... done.
pausing mainthread..
terminate called after throwing an instance of 'std::runtime_error'
what(): curl_easy_perform() error 1: Unsupported protocol
./a.out(+0x544f)[0x563b108aa44fterminate called recursively
./a.out(+0x544f)[0x563b108aa44f]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x110c0)terminate called recursively
./a.out(+0x544f)[0x563b108aa44f]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x110c0)[0x7fe65101c0c0]
[0x/lib/x86_64-linux-gnu/libc.so.6(gsignal+0xcf)[0x7fe650401fff]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x16a)[0x7fe65040342a]
7fe65101c0c0/usr/lib/x86_64-linux-gnu/libstdc++.so.6(_ZN9__gnu_cxx27__verbose_terminate_handlerEv+0xed)[0x7fe650d1a03d]
]
/usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x110c0)[0x7fe65101c0c0]
8f066)[0x7fe650d18066]
/lib/x86_64-linux-gnu/libc.so.6/lib/x86_64-linux-gnu/libc.so.6(/usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x8f0b1)[0x7fe650d180b1]
gsignal+0x/usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0xb9e9e)[0x7fe650d42e9e]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x7494)[0x7fe651012494]
(gsignal+0xcf)[0x7fe650401fff]
cf)[0x7fe650401fff]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x16a)[0x7fe65040342a]
/usr/lib/x86_64-linux-gnu/libstdc++.so.6(_ZN9__gnu_cxx27__verbose_terminate_handlerEv+0x15d)[0x7fe650d1a0ad]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x16a)[0x7fe65040342a]
/usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x8f066)[0x7fe650d18066]
/usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x8f0b1)[0x7fe650d180b1]
/usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0xb9e9e)[0x7fe650d42e9e]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x7494)[0x7fe651012494]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7fe6504b7acf]
/usr/lib/x86_64-linux-gnu/libstdc++.so.6(_ZN9__gnu_cxx27__verbose_terminate_handlerEv+0xed)[0x7fe650d1a03d]
./a.out:src/main.cpp:1891: received shutdown signal 6 (Aborted) from PID 9995 / UID 0. shutting down..

shutting down, cleaning up.. thread doing the cleanup: 140627165374208
/usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x8f066)[0x7fe650d18066]
/usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x8f0b1)[0x7fe650d180b1]
/usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x8f2c9)[0x7fe650d182c9]
./a.out(+0x/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7fe6504b7acf]
./a.out:src/main.cpp:1891: received shutdown signal 6 (Aborted) from PID 9995 / UID 0. shutting down..

root@debian9pt:~/cacdrive#

P.S. I think it only happens when I try to make a sector file with a huge size (like 987 GB), but I'm not sure.

if a worker's session times out, cacdrive crashes rather than re-logging in

as the title says, in this scenario cacdrive just crashes.

what should happen instead: the worker should automatically re-login, and re-attempt whatever action was in progress when the logout was noticed.

  • either that, or make sure to always ping cac at an appropriate interval just to keep the session alive. (this would be much easier to implement, albeit slightly less robust. an optimal solution would probably be a combination.)
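a minimal sketch of the keepalive option (session_ping and session_relogin are hypothetical placeholders, not the actual Downloadcacapi API):

#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> shutting_down{false};

bool session_ping()    { /* placeholder: issue any cheap authenticated request */ return true; }
void session_relogin() { /* placeholder: redo the login flow with the stored credentials */ }

// runs alongside the worker threads and keeps the session from expiring
void keepalive_thread()
{
    while (!shutting_down) {
        std::this_thread::sleep_for(std::chrono::minutes(5)); // pick an interval well below the session timeout
        if (!session_ping()) {
            // the ping failed, so the session probably expired: re-login instead of crashing
            session_relogin();
        }
    }
}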

mkfs.ext4: Device size reported to be zero.

root@ubuntu:~# sudo time sync
0.00user 0.00system 0:00.01elapsed 10%CPU (0avgtext+0avgdata 2032maxresident)k
0inputs+0outputs (0major+70minor)pagefaults 0swaps
.

root@ubuntu:~# sudo mkfs.ext4 /dev/nbd1
mke2fs 1.44.1 (24-Mar-2018)
mkfs.ext4: Device size reported to be zero. Invalid partition specified, or
partition table wasn't reread after running fdisk, due to
a modified partition being busy and in use. You may need to reboot
to re-read your partition table.

Any idea what is wrong?

Getting too many open files(or possibly communication error) error

After starting the app, I was getting a "too many open files" error, so I increased the open-files limit from 1024 to 100000; still, it errored after a while:
(I'm running it inside a screen session to keep it alive)
https://pastebin.com/0c7MsGDn

Or is it something else?

upload queue no longer empty! (62454)
response length: 0:
terminate called after throwing an instance of 'std::runtime_error'
  what():  upload response length was not 27 bytes long! something went wrong

The app logged in correctly and created and deleted all the files normally, so the test passed; however, when actually running it, it exits like that.

Error compiling on Debian 9

Hi,

I'm getting the following error while compiling:

/tmp/ccO0xafa.o: In function `ecurl_mime_init(void*)': main.cpp:(.text+0xc1f): undefined reference to `curl_mime_init'
/tmp/ccO0xafa.o: In function `ecurl_mime_addpart(curl_mime_s*)': main.cpp:(.text+0xd04): undefined reference to `curl_mime_addpart'
/tmp/ccO0xafa.o: In function `ecurl_mime_data(curl_mimepart_s*, char const*, unsigned long)': main.cpp:(.text+0xeec): undefined reference to `curl_mime_data'
/tmp/ccO0xafa.o: In function `ecurl_mime_name(curl_mimepart_s*, char const*)': main.cpp:(.text+0x100c): undefined reference to `curl_mime_name'
/tmp/ccO0xafa.o: In function `ecurl_mime_filename(curl_mimepart_s*, char const*)': main.cpp:(.text+0x112c): undefined reference to `curl_mime_filename'
/tmp/ccO0xafa.o: In function `Downloadcacapi::upload_multi[abi:cxx11](std::vector<Downloadcacapi::Upload_multi_arg, std::allocator<Downloadcacapi::Upload_multi_arg> >)': main.cpp:(.text+0x3601): undefined reference to `curl_mime_free'
/tmp/ccO0xafa.o: In function `Downloadcacapi::upload(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)': main.cpp:(.text+0x3eee): undefined reference to `curl_mime_free'
collect2: error: ld returned 1 exit status
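these symbols belong to libcurl's curl_mime_* API, which was added in libcurl 7.56.0, so an older system libcurl is the likely cause; a hedged compile-time guard sketch (not part of cacdrive, just an illustration of the version check):

#include <curl/curl.h>
#if LIBCURL_VERSION_NUM < 0x073800 /* 7.56.0 */
#error "this code needs the curl_mime_* API, which was added in libcurl 7.56.0"
#endif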

better endian functions

should do something like this instead imo

// ntohll => network to host long long => big endian to host endian, 64bit => BETOHE64
// ntohl => network to host long => big endian to host endian, 32bit => BETOHE32
// htonll => host to network long long => host endian to big endian, 64bit => HETOBE64
// htonl => host to network long => host endian to big endian, 32 bit => HETOBE32


#include <byteswap.h> // for __bswap_constant_16/32/64 (glibc)
#if !defined(HETOBE16)
#if !defined(__BYTE_ORDER__)
#error Failed to detect byte order!
#endif
#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
// host endian to big endian 64bit
#define HETOBE64(x) (x)
// host endian to little endian 64bit
#define HETOLE64(x) __bswap_constant_64(x)
// host endian to big endian 32bit
#define HETOBE32(x) (x)
// host endian to little endian 32bit
#define HETOLE32(x) __bswap_constant_32(x)
// host endian to big endian 16bit
#define HETOBE16(x) (x)
// host endian to little endian 16bit
#define HETOLE16(x) __bswap_constant_16(x)
// little endian to host endian 64bit
#define LETOHE64(x) __bswap_constant_64(x)
// big endian to host endian 64bit
#define BETOHE64(x) (x)
// little endian to host endian 32bit
#define LETOHE32(x) __bswap_constant_32(x)
// big endian to host endian 32bit
#define BETOHE32(x) (x)
// little endian to host endian 16bit
#define LETOHE16(x) __bswap_constant_16(x)
// big endian to host endian 16bit
#define BETOHE16(x) (x)
#else
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
// host endian to big endian 64bit
#define HETOBE64(x) __bswap_constant_64(x)
// host endian to little endian 64bit
#define HETOLE64(x) (x)
// host endian to big endian 32bit
#define HETOBE32(x) __bswap_constant_32(x)
// host endian to little endian 32bit
#define HETOLE32(x) (x)
// host endian to big endian 16bit
#define HETOBE16(x) __bswap_constant_16(x)
// host endian to little endian 16bit
#define HETOLE16(x) (x)
// little endian to host endian 64bit
#define LETOHE64(x) (x)
// big endian to host endian 64bit
#define BETOHE64(x) __bswap_constant_64(x)
// little endian to host endian 32bit
#define LETOHE32(x) (x)
// big endian to host endian 32bit
#define BETOHE32(x) __bswap_constant_32(x)
// little endian to host endian 16bit
#define LETOHE16(x) (x)
// big endian to host endian 16bit
#define BETOHE16(x) __bswap_constant_16(x)
#else
#error Failed to detect byte order! appears to be neither big endian nor little endian..
#endif
#endif
#endif
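a quick usage example, assuming the macros above are in scope:

#include <cstdint>

// convert a 32-bit value to big-endian ("network order") before writing it out,
// and back to host order after reading it in.
uint32_t to_wire(const uint32_t host_value)   { return HETOBE32(host_value); }
uint32_t from_wire(const uint32_t wire_value) { return BETOHE32(wire_value); }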

uint8_fast_buffer

#include <algorithm> // std::max
#include <cstdint>   // uint8_t
#include <cstdlib>   // realloc/free
#include <cstring>   // memcpy
#include <new>       // std::bad_alloc
#include <stdexcept> // std::out_of_range
#include <string>    // std::to_string

// buffer which should be faster than std::vector<uint8_t> when resizing a lot, because it does not do byte initialization when resizing
class uint8_fast_buffer
{
public:
    // non-copyable: the class owns a raw realloc'd buffer, so copying would lead to a double free
    uint8_fast_buffer(const uint8_fast_buffer &) = delete;
    uint8_fast_buffer &operator=(const uint8_fast_buffer &) = delete;
    uint8_fast_buffer(const size_t initial_size)
    {
        // .. i don't really like the idea of buf being nullptr, this avoids that issue.
        this->internal_reserve(std::max(initial_size,(decltype(initial_size))1));
        this->buf_size=initial_size;
    }
    ~uint8_fast_buffer() noexcept
    {
        free(this->buf);
    }
    size_t size(void) noexcept
    {
        return this->buf_size;
    }
    void reserve(const size_t reserve_size)
    {
        if(reserve_size > this->buf_cap)
        {
            this->internal_reserve(reserve_size);
        }
    }
    // this function is supposed to be very fast when newlen is smaller than the biggest it has ever been before.
    void resize(const size_t newlen)
    {
        if(__builtin_expect(newlen > this->buf_cap,0))
        {
            this->internal_reserve(newlen);
        }
        this->buf_size=newlen;
    }
    void append(const uint8_t *data, const size_t len)
    {
        const size_t pos=this->size();
        const size_t new_pos=this->size()+len;
        this->resize(new_pos);
        memcpy(&this->buf[pos],data,len);
    }
    void reset(void) noexcept
    {
        this->buf_size=0;
    }
    bool empty(void) noexcept
    {
        return (this->buf_size==0);
    }
    uint8_t* data(void) noexcept
    {
        return this->buf;
    }
    uint8_t& at(const size_t pos)
    {
        if(__builtin_expect(pos >= this->size(),0))
        {
            throw std::out_of_range(std::to_string(pos)+std::string(" >= ")+std::to_string(this->size()));
        }
        return this->buf[pos];
    }
    uint8_t& operator[](const size_t pos) noexcept
    {
        return this->buf[pos];
    }
private:
    void internal_reserve(const size_t reserve_size)
    {
        uint8_t *newbuf=(uint8_t*)realloc(this->buf,reserve_size);
        if(__builtin_expect(newbuf == nullptr,0))
        {
            throw std::bad_alloc();
        }
        this->buf_cap=reserve_size;
        this->buf=newbuf;
    }
    size_t buf_size=0;
    size_t buf_cap=0;
    uint8_t *buf=nullptr;
};
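a brief usage example, assuming the class and includes above are in scope (illustrative only):

void uint8_fast_buffer_demo()
{
    uint8_fast_buffer buf(0);
    const char msg[] = "hello world";
    buf.append(reinterpret_cast<const uint8_t *>(msg), sizeof(msg) - 1);
    buf.resize(5); // cheap shrink: no reallocation and no zero-initialization
    buf.reset();   // cheap "clear": the capacity is kept for reuse
}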

Possible to mount it to two different servers?

Hello, great and interesting application, thanks for your effort in making it;
Is it possible to mount it to two different servers at the same time?

I believe that if I made some backups on one system and sent them through the cacdrive connection, then picked up the sector index file, moved it to another system, and mounted the drive over there, it would work, right? Although I think I'd only be able to use it on one server at a time?

Perhaps a sync between the servers? but would the threads know that the files were changed/synced?
