
arcus's Introduction

Arcus Cache Cloud

Arcus is a memcached-based cache cloud developed by NAVER Corp. arcus-memcached has been heavily modified to meet the functional and performance requirements of NAVER services. In addition to memcached's basic key-value data model, Arcus supports collection data structures (List, Set, Map, B+tree) for storing and retrieving multiple values in structured form.

Arcus manages multiple clusters of memcached nodes using ZooKeeper. Each cluster, or cloud, is identified by its service code; think of the service code as the cloud's name. Users may add or remove memcached nodes and clouds on the fly, and Arcus detects failed nodes and removes them automatically.

The overall architecture is shown below. The memcached node is identified by its name (IP address:port number). ZooKeeper maintains a database of memcached node names and the service code (cloud) that they belong to. ZooKeeper also maintains a list of alive nodes in each cloud (cache list).

Upon startup, each memcached node contacts ZooKeeper and finds the service code that it belongs to. The node then inserts its name into the cache list so that Arcus clients can see it. ZooKeeper periodically checks whether each cache node is alive, removes failed nodes from the cache cloud, and notifies cache clients of the updated cache list. With the latest cache list, Arcus clients use consistent hashing to find the cache node for each key-value operation. Hubble collects and displays statistics for the cache cloud.
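The client-side lookup described above can be pictured as a consistent-hash ring built from the cache list. The sketch below is only an illustration (the hash function, virtual-node count, and node names are made up here; the real Arcus clients implement their own ring):

```python
import hashlib
from bisect import bisect

def _hash(key):
    # Illustrative hash; the actual Arcus client uses its own hash function.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes, vnodes=40):
        # Each node ("ip:port") is placed on the ring at many virtual points,
        # so keys spread evenly and only ~1/N of keys move when a node fails.
        self.ring = sorted((_hash("%s-%d" % (n, i)), n)
                           for n in nodes for i in range(vnodes))
        self.points = [p for p, _ in self.ring]

    def node_for(self, key):
        # Walk clockwise to the first ring point at or after the key's hash.
        i = bisect(self.points, _hash(key)) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"])
owner = ring.node_for("user:42")
```

When ZooKeeper delivers an updated cache list, the client simply rebuilds the ring; consistent hashing keeps most key-to-node assignments stable across that rebuild.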

Arcus Architecture

Supported OS Platform

Currently, Arcus only supports 64-bit Linux. It has been tested on the following OS platforms.

  • CentOS 6.x, 7.x 64bit
  • Ubuntu 12.04, 14.04, 16.04, 18.04 LTS 64bit

If you are interested in other OS platforms, please try building and running Arcus on them, and let us know of any issues.

Quick Start

Arcus setup usually follows the three steps below.

  1. Preparation - clone and build this Arcus code, and deploy Arcus code/binary package.
  2. Zookeeper setup - initialize Zookeeper ensemble for Arcus and start Zookeeper processes.
  3. Memcached setup - register cache cloud information into Zookeeper and start cache nodes.

To quickly set up and test an Arcus cloud on the local machine, run the commands below. They build memcached, set up a cloud of two memcached nodes in ZooKeeper, and start them, all on the local machine. The commands assume a RedHat/CentOS environment. If you run into build problems, please refer to the build FAQ.

# Requirements: JDK & Ant (java >= 1.8)

# Install dependencies (Python 2, version 2.6 or higher)
sudo yum install gcc gcc-c++ autoconf automake libtool pkgconfig cppunit-devel python-setuptools python-devel python-pip nc (CentOS)
sudo apt-get install build-essential autoconf automake libtool libcppunit-dev python-setuptools python-dev python-pip netcat (Ubuntu)


# Clone & Build
git clone https://github.com/naver/arcus.git
cd arcus/scripts
./build.sh

# Set up a local cache cloud with the conf file (run as a non-root user)
./arcus.sh quicksetup conf/local.sample.json

# Test
echo "stats" | nc localhost 11211 | grep version
STAT version 1.7.0
echo "stats" | nc localhost 11212 | grep version
STAT version 1.7.0

To set up Arcus cache clouds across multiple machines, please see Arcus cache cloud setup in multiple servers for details.

Once you finish setting up an Arcus cache cloud on multiple machines, you can quickly test Arcus on the command line, using telnet and ASCII commands. See Arcus telnet interface. Details on Arcus ASCII commands are in Arcus ASCII protocol document.
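As a taste of that telnet-style interaction, the sketch below hand-builds a `set` frame and parses a `get` reply in the standard memcached ASCII text protocol. It covers only the base protocol (Arcus collection commands add their own verbs), and the key and value are illustrative:

```python
def build_set(key, value, flags=0, exptime=0):
    # Wire format: set <key> <flags> <exptime> <bytes>\r\n<data>\r\n
    data = value.encode()
    return ("set %s %d %d %d\r\n" % (key, flags, exptime, len(data))).encode() \
           + data + b"\r\n"

def parse_get_reply(reply):
    # A hit looks like: VALUE <key> <flags> <bytes>\r\n<data>\r\nEND\r\n
    lines = reply.split(b"\r\n")
    if lines[0] == b"END":
        return None  # cache miss
    header = lines[0].split()
    assert header[0] == b"VALUE"
    return lines[1].decode()

frame = build_set("greeting", "hello")
value = parse_get_reply(b"VALUE greeting 0 5\r\nhello\r\nEND\r\n")
```

Sending `frame` over a TCP socket to a cache node and reading the reply is exactly what you would otherwise type by hand over telnet or `nc`.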

To develop Arcus application programs, please take a look at Arcus clients. Arcus currently supports Java and C/C++ clients. Each module includes a short tutorial where you can build and test "hello world" programs.

Documents


arcus's Issues

[function][monitoring] warning for huge collection scan

It would be good to have a way to detect unintended huge collection scans, e.g. when a b+tree (bop) range is specified incorrectly.

Applications usually encode the start and end conditions of a request; if the encoding or the conditions are wrong, a huge collection scan occurs, and if it is not detected early it can lead to an outage.

If possible, let the user set a threshold (e.g. 10,000 elements) and report the number of collection scans exceeding it in stats, so that this kind of situation can be detected quickly.
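A minimal sketch of the suggested stat, assuming a user-set threshold; the names are illustrative and this is not actual arcus-memcached code:

```python
class ScanStats:
    """Count collection scans whose element count exceeds a user-set
    threshold, so runaway range scans would surface in `stats`."""

    def __init__(self, threshold=10000):
        self.threshold = threshold
        self.huge_scan_count = 0

    def record_scan(self, elements_scanned):
        # Called once per collection scan with the number of elements touched.
        if elements_scanned > self.threshold:
            self.huge_scan_count += 1

stats = ScanStats(threshold=10000)
stats.record_scan(500)    # normal bop range
stats.record_scan(50000)  # mis-encoded range -> counted as a huge scan
```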

The guide and build scripts need improvement so that Arcus builds across multiple Ubuntu environments.

Building per the manual on clean installs of Ubuntu 12.04, 16.04, and 20.04 gives the following results:

Ubuntu 20.04.1
Build fails because libcppunit-dev 1.15 removed its M4 macros.

Ubuntu 18.04.5
Works correctly.

Ubuntu 12.04.5
Different build errors depending on the JDK version:
openjdk-6: SSL protocol error
openjdk-7: error related to HOME_PATH

The version-dependent requirements of packages, including the JDK, should be documented more clearly, and the guide should cover building in a wider variety of environments.

Support docker

Is there any plan to support a dockerized Arcus?

Docker Hub seems to have many Dockerfiles, but the repositories are fragmented and some of them are outdated.

Do you have any plan to support Docker (or to provide a guide for dockerizing Arcus)?

[Solved] Upgrade Python packages for the multi-server setup

When configuring multiple servers, setting zkaddr and zkclient to an IP other than 127.0.0.1 causes a paramiko error due to a Python version problem.


To fix this:

  1. Install the latest fab:
    easy_install pip
    pip install --upgrade fab

  2. Update Arcus's bundled libraries: in build.sh, change line 112 from
    $pythonpath fabric==1.8.3 to
    $pythonpath fabric==1.14.0

These two changes resolve the problem.

Arcus build problems on OS X Yosemite

Currently, building Arcus on OS X Yosemite fails.

arcus-memcached produces the following errors:

In file included from config_parser.c:27:
./include/memcached/util.h:43:9: error: 'htonll' macro redefined [-Werror,-Wmacro-redefined]
#define htonll mc_htonll
        ^
/usr/include/sys/_endian.h:141:9: note: previous definition is here
#define htonll(x)       __DARWIN_OSSwapInt64(x)
        ^
In file included from config_parser.c:27:
./include/memcached/util.h:44:9: error: 'ntohll' macro redefined [-Werror,-Wmacro-redefined]
#define ntohll mc_ntohll
        ^
/usr/include/sys/_endian.h:140:9: note: previous definition is here
#define ntohll(x)       __DARWIN_OSSwapInt64(x)
        ^
2 errors generated.
make[2]: *** [config_parser.lo] Error 1
make[1]: *** [all-recursive] Error 1
make: *** [all] Error 2

ZooKeeper fails with the following errors:

./include/recordio.h:76:9: error: expected ')'
int64_t htonll(int64_t v);
        ^
/usr/include/sys/_endian.h:141:25: note: expanded from macro 'htonll'
#define htonll(x)       __DARWIN_OSSwapInt64(x)
                        ^
/usr/include/libkern/_OSByteOrder.h:78:30: note: expanded from macro '__DARWIN_OSSwapInt64'
    (__builtin_constant_p(x) ? __DARWIN_OSSwapConstInt64(x) : _OSSwapInt64(x))
                             ^
./include/recordio.h:76:9: note: to match this '('
/usr/include/sys/_endian.h:141:25: note: expanded from macro 'htonll'
#define htonll(x)       __DARWIN_OSSwapInt64(x)
                        ^
/usr/include/libkern/_OSByteOrder.h:78:5: note: expanded from macro '__DARWIN_OSSwapInt64'
    (__builtin_constant_p(x) ? __DARWIN_OSSwapConstInt64(x) : _OSSwapInt64(x))
    ^
In file included from src/zookeeper.c:27:
In file included from ./include/zookeeper.h:34:
./include/recordio.h:76:9: error: conflicting types for '__builtin_constant_p'
int64_t htonll(int64_t v);
        ^
/usr/include/sys/_endian.h:141:25: note: expanded from macro 'htonll'
#define htonll(x)       __DARWIN_OSSwapInt64(x)
                        ^
/usr/include/libkern/_OSByteOrder.h:78:6: note: expanded from macro '__DARWIN_OSSwapInt64'
    (__builtin_constant_p(x) ? __DARWIN_OSSwapConstInt64(x) : _OSSwapInt64(x))
     ^
./include/recordio.h:76:9: note: '__builtin_constant_p' is a builtin with type 'int ()'
/usr/include/sys/_endian.h:141:25: note: expanded from macro 'htonll'
#define htonll(x)       __DARWIN_OSSwapInt64(x)
                        ^
/usr/include/libkern/_OSByteOrder.h:78:6: note: expanded from macro '__DARWIN_OSSwapInt64'
    (__builtin_constant_p(x) ? __DARWIN_OSSwapConstInt64(x) : _OSSwapInt64(x))
     ^
2 errors generated.
make[1]: *** [zookeeper.lo] Error 1
make: *** [all] Error 2

Both failures occur because OS X Yosemite defines these macros as follows:

#define htonll(x)       __DARWIN_OSSwapInt64(x)
#define ntohll(x)       __DARWIN_OSSwapInt64(x)

For ZooKeeper, applying the zookeeper issue:commit fixes the problem.

For arcus-memcached, change the existing code in include/memcached/util.h from

42 #ifndef HAVE_HTONLL
43 #define htonll mc_htonll
44 #define ntohll mc_ntohll

to the following:

42 #ifndef HAVE_HTONLL
43 #undef htonll
44 #undef ntohll
45 #define htonll mc_htonll
46 #define ntohll mc_ntohll

However, it still needs to be verified that nothing breaks when the macros are redefined after the undef (i.e. when __DARWIN_OSSwapInt64(x) is no longer used).

easy_install command not found on Ubuntu 18.04

On Ubuntu 20.04 and 18.04, the easy_install command is not provided by the packages in the command from the Arcus guide. You therefore have to edit build.sh to specify the full path to easy_install.
(The Arcus documentation needs to be updated.)

[Command from the Arcus guide]
sudo apt-get install build-essential autoconf automake libtool libcppunit-dev python-setuptools python-dev (Ubuntu)

[Workaround: build.sh]
easy_install="python /usr/lib/python2.7/dist-packages/easy_install.py " # add line
....
$easy_install -a -d $pythonpath -i $pythonsimpleindex kazoo==2.6.1 1>> $arcus_directory/scripts/build.log 2>&1
printf "\r[python kazoo library install] .. SUCCEED\n"

[function][monitoring] add CPU usage in stats

Since the Arcus server now handles CPU-heavy operations, the need to monitor the process's CPU usage has grown.

In particular, situations where the server's CPU usage approaches 100% and timeouts occur leave no trace in the metrics: existing monitoring tools record system-wide CPU usage, so on a multi-core machine the usage appears to be only a few percent in that situation.

Moreover, if other instances are running on the same system, it is impossible to tell which one caused a change.

For now we watch this separately with top, but per-process CPU usage should additionally be tracked in stats.
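On Linux, the per-process CPU usage requested here can be sampled from /proc/&lt;pid&gt;/stat; the sketch below reads utime/stime ticks and converts the delta over an interval into a percentage of one core (field indices follow proc(5); the sampling interval is arbitrary):

```python
import os
import time

CLK_TCK = os.sysconf("SC_CLK_TCK")  # kernel ticks per second

def cpu_ticks(pid):
    """Return utime + stime (in clock ticks) for the given process."""
    with open("/proc/%d/stat" % pid) as f:
        # Split after the ")" that closes the comm field, which may contain
        # spaces; the remaining fields are whitespace-separated.
        fields = f.read().rsplit(")", 1)[1].split()
    # fields[11] is utime and fields[12] is stime (fields 14 and 15 of
    # /proc/<pid>/stat, counted after pid and comm).
    return int(fields[11]) + int(fields[12])

def cpu_percent(pid, interval=0.5):
    """Sample the process's CPU usage as a percentage of one core."""
    t0 = cpu_ticks(pid)
    time.sleep(interval)
    t1 = cpu_ticks(pid)
    return 100.0 * (t1 - t0) / CLK_TCK / interval
```

A value near 100% for a single worker thread on a multi-core machine is exactly the condition that system-wide CPU metrics hide.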

A plan for Python 3 support is needed

Building under Python 3 produces no errors, but running arcus.sh fails:

[jam2in@docker-desktop scripts]$ ./arcus.sh zookeeper init
Traceback (most recent call last):
  File "/home/jam2in/arcus/scripts/fab", line 11, in <module>
    load_entry_point('fabric==2.5.0', 'console_scripts', 'fab')()
  File "/home/jam2in/arcus/lib/python/site-packages/invoke-1.4.1-py3.6.egg/invoke/program.py", line 373, in run
    self.parse_collection()
  File "/home/jam2in/arcus/lib/python/site-packages/invoke-1.4.1-py3.6.egg/invoke/program.py", line 465, in parse_collection
    self.load_collection()
  File "/home/jam2in/arcus/lib/python/site-packages/fabric-2.5.0-py3.6.egg/fabric/main.py", line 87, in load_collection
    super(Fab, self).load_collection()
  File "/home/jam2in/arcus/lib/python/site-packages/invoke-1.4.1-py3.6.egg/invoke/program.py", line 696, in load_collection
    module, parent = loader.load(coll_name)
  File "/home/jam2in/arcus/lib/python/site-packages/invoke-1.4.1-py3.6.egg/invoke/loader.py", line 76, in load
    module = imp.load_module(name, fd, path, desc)
  File "/usr/lib64/python3.6/imp.py", line 235, in load_module
    return load_source(name, filename, file)
  File "/usr/lib64/python3.6/imp.py", line 172, in load_source
    module = _load(spec)
  File "<frozen importlib._bootstrap>", line 684, in _load
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 674, in exec_module
  File "<frozen importlib._bootstrap_external>", line 781, in get_code
  File "<frozen importlib._bootstrap_external>", line 741, in source_to_code
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/jam2in/arcus/scripts/fabfile.py", line 135
    print destination, os.path.isfile(destination)
                    ^
SyntaxError: invalid syntax

fabfile.py currently uses the fabric 1.x API:

from fabric.api import *
from fabric.colors import *
from fabric.contrib.files import *
from fabric.contrib.project import *
from fabric.task_utils import merge

These APIs were removed or replaced in fabric 2.0 and later. See: https://github.com/fabric/fabric/tree/2.0/fabric
https://github.com/fabric/fabric/tree/1.14/fabric

Fabric versions usable under Python 3 require 2.0 or later, so fabfile.py should be changed to use the fabric 2.0 API. However, fabric 2.0 and later drop Python 2.6 support, and some deployments still run Arcus on Python 2.6, which constrains this work.

An approach that supports at least Python 2.6+ and Python 3 together needs to be found.
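One possible compatibility approach, sketched under the assumption that version-specific imports can be gated at runtime: keep the script itself valid under both interpreters (e.g. `__future__` print) and pick whichever fabric generation matches. The import paths shown are those of the respective fabric releases; the dispatch helper itself is hypothetical:

```python
# Runs unchanged on Python 2.6+ and Python 3: the print *function* replaces
# the Python 2 print statement that currently breaks fabfile.py under 3.x.
from __future__ import print_function
import sys

PY3 = sys.version_info[0] >= 3

def load_remote_runner():
    """Hypothetical dispatch: return a remote-execution entry point from
    whichever fabric generation is installed for this interpreter."""
    if PY3:
        from fabric import Connection  # fabric >= 2.0 API (Python 3)
        return Connection
    else:
        from fabric.api import run     # fabric 1.x API (Python 2.6+)
        return run
```

The downside is maintaining two code paths for every fabric call site, which is why the issue asks for a broader plan rather than a mechanical port.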
