
gogfapi's Introduction

GoGFAPI

gogfapi documentation on GoDoc.org

GoGFAPI is a Go wrapper around libgfapi, a userspace C library for accessing GlusterFS volumes. GoGFAPI provides an API similar to the Go standard library's os package for accessing files on GlusterFS volumes. More information on the API is available at godoc.org/github.com/gluster/gogfapi/gfapi.

Note: GoGFAPI uses cgo to bind with libgfapi.

Important: Commit 83a4c9f12fec7d6e1112b5ebbd614a679940ad45 changed the Volume.Init() function: the order of its parameters was changed to support multiple volfile servers.
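With the new ordering, the volume name comes first, followed by one or more volfile servers. A minimal sketch (the second host name is only illustrative):

vol := &gfapi.Volume{}
// Volume name first, then any number of volfile servers.
if err := vol.Init("testvol", "localhost", "server2.example.com"); err != nil {
	// handle error
}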

Using GoGFAPI

First, ensure that libgfapi is installed on your system. On Fedora and CentOS (and other EL systems), install the glusterfs-api package.
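For example, on a dnf/yum based system (package name as above):

sudo dnf install glusterfs-api    # Fedora
sudo yum install glusterfs-api    # CentOS / other EL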

Get GoGFAPI with go get:

go get -u github.com/gluster/gogfapi/gfapi

Import github.com/gluster/gogfapi/gfapi into your program to use it.

A simple example:

package main

import "github.com/gluster/gogfapi/gfapi"

func main() {
	vol := &gfapi.Volume{}
	if err := vol.Init("testvol", "localhost"); err != nil {
		// handle error
	}

	if err := vol.Mount(); err != nil {
		// handle error
	}
	defer vol.Unmount()

	f, err := vol.Create("testfile")
	if err != nil {
		// handle error
	}
	defer f.Close()

	if _, err := f.Write([]byte("hello")); err != nil {
		// handle error
	}

	return
}

gogfapi's People

Contributors

bwerthmann, jfontan, kshithijiyer, kshlm, mcuadros, mustafaakin, prashanthpai


gogfapi's Issues

gfapi.Volume not defined error when attempting to cross compile

Problem

Building the sample main.go from the README.md on macOS, I get the following result:

$ go build
# pkg-config --cflags  -- glusterfs-api glusterfs-api glusterfs-api
Package glusterfs-api was not found in the pkg-config search path.
Perhaps you should add the directory containing `glusterfs-api.pc'
to the PKG_CONFIG_PATH environment variable
No package 'glusterfs-api' found
Package glusterfs-api was not found in the pkg-config search path.
Perhaps you should add the directory containing `glusterfs-api.pc'
to the PKG_CONFIG_PATH environment variable
No package 'glusterfs-api' found
Package glusterfs-api was not found in the pkg-config search path.
Perhaps you should add the directory containing `glusterfs-api.pc'
to the PKG_CONFIG_PATH environment variable
No package 'glusterfs-api' found
pkg-config: exit status 1

OK. It seems that glusterfs-api is not available natively for macOS. Perhaps I can try cross-compiling the binary for a Linux image:

$ GOOS=linux GOARCH=amd64 go build
# main 
./main.go:6:10: undefined: gfapi.Volume

Steps to replicate

  1. A test Go project was created as per the instructions on github.com/gluster/gogfapi.
  2. GOOS=linux GOARCH=amd64 go build was executed to do the build.
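For context: gogfapi uses cgo, and the Go toolchain disables cgo by default when GOOS/GOARCH differ from the host, which is why gfapi.Volume ends up undefined in the cross build. Cross-compiling would need cgo enabled plus a Linux C cross-toolchain with libgfapi available, along the lines of the following (the compiler name is only illustrative):

CGO_ENABLED=1 CC=x86_64-linux-gnu-gcc GOOS=linux GOARCH=amd64 go build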

Memory leak creating gfapi.Volumes

Issue

In a long-running service I create multiple gfapi.Volume objects to ensure the service does not stop functioning if a singleton instance disconnects. This causes a problem: the service consumes memory over time and is eventually OOM-killed.

To replicate the issue

1. The following files are created in a directory:

Dockerfile

FROM gluster/glusterfs-client AS builder

RUN yum -y update
RUN yum install -y glusterfs-api glusterfs-api-devel glusterfs-fuse gcc
RUN yum -y install valgrind
RUN curl -k https://dl.google.com/go/go1.15.3.linux-amd64.tar.gz | tar xz -C /usr/local

ENV PATH=/usr/local/go/bin:$PATH

docker-compose.yml

version: "3.8"

volumes:
  save1:
  data1:
  lvm1:

services:
  builder:
    image: chrisb/builder
    build:
      context: .
      target: builder
    volumes:
    - ./:/src
    working_dir: /src

  glusterfs:
    image: gluster/gluster-centos
    hostname: glusterfs
    privileged: true
    environment:
      CGROUP_PIDS_MAX: 0
    volumes:
      - lvm1:/run/lvm
      - save1:/var/lib/glusterd
      - data1:/data

main.go

package main

import (
	"flag"
	"fmt"
	"log"
	"strings"

	"github.com/gluster/gogfapi/gfapi"
)

var loop = flag.Int("n", 10, "iterations")

const gfs_volume = "vol1"
var gfs_hosts = strings.Split("glusterfs", ` `)

func testMount() {
	vol := &gfapi.Volume{}

	if err := vol.Init(gfs_volume, gfs_hosts...); err != nil {
		log.Printf("failed %v\n", err)
	}

	//	if err := vol.Mount(); err != nil {
	//		log.Fatalf("failed %v", err)
	//	}
	//	defer vol.Unmount()
}

func main() {
	flag.Parse()

	for i := 0; i < *loop; i++ {
		testMount()
		fmt.Printf(".")
	}
	fmt.Printf("\ndone\n")
}

2. Start the environment

In a terminal in the folder, run the following to start the GlusterFS server and a build environment:

docker-compose up -d glusterfs
docker-compose exec glusterfs gluster volume create vol1 glusterfs:/data/vol1
docker-compose exec glusterfs gluster volume start vol1
docker-compose build builder
docker-compose run builder

3. Run valgrind

In the builder terminal, run the following commands to build the binary and generate reports using the valgrind memcheck module:

go build -o test
valgrind --log-file=log50.txt --leak-check=full ./test -n 50
valgrind --log-file=log100.txt --leak-check=full ./test -n 100

Observed Behaviour

The memory leaks reported by valgrind grow linearly as the number of iterations increases.

SIGABRT when re-using a gfapi.Volume object

Issue

In an effort to deal with some memory leaks in a Docker volume plugin based on gogfapi, I wanted to reduce the number of gfapi.Volume objects created but potentially remount them, as Docker plugins are long-running and I need the plugin to reconnect robustly.

However, re-using a gfapi.Volume object causes an error.

Steps to reproduce

  1. This Dockerfile sets up a Go build environment with gogfapi available:

FROM gluster/glusterfs-client AS builder

RUN yum -y update
RUN yum install -y glusterfs-api glusterfs-api-devel glusterfs-fuse gcc
RUN curl -k https://dl.google.com/go/go1.15.3.linux-amd64.tar.gz | tar xz -C /usr/local

ENV PATH=/usr/local/go/bin:$PATH

  2. This docker-compose.yml starts the builder container as well as a GlusterFS server:

version: "3.8"

volumes:
  save1:
  data1:
  lvm1:

services:

  builder:
    image: chrisb/builder
    build:
      context: .
      target: builder
    volumes:
    - ./src:/src
    working_dir: /src

  glusterfs:
    image: gluster/gluster-centos
    hostname: glusterfs
    privileged: true
    environment:
      CGROUP_PIDS_MAX: 0
    volumes:
      - lvm1:/run/lvm
      - save1:/var/lib/glusterd
      - data1:/data

Start GlusterFS and create a volume:

docker-compose up -d glusterfs
docker-compose exec glusterfs gluster volume create vol1 glusterfs:/data/vol1
docker-compose exec glusterfs gluster volume start vol1

  3. Create the Go program

Create src/main.go on the host system with the following contents:

package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/gluster/gogfapi/gfapi"
)

const gfs_volume = "vol1"
const gfs_hosts = "glusterfs"

func testMount(vol *gfapi.Volume) {
	if err := vol.Mount(); err != nil {
		log.Fatalf("failed %v", err)
	}
	defer vol.Unmount()
}

func main() {
	vol := &gfapi.Volume{}
	hosts := strings.Split(gfs_hosts, ` `)

	if err := vol.Init(gfs_volume, hosts...); err != nil {
		log.Fatalf("failed %v", err)
	}

	for i := 0; i < 2; i++ {
		testMount(vol)
		fmt.Printf(".")
	}
}

  4. Execute the program

In a terminal

docker-compose run builder
go run main.go

Observed Behavior

After one successful loop iteration, the program crashes:

[root@7063d3435daf reuse-volume]# go run main.go
.free(): invalid pointer
SIGABRT: abort
PC=0x7fe18c41f57f m=0 sigcode=18446744073709551610

goroutine 0 [idle]:
runtime: unknown pc 0x7fe18c41f57f
stack: frame={sp:0x7ffdbd0d2c70, fp:0x0} stack=[0x7ffdbc8d4108,0x7ffdbd0d3140)
00007ffdbd0d2b70:  0000000000000000  0000000000000000 
00007ffdbd0d2b80:  0000000000000000  0000000000000000 
00007ffdbd0d2b90:  0000000000000000  0000000000000000 
00007ffdbd0d2ba0:  0000000000000000  0000000000000000 
00007ffdbd0d2bb0:  0000000000000000  0000000000000000 
00007ffdbd0d2bc0:  0000000000000000  0000000000000000 
00007ffdbd0d2bd0:  0000000000000000  0000000000000000 
00007ffdbd0d2be0:  0000000000000000  0000000000000000 
00007ffdbd0d2bf0:  0000000000000000  0000000000000000 
00007ffdbd0d2c00:  0000000000000000  0000000000000000 
00007ffdbd0d2c10:  0000000000000000  0000000000000000 
00007ffdbd0d2c20:  0000000000000000  0000000000000000 
00007ffdbd0d2c30:  0000000000000000  0000000000000000 
00007ffdbd0d2c40:  0000000000000000  0000000000000000 
00007ffdbd0d2c50:  0000000000000000  0000000000000000 
00007ffdbd0d2c60:  0000000000000000  0000000000000000 
00007ffdbd0d2c70: <0000000000000000  0000000000000000 
00007ffdbd0d2c80:  0000000000000000  0000000000000000 
00007ffdbd0d2c90:  0000000000000000  0000000000000000 
00007ffdbd0d2ca0:  0000000000000000  0000000000000000 
00007ffdbd0d2cb0:  0000000000000000  0000000000000000 
00007ffdbd0d2cc0:  0000000000000000  0000000000000000 
00007ffdbd0d2cd0:  0000000000000000  0000000000000000 
00007ffdbd0d2ce0:  0000000000000000  0000000000000000 
00007ffdbd0d2cf0:  fffffffe7fffffff  ffffffffffffffff 
00007ffdbd0d2d00:  ffffffffffffffff  ffffffffffffffff 
00007ffdbd0d2d10:  ffffffffffffffff  ffffffffffffffff 
00007ffdbd0d2d20:  ffffffffffffffff  ffffffffffffffff 
00007ffdbd0d2d30:  ffffffffffffffff  ffffffffffffffff 
00007ffdbd0d2d40:  ffffffffffffffff  ffffffffffffffff 
00007ffdbd0d2d50:  ffffffffffffffff  ffffffffffffffff 
00007ffdbd0d2d60:  ffffffffffffffff  ffffffffffffffff 
runtime: unknown pc 0x7fe18c41f57f
stack: frame={sp:0x7ffdbd0d2c70, fp:0x0} stack=[0x7ffdbc8d4108,0x7ffdbd0d3140)
00007ffdbd0d2b70:  0000000000000000  0000000000000000 
00007ffdbd0d2b80:  0000000000000000  0000000000000000 
00007ffdbd0d2b90:  0000000000000000  0000000000000000 
00007ffdbd0d2ba0:  0000000000000000  0000000000000000 
00007ffdbd0d2bb0:  0000000000000000  0000000000000000 
00007ffdbd0d2bc0:  0000000000000000  0000000000000000 
00007ffdbd0d2bd0:  0000000000000000  0000000000000000 
00007ffdbd0d2be0:  0000000000000000  0000000000000000 
00007ffdbd0d2bf0:  0000000000000000  0000000000000000 
00007ffdbd0d2c00:  0000000000000000  0000000000000000 
00007ffdbd0d2c10:  0000000000000000  0000000000000000 
00007ffdbd0d2c20:  0000000000000000  0000000000000000 
00007ffdbd0d2c30:  0000000000000000  0000000000000000 
00007ffdbd0d2c40:  0000000000000000  0000000000000000 
00007ffdbd0d2c50:  0000000000000000  0000000000000000 
00007ffdbd0d2c60:  0000000000000000  0000000000000000 
00007ffdbd0d2c70: <0000000000000000  0000000000000000 
00007ffdbd0d2c80:  0000000000000000  0000000000000000 
00007ffdbd0d2c90:  0000000000000000  0000000000000000 
00007ffdbd0d2ca0:  0000000000000000  0000000000000000 
00007ffdbd0d2cb0:  0000000000000000  0000000000000000 
00007ffdbd0d2cc0:  0000000000000000  0000000000000000 
00007ffdbd0d2cd0:  0000000000000000  0000000000000000 
00007ffdbd0d2ce0:  0000000000000000  0000000000000000 
00007ffdbd0d2cf0:  fffffffe7fffffff  ffffffffffffffff 
00007ffdbd0d2d00:  ffffffffffffffff  ffffffffffffffff 
00007ffdbd0d2d10:  ffffffffffffffff  ffffffffffffffff 
00007ffdbd0d2d20:  ffffffffffffffff  ffffffffffffffff 
00007ffdbd0d2d30:  ffffffffffffffff  ffffffffffffffff 
00007ffdbd0d2d40:  ffffffffffffffff  ffffffffffffffff 
00007ffdbd0d2d50:  ffffffffffffffff  ffffffffffffffff 
00007ffdbd0d2d60:  ffffffffffffffff  ffffffffffffffff 

goroutine 1 [syscall]:
runtime.cgocall(0x4ad8d0, 0xc00003ae18, 0x1)
        /usr/local/go/src/runtime/cgocall.go:133 +0x5b fp=0xc00003ade8 sp=0xc00003adb0 pc=0x406b1b
github.com/gluster/gogfapi/gfapi._C2func_glfs_init(0x182c100, 0x0, 0x0, 0x0)
        _cgo_gotypes.go:383 +0x55 fp=0xc00003ae18 sp=0xc00003ade8 pc=0x4abb35
github.com/gluster/gogfapi/gfapi.(*Volume).Mount.func1(0xc00003af60, 0xc0000b0080, 0x1, 0xc00003ae90)
        /root/go/pkg/mod/github.com/gluster/[email protected]/gfapi/volume.go:92 +0x55 fp=0xc00003ae50 sp=0xc00003ae18 pc=0x4ac955
github.com/gluster/gogfapi/gfapi.(*Volume).Mount(0xc00003af60, 0x1, 0x8)
        /root/go/pkg/mod/github.com/gluster/[email protected]/gfapi/volume.go:92 +0x2f fp=0xc00003aea8 sp=0xc00003ae50 pc=0x4ac3af
main.testMount(0xc00003af60)
        /src/reuse-volume/main.go:17 +0x3c fp=0xc00003af08 sp=0xc00003aea8 pc=0x4acb3c
main.main()
        /src/reuse-volume/main.go:32 +0x129 fp=0xc00003af88 sp=0xc00003af08 pc=0x4acd29
runtime.main()
        /usr/local/go/src/runtime/proc.go:204 +0x209 fp=0xc00003afe0 sp=0xc00003af88 pc=0x439009
runtime.goexit()
        /usr/local/go/src/runtime/asm_amd64.s:1374 +0x1 fp=0xc00003afe8 sp=0xc00003afe0 pc=0x467961

rax    0x0
rbx    0x6
rcx    0x7fe18c41f57f
rdx    0x0
rdi    0x2
rsi    0x7ffdbd0d2c70
rbp    0x7ffdbd0d2fc0
rsp    0x7ffdbd0d2c70
r8     0x0
r9     0x7ffdbd0d2c70
r10    0x8
r11    0x246
r12    0x1000
r13    0x7ffdbd0d2ee0
r14    0x10
r15    0x7fe164008000
rip    0x7fe18c41f57f
rflags 0x246
cs     0x33
fs     0x0
gs     0x0
exit status 2

Explicit malloc() and free() overhead for C strings

Continuing discussion from pull request https://github.com/kshlm/gogfapi/pull/5

Was doing an ltrace of the gogfapi client (Swift's object server) and noticed this performance overhead (a sketch of the cgo pattern behind the malloc/free pairs follows the ltrace output below):

[root@hummingbird ~]# curl -v -X PUT http://localhost:8080/v1/AUTH_test/c1/o4 -d'hello'
[root@hummingbird gfapi]# ltrace -T -ff -p 28906

[pid 28908] malloc(17, 0xc820000f00, 0xc82004afa8, 0xc82004aff8)                                                                                   = 0x7f28f80008c0 <0.021299>
[pid 28908] __errno_location(0xc82004b080, 0xc820000f00, 0xc82004aff0, 0xc82004b080)                                                               = 0x7f291ce45640 <0.006508>
[pid 28908] glfs_stat(0x1ad2550, 0x7f28f80008c0, 0xc82017bd40, 0xc82004b080)                                                                       = 0xffffffff <0.015113>
[pid 28908] free(0x7f28f80008c0, 0xc820000f00, 0xc82004b018, 0xc82004b080)                                                                         = 0 <0.003132>
[pid 28908] malloc(14, 0xc820000f00, 0xc82004aee0, 0xc82004af30)                                                                                   = 0x7f28f80008c0 <0.002026>
[pid 28908] __errno_location(0xc82004afb8, 0xc820000f00, 0xc82004af28, 0xc82004afb8)                                                               = 0x7f291ce45640 <0.002569>
[pid 28908] glfs_stat(0x1ad2550, 0x7f28f80008c0, 0xc82017bdd0, 0xc82004afb8)                                                                       = 0 <0.003362>
[pid 28908] free(0x7f28f80008c0, 0xc820000f00, 0xc82004af50, 0xc82004afb8)                                                                         = 0 <0.001869>
[pid 28908] malloc(51, 0xc820000f00, 0xc82004af80, 0xc82004afd0)                                                                                   = 0x7f28f8002170 <0.001659>
[pid 28908] __errno_location(0xc82004b058, 0xc820000f00, 0xc82004afc0, 0xc82004b058)                                                               = 0x7f291ce45640 <0.001534>
[pid 28908] glfs_creat(0x1ad2550, 0x7f28f8002170, 193, 438)                                                                                        = 0x7f28f80019e0 <0.042730>
[pid 28908] free(0x7f28f8002170, 0xc820000f00, 0xc82004aff0, 0xc82004b058)                                                                         = 0 <0.002435>
[pid 28908] __errno_location(0xc82004af50, 0xc820000f00, 0xc82004aeb0, 0xc82004af50)                                                               = 0x7f291ce45640 <0.020759>
[pid 28908] glfs_write(0x7f28f80019e0, 0xc820140000, 5, 0)                                                                                         = 5 <0.003916>
[pid 28908] malloc(20, 0xc820000f00, 0xc82004aee8, 0xc82004af38)                                                                                   = 0x7f28f8003160 <0.002540>
[pid 28908] __errno_location(0xc82004afc0, 0xc820000f00, 0xc82004af20, 0xc82004afc0)                                                               = 0x7f291ce45640 <0.001196>
[pid 28908] glfs_fsetxattr(0x7f28f80019e0, 0x7f28f8003160, 0xc8200872c0, 168)                                                                      = 0 <0.020598>
[pid 28908] free(0x7f28f8003160, 0xc820000f00, 0xc82004af58, 0xc82004afc0)                                                                         = 0 <0.002918>
[pid 28908] __errno_location(0xc82004b150, 0xc820000f00, 0xc82004b0c0, 0xc82004b150)                                                               = 0x7f291ce45640 <0.003208>
[pid 28908] glfs_fsync(0x7f28f80019e0, 0xc820000f00, 0xc82004b0c0, 0xc82004b150)                                                                   = 0 <0.002416>
[pid 28908] __errno_location(0xc82004b160, 0xc820000f00, 0xc82004b0d0, 0xc82004b160)                                                               = 0x7f291ce45640 <0.001559>
[pid 28908] glfs_close(0x7f28f80019e0, 0xc820000f00, 0xc82004b0d0, 0xc82004b160)                                                                   = 0 <0.002229>
[pid 28908] malloc(51, 0xc820000f00, 0xc82004b068, 0xc82004b0b8)                                                                                   = 0x7f28f8004010 <0.001169>
[pid 28908] malloc(17, 0xc820000f00, 0xc82004b068, 0xc82004b0b8)                                                                                   = 0x7f28f80008c0 <0.000957>
[pid 28908] __errno_location(0xc82004b140, 0xc820000f00, 0xc82004b0b0, 0xc82004b140)                                                               = 0x7f291ce45640 <0.000980>
[pid 28908] glfs_rename(0x1ad2550, 0x7f28f8004010, 0x7f28f80008c0, 0xc82004b140)                                                                   = 0 <0.081158>
[pid 28908] free(0x7f28f80008c0, 0xc820000f00, 0xc82004b0d8, 0xc82004b140)                                                                         = 0 <0.002223>
[pid 28908] free(0x7f28f8004010, 0xc820000f00, 0xc82004b0d8, 0xc82004b140)                                                                         = 0 <0.003288>

For one object PUT

[root@hummingbird gfapi]# ltrace -c -T -ff -p 28906



 time     seconds  usecs/call     calls      function
------ ----------- ----------- --------- --------------------
 67.42    0.104526      104526         1 glfs_rename
  6.98    0.010820       10820         1 glfs_fsync
  5.34    0.008280         690        12 free
  4.99    0.007734         644        12 malloc
  4.91    0.007614         692        11 __errno_location
  3.18    0.004934        1644         3 glfs_getxattr
  3.12    0.004835        4835         1 glfs_creat
  1.47    0.002275        2275         1 glfs_fsetxattr
  1.25    0.001935         967         2 glfs_stat
  0.83    0.001281        1281         1 glfs_close
  0.52    0.000805         805         1 glfs_write
------ ----------- ----------- --------- --------------------
100.00    0.155039                    46 total

For a hundred object PUTs

[root@hummingbird gfapi]# ltrace -c -ff -p 28906

^C% time     seconds  usecs/call     calls      function
------ ----------- ----------- --------- --------------------
 43.27    5.952481       59524       100 glfs_rename
 21.39    2.942866       29428       100 glfs_fsync
  7.35    1.011715       10117       100 glfs_fsetxattr
  7.26    0.999125        9991       100 glfs_creat
  5.04    0.692812        1154       600 free
  4.95    0.681092         851       800 __errno_location
  4.67    0.641790        3208       200 glfs_stat
  3.96    0.545340         908       600 malloc
  1.26    0.173293        1732       100 glfs_close
  0.84    0.115284        1152       100 glfs_write
------ ----------- ----------- --------- --------------------
100.00   13.755798                  2800 total
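For context, the malloc/free pairs come from converting each Go path string to a C string around the corresponding libgfapi call, as cgo requires. A minimal self-contained sketch of that pattern (not gogfapi's actual code; the C helper and path are only illustrative):

package main

/*
#include <stdlib.h>
#include <string.h>
static size_t path_len(const char *p) { return strlen(p); }
*/
import "C"

import (
	"fmt"
	"unsafe"
)

// Every wrapped call that takes a path allocates a C copy of the Go string
// and frees it again afterwards -- one malloc/free pair per call, which is
// what the ltrace output above shows around each glfs_* invocation.
func withCPath(path string) int {
	cpath := C.CString(path)            // malloc + copy
	defer C.free(unsafe.Pointer(cpath)) // free
	return int(C.path_len(cpath))
}

func main() {
	fmt.Println(withCPath("/v1/AUTH_test/c1/o4"))
}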

[gluster6] fails to compile because of glusterfs API incompatibility

Trying to use gogfapi on Fedora and it fails:

gfapi/fd_legacy.go:17:26: not enough arguments in call to _C2func_glfs_fsync
        have (*_Ctype_struct_glfs_fd)
        want (*_Ctype_struct_glfs_fd, *_Ctype_struct_glfs_stat, *_Ctype_struct_glfs_stat)
gfapi/fd_legacy.go:28:28: not enough arguments in call to _C2func_glfs_ftruncate
        have (*_Ctype_struct_glfs_fd, _Ctype_long)
        want (*_Ctype_struct_glfs_fd, _Ctype_long, *_Ctype_struct_glfs_stat, *_Ctype_struct_glfs_stat)
gfapi/fd_legacy.go:37:24: not enough arguments in call to _C2func_glfs_pread
        have (*_Ctype_struct_glfs_fd, unsafe.Pointer, _Ctype_ulong, _Ctype_long, number)
        want (*_Ctype_struct_glfs_fd, unsafe.Pointer, _Ctype_ulong, _Ctype_long, _Ctype_int, *_Ctype_struct_glfs_stat)
gfapi/fd_legacy.go:46:25: not enough arguments in call to _C2func_glfs_pwrite
        have (*_Ctype_struct_glfs_fd, unsafe.Pointer, _Ctype_ulong, _Ctype_long, number)
        want (*_Ctype_struct_glfs_fd, unsafe.Pointer, _Ctype_ulong, _Ctype_long, _Ctype_int, *_Ctype_struct_glfs_stat, *_Ctype_struct_glfs_stat)
FAIL    github.com/gluster/gogfapi/gfapi [build failed]
FAIL

It looks like Gluster 6 changed its API. From the release-6 notes:

gfapi: A class of APIs have been enhanced to return pre/post gluster_stat information

A set of APIs have been enhanced to return pre/post gluster_stat information. Applications using gfapi would need to adapt to the newer interfaces to compile against release-6 APIs. Pre-compiled applications, or applications using the older API SDK, will continue to work as before.

Are there any plans to adapt to the newer interfaces?
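For reference, the compile errors above show the release-6 signatures; adapting a wrapper essentially means passing the extra pre/post glfs_stat out-parameters (or NULL when they are not needed). A rough sketch of one such adaptation, assuming glusterfs-api-devel >= 6 is installed; the Fd type and its fd field are only stand-ins for gogfapi's real wrapper:

package gfapi

// #cgo pkg-config: glusterfs-api
// #include <glusterfs/api/glfs.h>
import "C"

// Fd is a stand-in for gogfapi's file-descriptor wrapper.
type Fd struct {
	fd *C.glfs_fd_t
}

// Fsync flushes fd. Against release-6 headers, glfs_fsync takes optional
// pre/post stat out-parameters; nil is passed here since they are unused.
func (f *Fd) Fsync() error {
	ret, err := C.glfs_fsync(f.fd, nil, nil)
	if ret < 0 {
		return err
	}
	return nil
}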

Remove errno workaround introduced in #11

PR #11 introduced a workaround to fix EINVAL being returned on successful writes. This is most likely a bug in gfapi; we need to file a bug for it and have it fixed. Once gfapi has the fix, we should remove the workaround from gogfapi.

Gluster 7.0

It seems I am unable to build gogfapi against libgfapi from Gluster 7.0.

Is this anything you have in your pipeline?

RemoveAll function not implemented

// RemoveAll removes path and any children it contains

Hey. I need to remove all directories and files within a given directory. It looks like this has not been implemented, but the comment for the function is present. Is it something that will be implemented?
