lsm / hdfs-fuse
Automatically exported from code.google.com/p/hdfs-fuse
============
HDFS-FUSE
============

HDFS-FUSE lets you mount an HDFS in userspace.

===========
How to use
===========

On Linux 2.6:

FUSE:
  # yum install fuse fuse-libs fuse-devel           (Red Hat / SuSE)
  # sudo apt-get install fuse fuse-libs fuse-devel  (Debian / Ubuntu)

Hadoop:
  $ tar xvzf hadoop-0.18.1.tar.gz
  $ cd hadoop-0.18.1/bin
  (for Pseudo-Distributed mode)
  $ ./hadoop namenode -format
  $ ./start-all.sh
  (a Java Runtime Environment is needed to run Hadoop; see Hadoop's reference)

HDFS-FUSE:
  $ tar xvzf hdfs-fuse.<ARCH>.tar.gz
  $ cd hdfs-fuse.<ARCH>/conf
  (modify hdfs-fuse.conf for your Hadoop cluster)
  $ cd ../bin

  (to mount)
  $ export JAVA_HOME=<your-java-installation-directory>
  $ export HADOOP_HOME=<your-hadoop-installation-directory>
  $ export FUSE_HOME=<your-fuse-installation-directory>
  $ export HDFS_FUSE_HOME=<the-current-directory>
  $ export HDFS_FUSE_CONF=<the-current-directory>/conf
  $ ./hdfs-mount <your-mount-point>

  (to unmount)
  $ fusermount -u <your-mount-point>

============
Notes
============

The current implementation of HDFS-FUSE supports only a limited set of
filesystem features.

Supported operations:
  getattr() mkdir() rmdir() unlink() rename() truncate() open() read()
  write() statfs() flush() release() readdir() init() destroy() access()
  create() ftruncate() fgetattr()

Fake implementations:
  chmod() chown() utime() setxattr() getxattr() listxattr() removexattr()

Not supported:
  readlink() mknod() symlink() link() fsync() opendir() fsyncdir() lock()
  bmap()

Future roadmap:
  More feature implementations with the Hadoop trunk release.

======================
Questions / Bug report
======================

If you have any question or comment, feel free to send email to
<[email protected]>.
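The mount steps above can be wrapped in a small helper; a minimal sketch, in which every path is a placeholder (not a real layout) that must be adjusted to your installation:

```shell
# Sketch only: all paths below are placeholder assumptions.
mount_hdfs() {
    export JAVA_HOME=/usr/java/jdk1.6.0            # placeholder
    export HADOOP_HOME=/usr/hadoop/hadoop-0.18.1   # placeholder
    export FUSE_HOME=/usr                          # placeholder
    export HDFS_FUSE_HOME=/usr/hdfs-fuse           # placeholder
    export HDFS_FUSE_CONF="$HDFS_FUSE_HOME/conf"
    mkdir -p "$1"                                  # mount point from the caller
    "$HDFS_FUSE_HOME/bin/hdfs-mount" "$1"
}
# usage: mount_hdfs /mnt/hdfs    (unmount with: fusermount -u /mnt/hdfs)
```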
What steps will reproduce the problem?
1. I set up a 5-node Hadoop cluster (1 namenode together with 4 datanodes); and
2. started hdfs-fuse on dnode1; and then
3. the HDFS appears in /mnt/hdfs; and also
4. I can create a file or copy one from anywhere; but
5. I cannot create a directory.
What is the expected output? What do you see instead?
as described above.
What version of the product are you using? On what operating system?
I think it is the latest currently (as of Oct 31st, 2008).
Please provide any additional information below.
If you want any more information, please don't hesitate to contact me.
Original issue reported on code.google.com by [email protected]
on 31 Oct 2008 at 6:38
mkdir /usr/mnt
./hdfs-mount /usr/mnt
./hdfs-fuse: error while loading shared libraries: libhdfs.so: cannot open
shared object file: No such file or directory
========
$ export JAVA_HOME=/usr/java/jdk1.6.0_21
$ export HADOOP_HOME=/usr/hadoop/hadoop-0.21.0
$ export FUSE_HOME=/usr/fuse-2.8.5
$ export HDFS_FUSE_HOME=/usr/hdfsfuse
$ export HDFS_FUSE_CONF=/usr/hdfsfuse/conf
========
How can I resolve this? Thank you very much.
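A "cannot open shared object file" error for libhdfs.so usually means the runtime linker cannot find the Hadoop and JVM native libraries; the HDFS_FUSE_* variables alone do not extend the linker search path. A minimal sketch, assuming (unverified) the 32-bit library layout of Hadoop 0.21 and a Sun JDK — check the actual directories on your system first:

```shell
export JAVA_HOME=/usr/java/jdk1.6.0_21
export HADOOP_HOME=/usr/hadoop/hadoop-0.21.0
# Assumed locations of libhdfs.so and libjvm.so; verify with `find`.
export LD_LIBRARY_PATH="$HADOOP_HOME/c++/Linux-i386-32/lib:$JAVA_HOME/jre/lib/i386/server:$LD_LIBRARY_PATH"
# then retry:  ./hdfs-mount /usr/mnt
```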
Original issue reported on code.google.com by [email protected]
on 19 Mar 2011 at 1:32
What steps will reproduce the problem?
1. With Java 1.6.0_11, go to the mount point and type "ls"
What is the expected output? What do you see instead?
List of files and directories
What version of the product are you using? On what operating system?
Latest
Please provide any additional information below.
# An unexpected error has been detected by Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x4a12ae17, pid=16447, tid=11811728
#
# Java VM: Java HotSpot(TM) Server VM (11.0-b16 mixed mode linux-x86)
# Problematic frame:
# C [libc.so.6+0x6ee17] strstr+0x27
#
# An error report file with more information is saved as:
# /home/hadoop/hdfs-fuse/bin/hs_err_pid16447.log
Original issue reported on code.google.com by [email protected]
on 10 Dec 2008 at 9:42
Maybe hdfs-fuse and fuse-dfs (in the contrib branch of Hadoop) could join
forces, merge all their features, and work together on new ones. I would
suggest contrib, since it is part of Hadoop itself and is licensed under the
Apache license.
See:
http://issues.apache.org/jira/browse/HADOOP/component/12312376
and:
http://svn.apache.org/repos/asf/hadoop/core/trunk/src/contrib/fuse-dfs/
Original issue reported on code.google.com by [email protected]
on 1 Aug 2008 at 6:48
We installed FUSE via yum and are unsure what to use as FUSE_HOME when
deploying hdfs-fuse:
$ ldd ./hdfs-fuse
linux-gate.so.1 => (0xffffe000)
libfuse.so.2 => not found
libhdfs.so => not found
libjvm.so => not found
libhdfs-fuse.so => not found
libc.so.6 => /lib/libc.so.6 (0xf7d99000)
libm.so.6 => /lib/libm.so.6 (0xf7d71000)
/lib/ld-linux.so.2 (0xf7ee9000)
libfuse.so.2 is in /usr/lib64...
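When FUSE comes from yum rather than a source install, there is no single FUSE prefix to point FUSE_HOME at. One way to locate the library the loader will actually use (a sketch; the paths printed will vary by system):

```shell
# Ask the dynamic linker cache where libfuse.so.2 lives.
libfuse_path=$(ldconfig -p 2>/dev/null | grep 'libfuse\.so\.2' | awk '{print $NF}' | head -n 1)
echo "libfuse.so.2 -> ${libfuse_path:-not found}"
# e.g. /usr/lib64/libfuse.so.2 would suggest FUSE_HOME=/usr; the other
# "not found" entries (libhdfs.so, libjvm.so) need LD_LIBRARY_PATH instead.
```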
Original issue reported on code.google.com by [email protected]
on 26 Aug 2009 at 10:28
What steps will reproduce the problem?
1. The C code is like this:
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>   /* for close() */

int main(int argc, char **argv) {
    int fp;
    fp = open("/mnt/hdfs/fusetest", O_CREAT | O_RDWR, 0664);
    if (fp < 0) {
        perror("error creating");   /* print the errno, not just a message */
        return -1;
    }
    close(fp);
    return 0;
}
2. compile the code with gcc and run the executable.
What is the expected output? What do you see instead?
I expect the file can be created successfully.
What version of the product are you using? On what operating system?
hadoop-0.20.2 on ubuntu 9.04 amd64. Kernel version 2.6.28.10
Please provide any additional information below.
fopen() works, but open() does not.
Original issue reported on code.google.com by [email protected]
on 8 Jul 2011 at 8:04
What steps will reproduce the problem?
1. Use ant to compile hdfs-fuse.
2. Can someone make a new tar file for hadoop-1.0.0?
What is the expected output? What do you see instead?
[root@pcuwtwin07 hadoop-1.0.0]# ant compile -Dfusedfs=1
Buildfile: build.xml
clover.setup:
clover.info:
[echo]
[echo] Clover not found. Code coverage reports disabled.
[echo]
clover:
ivy-download:
[get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
[get] To: /opt/hadoop-1.0.0/ivy/ivy-2.1.0.jar
[get] Not modified - so not downloaded
ivy-init-dirs:
ivy-probe-antlib:
BUILD FAILED
/opt/hadoop-1.0.0/build.xml:2316: Class
org.apache.tools.ant.taskdefs.ConditionTask doesn't support the nested
"typefound" element.
Total time: 0 seconds
What version of the product are you using? On what operating system?
I am using hadoop-1.0.0 on Scientific Linux 6
Please provide any additional information below.
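For context: the nested "typefound" element is a condition introduced in Ant 1.7, so this failure typically indicates an older Ant first on the PATH rather than a problem in build.xml. A quick check (assuming ant is installed):

```shell
# Capture the Ant version, falling back gracefully if ant is absent.
ant_version=$(ant -version 2>/dev/null || echo "ant not on PATH")
echo "$ant_version"
# build.xml's <condition><typefound .../></condition> needs Ant >= 1.7
```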
Original issue reported on code.google.com by [email protected]
on 19 Jan 2012 at 8:07
cp, rm, mkdir work fine
but when I do "ls" it causes the error "end point disconnected".
After that the drive gets unmounted.
I resolved the issue by modifying hdfs_readdir in hdfs.c
Attached the patch.
Original issue reported on code.google.com by [email protected]
on 5 Sep 2009 at 5:26
Attachments:
What steps will reproduce the problem?
1. I have installed HADOOP 0.16.4 on Gentoo (i386) 2.6.23 Kernel
2. I have all needed packages installed: HDFS, FUSE 2.7.0 kernel module + libs
3. Java is 1.6.0.07
hadoop@pandora ~ $ hadoop dfs -ls
Found 2 items
/user/hadoop/sis2.tar   <r 4>   1853480960   2008-07-29 15:30   rw-r--r--   hadoop   supergroup
/user/hadoop/sis3.tar   <r 4>   1853480960   2008-07-29 15:45   rw-r--r--   hadoop   supergroup
hadoop@pandora ~ $ ls -ls /mnt
total 12732
0 drwxr-xr-x 2 root root 48 Jun 22 2007 cdrom
0 drwx------ 2 root root 72 Aug 3 2006 floppy
0 drwxr-xr-x 5 hadoop hadoop 216 Jul 30 2008 hadoop
12732 -rw-r--r-- 1 root root 13024172 May 6 01:48 hadoop-0.16.4.tar.gz
0 drwxr-xr-x 2 root root 48 Jul 30 2008 hdfs
0 drwxr-xr-x 2 root root 48 Jun 23 2007 md1
pandora bin # ldd hdfs-fuse
linux-gate.so.1 => (0xffffe000)
libfuse.so.2 => /usr/lib/libfuse.so.2 (0xb7f3a000)
libhdfs.so => /lib/libhdfs.so (0xb7f33000)
libjvm.so => /lib/libjvm.so (0x06000000)
libhdfs-fuse.so => /lib/libhdfs-fuse.so (0xb7f28000)
libc.so.6 => /lib/libc.so.6 (0xb7e0a000)
libm.so.6 => /lib/libm.so.6 (0xb7de5000)
librt.so.1 => /lib/librt.so.1 (0xb7ddb000)
libdl.so.2 => /lib/libdl.so.2 (0xb7dd7000)
libpthread.so.0 => /lib/libpthread.so.0 (0xb7dc4000)
/lib/ld-linux.so.2 (0xb7f5a000)
pandora bin # ./hdfs-mount /mnt/hdfs/
pandora bin # ls /mnt/hdfs/ -ls
total 0
pandora bin # lsmod
Module Size Used by
configfs 23696 0
fuse 38548 0
raid456 120336 1
md_mod 66964 2 raid456
async_xor 6400 1 raid456
async_memcpy 5760 1 raid456
async_tx 6144 1 raid456
xor 17416 2 raid456,async_xor
It doesn't mount, and there are no errors?!
Original issue reported on code.google.com by [email protected]
on 30 Jul 2008 at 12:59
What steps will reproduce the problem?
1. hdfs-mount ~/data
2. sudo chmod +w ~/data
What is the expected output? What do you see instead?
I want the permissions to change, but instead I receive a permission denied
error.
What version of the product are you using? On what operating system?
I installed hdfs-fuse-0.2.linux2.6-gcc4.1-x86.tar.gz onto CentOS.
Please provide any additional information below.
I am trying to samba share /home/hduser/data which is the mount point for
hdfs-fuse
When the system boots (before I mount the hdfs share) the samba share is active
and I can copy files over to the directory and see them appear from Linux.
The /home/hduser/data directory has permission drwxrwxrwx and is owned by
hduser:hduser
After I execute the command
hdfs-mount /home/hduser/data
the permissions change to drwxr-xr-x and the directory is owned by root:root.
I am no longer able to write files to the samba share as I get a permission
denied error. Nor am I able to list the contents of the directory.
I can, however, write a file from linux directly to /home/hduser/data and see
it appear in HDFS by executing the command
hadoop fs -ls /
so hdfs-fuse is successfully mounting the directory, but I can't share it for
some reason.
I think the hdfs mounted directory needs write permissions. If I execute the
command
chmod +w /home/hduser/data
the permissions are unchanged. If I execute the command
sudo chmod +w /home/hduser/data
I receive the error "cannot access 'data/': Permission denied".
I've tried to change the permissions from the name node by executing
hadoop fs -chmod +w /
but no changes are reflected at the mount point.
Do you have any thoughts/suggestions?
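One possible explanation, offered as an assumption to verify: FUSE mounts are by default accessible only to the user who mounted them, so the smbd process cannot traverse the mount, which would match the ownership flip and "Permission denied" above. Standard FUSE filesystems accept -o allow_other (after enabling user_allow_other in /etc/fuse.conf); whether hdfs-mount forwards this option to FUSE is not confirmed here. A sketch:

```shell
MOUNT_POINT=/home/hduser/data   # path from the report above
# 1) as root, allow non-mounting users (incl. smbd) to access FUSE mounts:
#      echo user_allow_other >> /etc/fuse.conf
# 2) remount passing the standard FUSE option (if hdfs-mount forwards it):
#      ./hdfs-mount -o allow_other "$MOUNT_POINT"
echo "would remount $MOUNT_POINT with -o allow_other"
```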
Original issue reported on code.google.com by [email protected]
on 20 Jun 2014 at 2:45
I am using the hadoop-0.20.1 version on Ubuntu. I need to install FUSE on
my machine.
1) Could you please advise me whether "hdfs-fuse-0.2.linux2.6-gcc4.1-x86.tar.gz"
is suitable for hadoop-0.20.1?
2) Could you please provide me with the FUSE installation steps?
Thanks in advance
Maria prabudass
Original issue reported on code.google.com by [email protected]
on 11 Oct 2011 at 7:13
Is there a version for CentOS 7? I.e., can it be built from source for CentOS 7?
Also, has hdfs-fuse been tested with Tachyon?
Thanks.
Original issue reported on code.google.com by [email protected]
on 18 May 2015 at 5:24