S3FS Fuse-based file system

Hi Richard,

I’m using the same AMI.

My mount command is:

/usr/bin/s3fs wmconsulting/wowza -o accessKeyId=ACCESS-KEY -o secretAccessKey=SECRET_KEY -o use_cache=/tmp -o allow_other -o default_acl=public-read /mnt/s3

If I list the files in S3 (using the s3cmd tool), I get this:

s3cmd ls s3://wmconsulting/wowza/
2010-02-11 13:16       289   s3://wmconsulting/wowza/BigBuckCupertino.smil
2010-02-11 13:16  57981953   s3://wmconsulting/wowza/BigBuckCupertinoHi.mov
2010-02-11 13:16  25065590   s3://wmconsulting/wowza/BigBuckCupertinoLo.mov
2010-02-11 13:16  43695703   s3://wmconsulting/wowza/BigBuckCupertinoMed.mov
2010-02-10 16:15  22918100   s3://wmconsulting/wowza/Extremists.flv
2010-02-10 16:15  18261973   s3://wmconsulting/wowza/Extremists.m4v
2010-02-12 03:28        27   s3://wmconsulting/wowza/radiostation.stream
2010-02-16 03:48        10   s3://wmconsulting/wowza/test
2010-02-16 04:28     45040   s3://wmconsulting/wowza/wms-plugin-collection.jar

but if I list the files at the mount point, the ls command doesn’t show any files:

[root@ip-10-244-00-00 s3]# pwd
/mnt/s3
[root@ip-10-244-00-00 s3]# ls -la
total 0

When I try read/write actions, however, I can see the files:

[root@ip-10-244-00-00 s3]# cat test
test fuse
[root@ip-10-244-00-00 s3]# echo 'Fuse test write' >> test
[root@ip-10-244-00-00 s3]# cat test
test fuse
Fuse test write

Any suggestion?

BTW, “s3cmd” is a great tool; if possible, please include it by default in the image.

Thanks in advance

Alejandro

Richard,

I have tried with the basic command, without the sub-bucket, and the output is:

ls -la 
total 1
---------- 1 root root 0 Feb 10 11:14 wowza_$folder$

I can’t see the subfolders inside this bucket.

But right now I can list files only if they are in the root of the bucket, not inside a folder.

[root@ip-10-244-00-00 s3]# pwd
/mnt/s3
[root@ip-10-244-00-00 s3]# ls -la
total 1
-rw-r--r-- 1 root root 0 Feb 16 08:35 test.rootdirectory
---------- 1 root root 0 Feb 10 11:14 wowza_$folder$

I read on the s3fs page about problems with subfolders and the S3Fox extension.

If you create a folder with “mkdir” you can see it on the FUSE FS, but not in S3Fox; and if you create the folder with S3Fox, the file listing shows “dirname_$folder$”:

[root@ip-10-244-00-00 s3]# ll
total 2
---------- 1 root root 0 Feb 16 08:39 test-s3fox_$folder$
drwxr-xr-x 1 root root 0 Feb 16 08:38 test-sub-bucket
-rw-r--r-- 1 root root 0 Feb 16 08:35 test.rootdirectory
---------- 1 root root 0 Feb 10 11:14 wowza_$folder$
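The “_$folder$” entries are how S3Fox marks a directory: it creates an empty placeholder object whose key is the directory name with “_$folder$” appended, which s3fs then lists as an ordinary zero-byte file. A quick illustration of the naming (the directory name below is just a placeholder):

```shell
# S3Fox stores a "folder" as an empty object named "<dirname>_$folder$".
# This only shows the marker key that corresponds to a directory name.
dir="test-s3fox"
marker="${dir}_\$folder\$"
echo "$marker"   # prints test-s3fox_$folder$
```

If you want to remove such a marker from the bucket, s3cmd can delete it by key, e.g. `s3cmd del 's3://wmconsulting/test-s3fox_$folder$'` (the single quotes keep the shell from expanding `$folder`).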

Do you have more information about this issue?

Thanks

Ale

OK, I understand, but if I create a sub-bucket with S3Fox I can’t read it from the FUSE mount point; I see the folders with this name:

richard-sub-bucket_$folder$

Is there any other way to create a sub-bucket?

My idea is to use one bucket for multiple clients and read it from EC2 with s3fs, splitting the bucket into multiple directories, one per client. For this reason I’m looking for the correct way to do this from the server side.
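One way to sketch that layout is to reuse the bucket/dir mount form from the command at the top of this thread and generate one mount command per client directory. Whether the installed s3fs build accepts bucket/dir depends on the fork; the first post’s command suggests this one does. The bucket and client names below are placeholders, and the script only prints the commands (a dry run) so they can be reviewed before running as root:

```shell
# Build one s3fs mount command per client prefix in the shared bucket.
# Nothing is mounted here; the commands are only printed for review.
BUCKET=wmconsulting
cmds=""
for client in clientA clientB; do
  cmds="${cmds}/usr/bin/s3fs ${BUCKET}/${client} -o use_cache=/tmp -o allow_other -o default_acl=public-read /mnt/s3-${client}
"
done
printf '%s' "$cmds"
```

Each client then sees only its own prefix through its own mount point, which keeps the clients isolated on the server side.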

This is a capture comparing the S3Fox and s3fs directory structures:

Thanks for your great support!

Ale

Richard,

Thanks for validating this… would it be possible to add the s3cmd package by default in the new AMIs? This package is really useful for listing and managing S3 storage directly from the Wowza command line.

Thanks again, for your excellent support.

Alejandro

Richard,

Can you please tell me what version of s3fs is installed by default, and where I can find the source code?

I’m trying to replicate this in my local dev environment.

Thanks again.

Ale

Hi,

I’m looking for some centralized mechanism for storing in-memory objects (SharedObjects, “SO’s”) or persistent objects.

In my case I have one or more origin servers (each with multiple edges).

My clients connect to a specific origin according to server load (via a load balancer), and the edge servers stream the media perfectly. But since different users connect to different origins, the SharedObjects (or in-memory objects) need to be shared among all the origin instances. How can this be done?

Is there any caching system (Terracotta or Memcached, though I don’t know how they work) or any other system you guys know of?

Something like this,

http://www.terracotta.org/confluence/display/wiki/Red5+and+Terracotta+POC

:frowning:

But what if we think at the JVM level and use a tool like Terracotta for distributed shared objects across all JVMs (in our case, the SO’s)?

I am just giving a hint; I don’t actually know how it would work in a real environment.

In your opinion, is this possible with Wowza?

If so, I will start looking into Terracotta; otherwise I need to look into some other solutions.

Is it possible to change the mount directory after it has first been set? I couldn’t sort through the questions and answers earlier in the post.

My Application.xml properties list:

<Property>
	<Name>fileMoverDestinationPath</Name>
	<Value>/mnt/s3</Value>
</Property>

and I initially set up the connection with the command:

/usr/bin/s3fs [my.S3.bucket] -o accessKeyId=[ACCESSKEY] -o secretAccessKey=[SECRET KEY] -o default_acl=public-read /mnt/s3

But from now on, I want to move my files to a new subdirectory in my S3 bucket. How do I do this?
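Since s3fs binds one bucket (or bucket/dir) per mount point, one common way to redirect writes into a subdirectory is to unmount and remount at the new prefix, leaving fileMoverDestinationPath pointed at the same mount point. A sketch, with placeholder bucket, key, and directory names, assuming the installed build accepts the bucket/dir form used earlier in this thread:

```shell
# Unmount the current bucket-root mount, then remount at the subdirectory.
# Placeholders: my-bucket, new-subdir, ACCESS_KEY, SECRET_KEY.
umount /mnt/s3
/usr/bin/s3fs my-bucket/new-subdir -o accessKeyId=ACCESS_KEY -o secretAccessKey=SECRET_KEY -o default_acl=public-read /mnt/s3
```

After the remount, files moved to /mnt/s3 land under the new-subdir prefix in the bucket without any Application.xml change.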

Thanks!

Ange52

Hi,

I have a somewhat related issue on a European AMI (ami-19341e6d).

Here’s the history to reproduce the issue.

# pwd
/mnt
# ls -la
total 28
drwxr-xr-x  4 root  root   4096 Jun 10 11:46 .
drwxr-xr-x 22 root  root   4096 Jun 10 11:46 ..
drwxrwxr-x  4 wowza wowza  4096 Jun 10 11:46 WowzaMediaServer
drwx------  2 root  root  16384 May 31 12:37 lost+found

# mkdir /mnt/s3 – no issue
# s3fs latele-media -o accessKeyId=blahblah -o secretAccessKey=blahblah -o default_acl=public-read /mnt/s3 – no issue
# pwd
/mnt – STRANGE (expected /mnt/s3)
# ls
lost+found mediacache s3 – very strange (the WowzaMediaServer folder is gone…)
# cd s3 – no issue
# ls -la – big problem: no answer from the instance… ever.

My AMI is in the EU and my bucket is in the US (with a European bucket it fails for another reason).

I’m new to Wowza on EC2, so any help would be appreciated.

Thanks,

Regards

Sorry for asking beginner-level questions, but can someone tell me how to install that patch? http://github.com/tractis/s3fs-fork

It would be great if someone could provide a step-by-step process, assuming a European AMI (ami-19341e6d) and a European bucket.

Thanks in advance,

regards.
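For reference, s3fs forks of that era are usually built from source along these lines. The dependency list, Makefile target, and install path below are assumptions for a Fedora-based AMI, not instructions from the fork itself; check its README first:

```shell
# Typical build dependencies for s3fs (package names assumed for Fedora).
yum install -y gcc-c++ make fuse fuse-devel libcurl-devel libxml2-devel openssl-devel git
# Fetch and build the fork linked above.
git clone http://github.com/tractis/s3fs-fork.git s3fs-fork
cd s3fs-fork
make
# Install over the old binary (the path is an assumption; check `which s3fs`).
cp s3fs /usr/local/bin/s3fs
```

After installing, unmount and remount the bucket so the new binary is the one serving the mount.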

Well, finally it appears that no combination works other than the instance and the bucket being in the same region (us-east works well, Europe fails; I have not tried other regions).

I mount the bucket with the “-o allow_other” option, which allows me to FTP into the bucket via s3fs.

But FTP-created directories, and the files inside them, are not visible in S3Fox. The directory is shown as a zero-byte file, and the files inside are not accessible.

Probably for the same reason, the video files in that subfolder can’t be streamed by any means.

Files located at the root of the bucket (even if uploaded via FTP) are visible in S3Fox and can be streamed.

I hope this can help beginners like me :wink:

Take a look at this post:

https://www.wowza.com/forums/showthread.php?p=39370#5

Richard

Thanks Richard.

Thanks for this info.

When I need to remote-create or manage sub-buckets, I’d rather rely on the S3 classes for PHP (or any other environment) than on FTP via s3fs.

s3fs looks like a convenient way of accessing S3 content from an EC2 instance in “test & debug” operation rather than in “production” mode.

Rgds

Would it be possible to upgrade s3fs on the AMIs? The version currently installed doesn’t support European buckets…

Also, I think a standard installation of ffmpeg would come in very handy. Either that, or a newer version of Fedora. It is close to impossible to find an RPM of ffmpeg for Fedora 8, and building it takes a long time if you have to do it every time you start up a new instance.

Hi guys. One quick question. How do I use the variables in the first post of this thread? I want to save the files into a folder inside the bucket using:

${com.wowza.wms.context.VHostConfigHome}: vhost folder

Thanks :wink:

Thanks Richard. That’s what I thought.

It doesn’t help me, and trying to save the files to a subfolder (i.e. /mnt/s3/Videos) converts the Videos folder in my bucket into some strange file (with a Windows logo on it) which I can’t access… if I delete that file, the Videos folder (with the correct folder icon) is shown again with the newly moved file in it!!! That’s VERY strange!

Anyway, I think I might be OK working with just the bucket, without being able to organize the files. Thanks :wink:

How do I move the file using the same name as it was named on the filesystem before it was moved?

Right now I have a file in: /home/wowza/content/video.flv

After the file is moved to s3, it is renamed something like:

rtmp___xx.xx.xxx.xx_1935_liveorigin__definst__video_0.flv

How can I get it to move while keeping the same name as before? (video.flv)

I have also noticed that hitting “Stop Record” in the Flash client does not immediately stop the recording; it continues for another 2-5 minutes after Stop has been clicked. Is this a known issue? Any way around it?

Thanks for that, Richard, but I’m trying to customize the name: when I record the video file I save it with a custom name, e.g. with a timestamp and a custom ID. When it is moved to S3, the name is changed to the default, as I mentioned. I’m already using the module you mentioned to move the file, but its allowed variables do not provide what I need in the filename.

You will have to build the ModuleWriteListener example at the top of the post instead of using the built-in module, and add code to rename and copy the file in the onWriteComplete handler.

Richard

Hi Richard, thanks for all your help so far. Sorry for my newbie questions, but I’m not a Java developer.

I’m encountering several problems.

First of all, are you referencing this example code: Move recordings of live streams?

Secondly, when I try to create my own module using the Wowza IDE, I get some path errors. I’ve followed the IDE instructions to the letter:

“The project cannot be built until build path errors are resolved”

“Unbound classpath container: ‘Default System Library’ in project ‘modules’”

Why am I getting these errors?

I’m pretty sure I know what to do, but I can’t build the module because of those errors. I’m running the IDE on Mac OS X. I have the developer version of the media server installed.