[Red5] [Red5 0.9.0 final] Terracotta support
dan.daemon at gmail.com
Mon Feb 8 00:17:23 PST 2010
Yes, currently I have 4 clusters:
1 - cluster #1: 1 origin + 3 edges
2 - cluster #2: 1 origin + 3 edges
3 - cluster #3: 1 origin + 3 edges
4 - cluster #4: 1 origin + 5 edges
Each cluster can handle only about 1500 connections, which is not much. Cluster #4, with
5 edges, handles the
same 1500 connections as clusters #1, #2 and #3. This is point ONE.
Point TWO: when I delete all the edges and keep only the 4 origin servers
running RTMP... each of these 4 servers can still handle 1500 connections...
1. Why does each server handle the same 1500 connections both without edges and with edges?
2. What is the point of using edges if a server can handle 1500 connections without them?
3. What scalability advantage do edges provide if the origin server performs exactly the same
with edges and without edges?
And a more general question :))))
How can I extend my 4 clusters to handle many more connections? Right now they top out
at about 1500 connections each. Maybe somebody has some suggestions for cluster optimization?
On Mon, Feb 8, 2010 at 6:08 AM, Dan Rossi <electroteque at gmail.com> wrote:
> On 08/02/2010, at 9:27 AM, Walter Tak wrote:
> Not sure if I understood that remark, but 1500 concurrent publishing
> connections is actually pretty demanding for any server.
> The only option is to reduce the number of incoming connections per
> server by adding servers: create more clusters (if I recall your setup correctly)
> and use more but smaller nodes, so in the end each node only has to
> serve 200-400 clients.
> If you're able to scale out your application that way, you can easily have
> hundreds of Red5 VMs serving streams to subscribers.
> I'd like to try to keep things simple, perhaps that's not possible in your
> situation, perhaps it is.
> E.g. let's assume you want to publish 1000 live streams. Each stream has a
> few subscribers, ranging from just 1 up to say 20.
> Currently hardly any single Red5 server can publish that many streams,
> so you're in need of a cluster. However, a single origin server still cannot
> handle 1000 incoming streams. So you set up several individual clusters, as
> you already did (IIRC). You basically spread the 1000 live streams over 4
> origin servers, so each server only has to deal with 250 incoming streams.
> Each origin server copies the streams to 3-4 edge servers, and each edge
> server can easily handle say 200 subscribers.
> That results in a nice setup of 4 x (1 + 3) = 16 servers, either physical
> or VMs (albeit very large VMs, of course).
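The back-of-envelope capacity math above can be sketched like this (all figures are the illustrative numbers from this thread, not benchmarks):

```python
# Capacity math for the example setup: 1000 live streams spread over
# 4 origin/edge clusters. Numbers are the thread's examples, not measurements.
total_streams = 1000          # live streams to publish
clusters = 4                  # independent origin/edge clusters
edges_per_cluster = 3         # edge servers fed by each origin
subscribers_per_edge = 200    # what one edge can comfortably serve

streams_per_origin = total_streams // clusters        # incoming streams per origin
servers_total = clusters * (1 + edges_per_cluster)    # origins plus edges
subscriber_capacity = clusters * edges_per_cluster * subscribers_per_edge

print(streams_per_origin)    # 250
print(servers_total)         # 16
print(subscriber_capacity)   # 2400
```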
> Now think of this possible scenario: under no circumstances will one live
> stream be watched by more than 200 subscribers. In that case you don't
> need a cluster with edge/origin setups; you'd only need a large array of
> single-node Red5 servers, say 12 of them. Each server is independent of its
> brothers and sisters in the array.
> One application server (Java, PHP, Perl, Python, .NET, whatever)
> routes incoming requests from web users (who want/need to publish their
> stream) to a "free" Red5 server. Once they start publishing, subscribers
> show up and want to watch the stream; the application server redirects
> their requests to that specific Red5 server, et voila, everything works reasonably
> When you need more capacity for more incoming
> published streams, because your service is getting more popular, then just
> add more servers to your array. Your application server should keep a small
> list of servers and their load (by monitoring them -> sending a request each
> minute to learn the amount of memory used, CPU usage, bandwidth usage, etc.).
> Is there such a script available? I was planning to add that functionality
> to the clustering plugin for the Flowplayer project I did ;)
> That way your routing server / application server is the single bottleneck,
> but since it doesn't route streams, only requests, it will probably be able
> to handle tens of thousands of streams without a problem. It's just
> Perhaps you already have such a system in place, of course; this isn't
> particularly rocket science. Many large-volume sites have the same problem, but
> instead of video their problem is database / file / connection /
> firewall capacity. Any system that uses a single-'Origin' hierarchical
> structure will in the end run out of capacity at the top.
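The monitoring/routing layer described above could be sketched roughly like this. The `/stats` endpoint and its JSON fields are assumptions made for illustration, not a real Red5 API; the application server would poll each node periodically and send the next publisher to the least-loaded one:

```python
import json
import urllib.request

# Hypothetical load-aware router: poll each Red5 node for its load and
# route a new publish request to the least-busy reachable node.
# The /stats URL and its JSON shape are assumed, not part of Red5 itself.
SERVERS = ["red5-a.example.com", "red5-b.example.com", "red5-c.example.com"]

def poll(host):
    """Fetch e.g. {'cpu': 0.4, 'connections': 120} from a node; None if down."""
    try:
        with urllib.request.urlopen(f"http://{host}:5080/stats", timeout=2) as r:
            return json.load(r)
    except OSError:
        return None  # unreachable nodes are treated as unavailable

def pick_server(stats):
    """Choose the reachable node with the fewest active connections."""
    alive = {h: s for h, s in stats.items() if s is not None}
    if not alive:
        raise RuntimeError("no Red5 node available")
    return min(alive, key=lambda h: alive[h]["connections"])

# The application server would refresh this map every minute or so:
# stats = {host: poll(host) for host in SERVERS}
# publish_host = pick_server(stats)
```

Since the router only hands out hostnames and never touches the media traffic itself, it stays cheap even as the array grows.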
> ----- Original Message -----
> *From:* Dan Daemon <dan.daemon at gmail.com>
> *To:* red5 at osflash.org
> *Sent:* Sunday, 07 February 2010 20:44
> *Subject:* Re: [Red5] [Red5 0.9.0 final] Terracotta support
> How is it possible to use VMs on the same computer if the origin server dies after 1500
> concurrent connections,
> even in a configuration of 4 computers (1 origin + 3 edges)?
> On Sun, Feb 7, 2010 at 8:07 PM, Dan Rossi <electroteque at gmail.com> wrote:
>> Yeah, VM === OpenVZ, that's my setup: dual dual-core Opterons and 20GB of
>> RAM. I'll set up 4 instances then to act as a mock cluster :) My only
>> problem is I have one IP.
>> On 08/02/2010, at 3:24 AM, Walter Tak wrote:
>> > Hey Dan,
>> > You can use VMs as well, so you don't require say 6 physical machines but
>> just one decent dual/quad Xeon with say 6-8 GB of memory, enough to run
>> enough virtual machines with enough bandwidth to be able to emulate a
>> network of machines as a proof of concept.
>> > Regards,
>> > Walter
>> > ----- Original Message ----- From: "Dan Rossi" <electroteque at gmail.com>
>> > To: <red5 at osflash.org>
>> > Sent: Sunday, 07 February 2010 06:16
>> > Subject: Re: [Red5] [Red5 0.9.0 final] Terracotta support
>> >> Stephen Gong was working on this ages ago in 0.8.*. I tested it out and
>> it worked, I think only with shared-object support though; I'm not sure how
>> the integration is going yet. I assume it will require more conversation
>> with the Terracotta people who pop their heads up on the list now and then. I
>> think for a clustering solution Red5 would integrate well with Terracotta.
>> One thing I noticed with edge/origin is that the origin still needs to be one big
>> fk-off server, because it is still taking the load of the edge machines,
>> especially on the network, so Terracotta and file caching might help here I
>> suppose. I was testing on dual-core Xeons at the time. I think the killer
>> here is still the metadata handling, which perhaps needs to be moved to a memory
>> cache; it's more noticeable on a P4 or Core Duo than a Xeon, i.e. 100% CPU
>> usage compared to 25% for the same traffic. But 25% for 100 VOD streams
>> on each frontend server is still quite high, I reckon ;)
>> >> Here is the diagram of the setup Steve made:
>> >> So that's one or two origin servers, one delegating server for
>> Terracotta, and 3 or 4 edge machines behind a load balancer. A pretty expensive
>> and beefy setup. I don't have access to such a setup anymore, since a client
>> moved from Red5 to FMS when they moved into a new data centre. I'm still keen
>> on setting up some Amazon instances to test such a setup if it's not too
>> expensive; I could even use my server running OpenVZ as a dev testbed, but I
>> believe the Terracotta people have a serious clustering setup for a real
>> testbed :)
>> >> On 07/02/2010, at 8:15 AM, david.engelmaier wrote:
>> >>> Hi guys,
>> >>> First of all I would like to say big THANK YOU for the new 0.9
>> >>> release, especially for fixing the invoke memory leak bug.
>> >>> Somewhere I saw an announcement of Terracotta out of the box
>> >>> clustering in 0.9, but can't find anything about it in the changelog.
>> >>> All the posts regarding Terracotta+Red5 clustering are about a year
>> >>> old, is there any news in the 0.9 release concerning Terracotta
>> >>> clustering?
>> >>> Many thanks
>> >>> David Engelmaier
>> >>> _______________________________________________
>> >>> Red5 mailing list
>> >>> Red5 at osflash.org
>> >>> http://osflash.org/mailman/listinfo/red5_osflash.org