From 8fa895189696e83e6120875886bc8888e0509195 Mon Sep 17 00:00:00 2001
From: Jarrod Johnson
Date: Thu, 3 Apr 2014 09:54:46 -0400
Subject: [PATCH] Put comments in to hint at a decent strategy to profile
 runtime performance

To do performance optimization in this sort of application, this is about
as well as I have been able to manage in Python. I will say Perl with
NYTProf seems to be significantly better for data, but this is serviceable.

I tried yappi, but it goes wildly inaccurate with this codebase. Because of
the eventlet plumbing, cProfile is still pretty misleading. The best
strategy seems to be to review cumulative time, with a healthy grain of
salt around the top items, until you get down to info that makes sense.
For example, trampoline unfairly gets a great deal of the 'blame' by taking
on nearly all the activity. Internal time seems to miss a great deal of
important information.
---
 bin/confluent-server.py | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/bin/confluent-server.py b/bin/confluent-server.py
index 9c738530..08e67b6f 100644
--- a/bin/confluent-server.py
+++ b/bin/confluent-server.py
@@ -5,4 +5,14 @@
 path = os.path.realpath(os.path.join(path, '..'))
 sys.path.append(path)
 from confluent import main
+#import cProfile
+#import time
+#p = cProfile.Profile(time.clock)
+#p.enable()
+#try: main.run()
+#except:
+## pass
+#p.disable()
+#p.print_stats(sort='cumulative')
+#p.print_stats(sort='time')
 main.run()
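
If this block gets uncommented on a current interpreter, note that time.clock()
was removed in Python 3.8. A rough sketch of the same idea, using the default
timer and routing the report through pstats (the stderr stream and the 30-row
limit are arbitrary choices here, not part of the patch):

    import cProfile
    import pstats
    import sys

    from confluent import main  # assumes confluent/ is on sys.path, as set up above

    p = cProfile.Profile()      # default timer; time.clock is gone in Python 3.8+
    p.enable()
    try:
        main.run()
    except BaseException:       # mirror the bare except: a Ctrl-C should still
        pass                    # fall through so the stats get printed
    p.disable()

    stats = pstats.Stats(p, stream=sys.stderr)
    # Cumulative time first: eventlet's trampoline soaks up most of the 'blame'
    # near the top, so read past those rows until the numbers start making sense.
    stats.sort_stats('cumulative').print_stats(30)
    # Internal time ('time'/tottime) misses a lot on its own, but it can help
    # confirm hot leaf functions.
    stats.sort_stats('time').print_stats(30)

Going through pstats rather than Profile.print_stats() mainly buys the row limit
and the choice of output stream; the sort keys are the same.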