Debugging cgi-scripts
See also:
- README.debug in the source tree
- Debug the cgi-scripts with GDB
Debugging with GDB
Complete instructions:
Make sure you have compiled with -ggdb by adding
export COPT=-ggdb
to your .bashrc (if you are using bash). You might need to run make clean; make cgi afterwards. Also make sure that the CGIs use the right hg.conf. Run
export HGDB_CONF=<PATHTOCGIS>/hg.conf
Then:
cd cgi-bin
gdb --args hgc 'hgsid=4777921&c=chr21&o=27542938&t=27543085&g=pubsDevBlat&i=1000235064'
Do not forget the quotes, and do not include the question mark from the URL in your browser.
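For example, a browser URL of the form shown below (host elided, purely as an illustration) becomes the quoted argument string used above once everything up to and including the '?' is dropped:
http://.../cgi-bin/hgc?hgsid=4777921&c=chr21&o=27542938&t=27543085&g=pubsDevBlat&i=1000235064
gdb --args hgc 'hgsid=4777921&c=chr21&o=27542938&t=27543085&g=pubsDevBlat&i=1000235064'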
To get a stacktrace of the place where it's aborting:
break errAbort
where
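Once the breakpoint is hit, it is often useful to walk up the stack and inspect the caller of errAbort. A minimal sketch using standard gdb commands (the comment lines are just annotations):
# after setting the breakpoint, start the program:
run
where
# move one frame up, out of errAbort, and look at its local variables:
up
info locals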
Finding memory problems with valgrind
Sometimes the program crashes at random places because the stack or other data structures have been corrupted by rogue code. You need valgrind to find the buggy code.
Run the program like this:
valgrind --tool=memcheck --leak-check=yes pslMap ~max/pslMapProblem.psl ~max/pslMap-dm3-refseq.psl out.temp
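If valgrind complains about uninitialised values, adding --track-origins=yes makes it also report where those values were created, and compiling with -ggdb (as in the gdb section) gives file and line numbers in the report. A sketch based on the same pslMap call, with the extra flag added:
valgrind --tool=memcheck --leak-check=yes --track-origins=yes \
    pslMap ~max/pslMapProblem.psl ~max/pslMap-dm3-refseq.psl out.temp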
Profiling with gprof
When the program is too slow and you think it is due to CPU usage, you can interrupt it with Ctrl-C (e.g. under gdb) and look at the stack to see where it is stuck. Or run gprof, to show how much CPU time each function takes.
First, recompile with the -pg gcc option added, or add it to your .bashrc (note that with two flags the value needs quotes):
export COPT="-ggdb -pg"
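After changing COPT, the CGIs have to be rebuilt, with the same targets as in the gdb section above; a sketch:
make clean
make cgi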
Then run hgTracks, go to the cgi-bin directory and run gprof on the newly created gmon.out file:
gprof hgTracks gmon.out | less
hgTracks with the default tracks gave me this today:
Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds    calls  ms/call  ms/call  name
17.65      0.06     0.06  1145954     0.00     0.00  hashLookup
 8.82      0.09     0.03   281068     0.00     0.00  cloneString
 5.88      0.11     0.02   113781     0.00     0.00  hashAdd
 5.88      0.13     0.02   113781     0.00     0.00  hashAddN
 5.88      0.15     0.02    67666     0.00     0.00  lmCloneString
 4.41      0.17     0.02                             lmCloneMem
 2.94      0.18     0.01  1055248     0.00     0.00  hashFindVal
Profiling with valgrind
Gprof shows you only CPU time. If the program is stuck in I/O somewhere, gprof won't show it. Either interrupt it with Ctrl-C a few times and look at the stack (usually the quickest way), or use valgrind again, this time with the callgrind tool:
valgrind --tool=callgrind --dump-instr=yes --simulate-cache=yes --collect-jumps=yes hgTracks
callgrind_annotate callgrind.out.<yourPID> | less
The tool KCachegrind allows better inspection of the results than callgrind_annotate, but it is a GUI program. It's on the big dev VM.
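It can be pointed directly at the callgrind output file; a sketch (assuming the binary is installed as kcachegrind; the PID placeholder is the same as above):
kcachegrind callgrind.out.<yourPID>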