Different error-checking tools report different kinds of errors. The suppression mechanism therefore allows you to say
which tool or tool(s) each suppression applies to.
2.2. Getting started
First off, consider whether it might be beneficial to recompile your application and supporting libraries with debugging
info enabled (the -g option). Without debugging info, the best Valgrind tools will be able to do is guess which function
a particular piece of code belongs to, which makes both error messages and profiling output nearly useless. With -g,
you’ll get messages which point directly to the relevant source code lines.
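For example, assuming a single-file program built from a hypothetical source file myprog.c, a debug-friendly build
could look like this:

gcc -g -o myprog myprog.c    # "myprog" and "myprog.c" are placeholder names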
Another option you might like to consider, if you are working with C++, is -fno-inline. That makes it easier to
see the function-call chain, which can help reduce confusion when navigating around large C++ apps. For example,
debugging OpenOffice.org with Memcheck is a bit easier when using this option. You don’t have to do this, but doing
so helps Valgrind produce more accurate and less confusing error reports. Chances are you’re set up like this already,
if you intended to debug your program with GNU GDB, or some other debugger.
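The equivalent sketch for a C++ program (again with made-up file names) might be:

g++ -g -fno-inline -o myapp myapp.cpp    # myapp.cpp stands in for your own sources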
If you are planning to use Memcheck: On rare occasions, compiler optimisations (at -O2 and above, and sometimes
-O1) have been observed to generate code which fools Memcheck into wrongly reporting uninitialised value errors,
or missing uninitialised value errors. We have looked in detail into fixing this, and unfortunately the result is that
doing so would give a further significant slowdown in what is already a slow tool. So the best solution is to turn off
optimisation altogether. Since this often makes things unmanageably slow, a reasonable compromise is to use -O.
This gets you the majority of the benefits of the higher optimisation levels whilst keeping relatively small the chances of
false positives or false negatives from Memcheck. Also, you should compile your code with -Wall because it can
identify some or all of the problems that Valgrind can miss at the higher optimisation levels. (Using -Wall is also a
good idea in general.) All other tools (as far as we know) are unaffected by optimisation level, and for profiling tools
like Cachegrind it is better to compile your program at its normal optimisation level.
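Putting these suggestions together, a reasonable Memcheck-friendly compile line might look something like the
following; the file names are placeholders, and your own build system will of course differ:

gcc -g -O -Wall -o myprog myprog.c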
Valgrind understands both the older "stabs" debugging format, used by GCC versions prior to 3.1, and the newer
DWARF2/3/4 formats used by GCC 3.1 and later.
We continue to develop our debug-info readers, although the
majority of effort will naturally enough go into the newer DWARF readers.
When you’re ready to roll, run Valgrind as described above. Note that you should run the real (machine-code)
executable here. If your application is started by, for example, a shell or Perl script, you’ll need to modify it to
invoke Valgrind on the real executables. Running such scripts directly under Valgrind will result in you getting error
reports pertaining to /bin/sh, /usr/bin/perl, or whatever interpreter you’re using. This may not be what you
want and can be confusing. You can force the issue by giving the option --trace-children=yes, but confusion
is still likely.
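As an illustration, suppose the real executable is ./myprog and it is normally started by a wrapper script
./run-myprog.sh (both names are made up here). The two approaches described above would look roughly like this:

valgrind --tool=memcheck ./myprog arg1 arg2
valgrind --trace-children=yes ./run-myprog.sh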
2.3. The Commentary
Valgrind tools write a commentary, a stream of text, detailing error reports and other significant events. All lines in
the commentary have the following form:
==12345== some-message-from-Valgrind
The 12345 is the process ID. This scheme makes it easy to distinguish program output from Valgrind commentary,
and also easy to differentiate commentaries from different processes which have become merged together, for whatever
reason.
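For instance, if your program prints a line of its own output while running under Memcheck, the merged stream might
look something like this (the exact messages vary by tool and version; this is only an illustration):

==12345== Memcheck, a memory error detector
Hello, world!
==12345== Invalid read of size 4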
By default, Valgrind tools write only essential messages to the commentary, so as to avoid flooding you with
information of secondary importance. If you want more information about what is happening, re-run, passing the -v
option to Valgrind. A second -v gives yet more detail.
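For example (with ./myprog once more standing in for your own program):

valgrind -v ./myprog
valgrind -v -v ./myprog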