20.9 Debugging configure scripts
While, in general, configure scripts generated by Autoconf strive to be fairly portable to various systems, compilers, shells, and other tools, it may still be necessary to debug a failing test, a broken script or makefile, or to fix or override an incomplete, faulty, or erroneous test, especially during macro development. Failures can occur at all levels: in M4 syntax or semantics, in shell script issues, or because of bugs in the test or in the tools invoked by configure. Together with the rather arcane error messages that m4 and make may produce when their input contains syntax errors, this can make debugging rather painful.
Nevertheless, here is a list of hints and strategies that may help:
- When autoconf fails, common causes of error include:
  - mismatched or unbalanced parentheses or braces (see section Dealing with unbalanced parentheses),
  - under- or overquoted macro arguments (see section The Autoconf Language, section Quoting and Parameters, and section Quotation and Nested Macros),
  - spaces between a macro name and the opening parenthesis (see section The Autoconf Language).

  Typically, it helps to go back to the last working version of the input and compare the differences with respect to each of these errors. Another possibility is to sprinkle pairs of m4_traceon and m4_traceoff judiciously in the code, either without parameters or listing some macro names, and watch m4 expand its input verbosely (see section Debugging via autom4te); a small sketch of this appears after this list.
- Sometimes autoconf succeeds but the generated configure script has invalid shell syntax. You can detect this case by running ‘bash -n configure’ or ‘sh -n configure’. If this command fails, the same tips apply as if autoconf had failed.
- Debugging configure script execution may be done by sprinkling pairs of set -x and set +x into the shell script before and after the region that contains a bug. Running the whole script with ‘shell -vx ./configure 2>&1 | tee log-file’, where shell is the name of a decent shell, may also work, but produces lots of output; here it can help to search the log-file for markers like ‘checking for’ followed by the name of a particular test. Both techniques are sketched after this list.
- Alternatively, you might use a shell with debugging capabilities like bashdb.
- When configure tests produce invalid results for your system, it may be necessary to override them (examples of each override point are sketched after this list):
  - For program, tool, or library variables, and for preprocessor, compiler, or linker flags, it is often sufficient to override them at make run time, with some care (see section make macro=value and Submakes). Since this normally does not cause configure to be run again with the changed settings, it may fail if the changed variable would have caused different test results from configure, so this may work only for simple differences.
  - Most tests which produce their result in a substituted variable allow the test to be overridden by setting the variable on the configure command line (see section Compilers and Options, section Defining Variables, and section Particular systems).
  - Many tests store their result in a cache variable (see section Caching Results). This lets you override them either on the configure command line as above, or through a primed cache or site file (see section Cache Files and section Setting Site Defaults). The name of a cache variable is documented with a test macro or may be inferred from section Cache Variable Names; the precise semantics of undocumented variables are often internal details, subject to change.
- Alternatively, configure may produce invalid results because of uncaught programming errors, in your package or in an upstream library package. For example, when AC_CHECK_LIB fails to find a library with a specified function, always check ‘config.log’. This will reveal the exact error that produced the failing result: the library linked by AC_CHECK_LIB probably has a fatal bug. One way to dig the relevant details out of ‘config.log’ is sketched after this list.
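As an illustration of the tracing hint above, here is a minimal sketch of a ‘configure.ac’ fragment; MY_CHECK_FEATURE is a hypothetical macro standing in for whatever code you suspect:

    dnl MY_CHECK_FEATURE is a placeholder for your own macro.
    dnl Trace its expansion (and that of AC_DEFINE, to see whether
    dnl it is reached at all) from this point on.
    m4_traceon([MY_CHECK_FEATURE], [AC_DEFINE])
    MY_CHECK_FEATURE
    m4_traceoff([MY_CHECK_FEATURE], [AC_DEFINE])

When autoconf is rerun, m4 should report each expansion of the listed macros, which helps narrow down where a quoting problem or a mismatched parenthesis derails the input.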
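The execution-tracing hint might be used as follows; ‘bash’ and the grep pattern are merely examples:

    # Inside the generated configure script, around the suspect region:
    set -x      # echo every command as it is executed
    # ... the failing checks ...
    set +x      # stop echoing

    # Or trace the whole run under a shell of your choice and keep a log:
    bash -vx ./configure 2>&1 | tee log-file
    grep -n 'checking for' log-file   # locate the test you care about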
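The override points listed above might look like this in practice; the variable names and values are only illustrative, not recommendations for your package:

    # Override an output variable at make run time (simple cases only):
    make CFLAGS='-g -O0'

    # Override a substituted variable on the configure command line:
    ./configure CC=clang CFLAGS='-g -O0'

    # Override a cache variable directly on the command line ...
    ./configure ac_cv_func_mmap_fixed_mapped=yes

    # ... or through a primed site file that configure reads:
    echo 'ac_cv_func_mmap_fixed_mapped=yes' >> my-config.site
    CONFIG_SITE=./my-config.site ./configure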
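For the ‘config.log’ hint, assuming a check such as AC_CHECK_LIB([foo], [foo_init]) reported ‘no’ (the library and function names here are invented), the failed link command and its error output can be located with:

    # AC_CHECK_LIB logs its test under a "checking for FUNCTION in -lLIB"
    # line; foo and foo_init are placeholders.
    grep -n -A 20 'checking for foo_init in -lfoo' config.log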
Conversely, as a macro author, you can make it easier for users of your macro:
- by minimizing dependencies between tests and between test results as far as possible,
- by using make variables to factorize settings and to allow overriding them at make run time,
- by honoring the GNU Coding Standards and not overriding flags reserved for the user except temporarily during configure tests,
- by not requiring users of your macro to use the cache variables. Instead, expose the result of the test via run-if-true and run-if-false parameters. If the result is not a boolean, then provide it through documented shell variables. A sketch of such a macro follows this list.