-O
-O1
-
Optimize. Optimizing compilation takes somewhat more time, and a lot
more memory for a large function.
Without `-O', the compiler's goal is to reduce the cost of
compilation and to make debugging produce the expected results.
Statements are independent: if you stop the program with a breakpoint
between statements, you can then assign a new value to any variable or
change the program counter to any other statement in the function and
get exactly the results you would expect from the source code.
Without `-O', the compiler only allocates variables declared
`register' in registers. The resulting compiled code is a little
worse than that produced by PCC without `-O'.
With `-O', the compiler tries to reduce code size and execution
time.
When you specify `-O', the compiler turns on `-fthread-jumps'
and `-fdefer-pop' on all machines. The compiler turns on
`-fdelayed-branch' on machines that have delay slots, and
`-fomit-frame-pointer' on machines that can support debugging even
without a frame pointer. On some machines the compiler also turns
on other flags.
-O2
-
Optimize even more. GCC performs nearly all supported optimizations
that do not involve a space-speed tradeoff. The compiler does not
perform loop unrolling or function inlining when you specify `-O2'.
As compared to `-O', this option increases both compilation time
and the performance of the generated code.
`-O2' turns on all optional optimizations except for loop unrolling,
function inlining, and register renaming. It also turns on the
`-fforce-mem' option on all machines and frame pointer elimination
on machines where doing so does not interfere with debugging.
-O3
-
Optimize yet more. `-O3' turns on all optimizations specified by
`-O2' and also turns on the `-finline-functions' and
`-frename-registers' options.
-O0
-
Do not optimize.
-Os
-
Optimize for size. `-Os' enables all `-O2' optimizations that
do not typically increase code size. It also performs further
optimizations designed to reduce code size.
If you use multiple `-O' options, with or without level numbers,
the last such option is the one that is effective.
-ffloat-store
-
Do not store floating point variables in registers, and inhibit other
options that might change whether a floating point value is taken from a
register or memory.
This option prevents undesirable excess precision on machines such as
the 68000 where the floating registers (of the 68881) keep more
precision than a `double' is supposed to have. Similarly for the
x86 architecture. For most programs, the excess precision does only
good, but a few programs rely on the precise definition of IEEE floating
point. Use `-ffloat-store' for such programs, after modifying
them to store all pertinent intermediate computations into variables.
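For illustration only (this example is not part of the original text and
assumes x87-style extended-precision registers), the following program may
behave differently with and without `-ffloat-store':

    #include <stdio.h>

    double a = 1.0, b = 3.0;   /* globals, so the division is not folded */

    int main(void)
    {
      double y = a / b;        /* stored to memory: rounded to 64 bits */
      if (y == a / b)          /* the recomputed value may keep extra bits */
        printf("values compare equal\n");
      else
        printf("excess precision detected\n");
      return 0;
    }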
-fno-default-inline
-
Do not make member functions inline by default merely because they are
defined inside the class scope (C++ only). Otherwise, when you specify
`-O', member functions defined inside class scope are compiled
inline by default; i.e., you don't need to add `inline' in front of
the member function name.
-fno-defer-pop
-
Always pop the arguments to each function call as soon as that function
returns. For machines which must pop arguments after a function call,
the compiler normally lets arguments accumulate on the stack for several
function calls and pops them all at once.
-fforce-mem
-
Force memory operands to be copied into registers before doing
arithmetic on them. This produces better code by making all memory
references potential common subexpressions. When they are not common
subexpressions, instruction combination should eliminate the separate
register-load. The `-O2' option turns on this option.
-fforce-addr
-
Force memory address constants to be copied into registers before
doing arithmetic on them. This may produce better code just as
`-fforce-mem' may.
-fomit-frame-pointer
-
Don't keep the frame pointer in a register for functions that
don't need one. This avoids the instructions to save, set up and
restore frame pointers; it also makes an extra register available
in many functions. It also makes debugging impossible on
some machines.
On some machines, such as the Vax, this flag has no effect, because
the standard calling sequence automatically handles the frame pointer
and nothing is saved by pretending it doesn't exist. The
machine-description macro `FRAME_POINTER_REQUIRED' controls
whether a target machine supports this flag. See section 21.6 Register Usage.
-foptimize-sibling-calls
-
Optimize sibling and tail recursive calls.
-ftrapv
-
This option generates traps for signed overflow on addition, subtraction,
and multiplication operations.
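As a sketch of the effect (an illustration added here, not part of the
original text), compiling the following with `-ftrapv' makes the signed
overflow trap at run time, typically by calling `abort', instead of silently
wrapping:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
      int x = INT_MAX;
      int y = x + 1;           /* signed overflow: traps under -ftrapv */
      printf("%d\n", y);       /* reached only without -ftrapv */
      return 0;
    }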
-fno-inline
-
Don't pay attention to the `inline' keyword. Normally this option
is used to keep the compiler from expanding any functions inline.
Note that if you are not optimizing, no functions can be expanded inline.
-finline-functions
-
Integrate all simple functions into their callers. The compiler
heuristically decides which functions are simple enough to be worth
integrating in this way.
If all calls to a given function are integrated, and the function is
declared `static', then the function is normally not output as
assembler code in its own right.
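For example (an illustrative sketch, not from the original text), with
`-O -finline-functions' a small `static' helper like the one below is a
typical candidate for integration into its caller, in which case no
out-of-line copy needs to be emitted:

    static int square(int x)
    {
      return x * x;                   /* simple enough to be integrated */
    }

    int sum_of_squares(int a, int b)
    {
      return square(a) + square(b);   /* both calls may be inlined */
    }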
-finline-limit=n
-
By default, gcc limits the size of functions that can be inlined. This flag
allows control of this limit for functions that are explicitly marked as
inline (i.e., marked with the `inline' keyword or defined within the class
definition in C++). n is the size of functions that can be inlined, measured
in pseudo instructions (not counting parameter handling). The default
value of n is 10000. Increasing this value can result in more inlined code at
the cost of compilation time and memory consumption. Decreasing it usually makes
compilation faster, and less code will be inlined (which presumably
means slower programs). This option is particularly useful for programs that
use inlining heavily, such as those based on recursive templates in C++.
Note: in this context, a pseudo instruction is an abstract measurement of a
function's size. It in no way represents a count of assembly instructions,
and as such its exact meaning might change from one release to another.
-fkeep-inline-functions
-
Even if all calls to a given function are integrated, and the function
is declared `static', nevertheless output a separate run-time
callable version of the function. This switch does not affect
`extern inline' functions.
-fkeep-static-consts
-
Emit variables declared `static const' when optimization isn't turned
on, even if the variables aren't referenced.
GCC enables this option by default. If you want to force the compiler to
check if the variable was referenced, regardless of whether or not
optimization is turned on, use the `-fno-keep-static-consts' option.
-fno-function-cse
-
Do not put function addresses in registers; make each instruction that
calls a constant function contain the function's address explicitly.
This option results in less efficient code, but some strange hacks
that alter the assembler output may be confused by the optimizations
performed when this option is not used.
-ffast-math
-
This option allows GCC to violate some ISO or IEEE rules and/or
specifications in the interest of optimizing code for speed. For
example, it allows the compiler to assume arguments to the `sqrt'
function are non-negative numbers and that no floating-point values
are NaNs.
This option causes the preprocessor macro `__FAST_MATH__' to be defined.
This option should never be turned on by any `-O' option since
it can result in incorrect output for programs which depend on
an exact implementation of IEEE or ISO rules/specifications for
math functions.
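As an illustration (added here; the actual result depends on the GCC version
and target), `-ffast-math' lets the compiler assume values are never NaN, so
a self-comparison used as a NaN test may be optimized away:

    #include <stdio.h>

    int is_nan(double x)
    {
      return x != x;           /* true only for NaN under IEEE rules;
                                  may be folded to 0 under -ffast-math */
    }

    int main(void)
    {
      double v = 0.0 / 0.0;    /* produces a NaN with IEEE arithmetic */
      printf("%d\n", is_nan(v));
      return 0;
    }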
-fno-math-errno
-
Do not set `errno' after calling math functions that are executed
with a single instruction, e.g., `sqrt'. A program that relies on
IEEE exceptions for math error handling may want to use this flag
for speed while maintaining IEEE arithmetic compatibility.
The default is `-fmath-errno'. The `-ffast-math' option
sets `-fno-math-errno'.
The following options control specific optimizations. The `-O2'
option turns on all of these optimizations except `-funroll-loops'
and `-funroll-all-loops'. On most machines, the `-O' option
turns on the `-fthread-jumps' and `-fdelayed-branch' options,
but specific machines may handle it differently.
You can use the following flags in the rare cases when "fine-tuning"
of optimizations to be performed is desired.
-fstrength-reduce
-
Perform the optimizations of loop strength reduction and
elimination of iteration variables.
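Conceptually (a hand-written sketch, not compiler output), strength reduction
replaces the per-iteration address computation `a + i' in the first loop with
a pointer that is simply incremented, as in the second loop:

    void scale(double *a, int n)
    {
      int i;
      for (i = 0; i < n; i++)
        a[i] = a[i] * 2.0;     /* address recomputed as a + i each time */
    }

    void scale_reduced(double *a, int n)
    {
      double *p;
      for (p = a; p < a + n; p++)
        *p = *p * 2.0;         /* induction variable i eliminated */
    }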
-fthread-jumps
-
Perform optimizations where we check to see if a jump branches to a
location where another comparison subsumed by the first is found. If
so, the first branch is redirected to either the destination of the
second branch or a point immediately following it, depending on whether
the condition is known to be true or false.
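For example (an illustrative sketch added here), in the function below the
jump taken when `x > 0' lands on a second comparison whose outcome is already
known on that path, so the jump can be redirected past it:

    int sign(int x)
    {
      int r = 0;
      if (x <= 0)              /* compiled as: if (x > 0) jump past the body */
        r = -1;
      if (x > 0)               /* the jump above lands here, where the
                                  condition is already known to be true, so
                                  that jump can go straight to `r = 1' */
        r = 1;
      return r;
    }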
-fcse-follow-jumps
-
In common subexpression elimination, scan through jump instructions
when the target of the jump is not reached by any other path. For
example, when CSE encounters an `if' statement with an `else'
clause, CSE will follow the jump when the condition tested is false.
-fcse-skip-blocks
-
This is similar to `-fcse-follow-jumps', but causes CSE to
follow jumps which conditionally skip over blocks. When CSE
encounters a simple `if' statement with no `else' clause,
`-fcse-skip-blocks' causes CSE to follow the jump around the
body of the `if'.
-frerun-cse-after-loop
-
Re-run common subexpression elimination after loop optimizations have been
performed.
-frerun-loop-opt
-
Run the loop optimizer twice.
-fgcse
-
Perform a global common subexpression elimination pass.
This pass also performs global constant and copy propagation.
-fdelete-null-pointer-checks
-
Use global dataflow analysis to identify and eliminate useless null
pointer checks. Programs which rely on NULL pointer dereferences not
halting the program may not work properly with this option. Use
`-fno-delete-null-pointer-checks' to disable this optimization for programs
which depend on that behavior.
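For instance (an example added for illustration), the check below is useless
once the pointer has already been dereferenced, because a null `p' would have
halted the program at the dereference:

    int first_element(int *p)
    {
      int v = *p;              /* dereference: p is known non-null after this */
      if (p == 0)              /* provably useless; may be deleted */
        return -1;
      return v;
    }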
-fexpensive-optimizations
-
Perform a number of minor optimizations that are relatively expensive.
-foptimize-register-move
-fregmove
-
Attempt to reassign register numbers in move instructions and as
operands of other simple instructions in order to maximize the amount of
register tying. This is especially helpful on machines with two-operand
instructions. GCC enables this optimization by default with `-O2'
or higher.
Note `-fregmove' and `-foptimize-register-move' are the same
optimization.
-fdelayed-branch
-
If supported for the target machine, attempt to reorder instructions
to exploit instruction slots available after delayed branch
instructions.
-fschedule-insns
-
If supported for the target machine, attempt to reorder instructions to
eliminate execution stalls due to required data being unavailable. This
helps machines that have slow floating point or memory load instructions
by allowing other instructions to be issued until the result of the load
or floating point instruction is required.
-fschedule-insns2
-
Similar to `-fschedule-insns', but requests an additional pass of
instruction scheduling after register allocation has been done. This is
especially useful on machines with a relatively small number of
registers and where memory load instructions take more than one cycle.
-ffunction-sections
-fdata-sections
-
Place each function or data item into its own section in the output
file if the target supports arbitrary sections. The name of the
function or the name of the data item determines the section's name
in the output file.
Use these options on systems where the linker can perform optimizations
to improve locality of reference in the instruction space. HPPA
processors running HP-UX and Sparc processors running Solaris 2 have
linkers with such optimizations. Other systems using the ELF object format
as well as AIX may have these optimizations in the future.
Only use these options when there are significant benefits from doing
so. When you specify these options, the assembler and linker will
create larger object and executable files and will also be slower.
You will not be able to use `gprof' on all systems if you
specify this option, and you may have problems with debugging if
you specify both this option and `-g'.
-fcaller-saves
-
Enable values to be allocated in registers that will be clobbered by
function calls, by emitting extra instructions to save and restore the
registers around such calls. Such allocation is done only when it
seems to result in better code than would otherwise be produced.
This option is always enabled by default on certain machines, usually
those which have no call-preserved registers to use instead.
For all machines, optimization level 2 and higher enables this flag by
default.
-funroll-loops
-
Perform the optimization of loop unrolling. This is only done for loops
whose number of iterations can be determined at compile time or run time.
`-funroll-loops' implies both `-fstrength-reduce' and
`-frerun-cse-after-loop'.
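The transformation can be pictured in source form (a hand-written sketch; the
compiler performs this on its internal representation):

    void increment(int *a, int n)            /* original loop */
    {
      int i;
      for (i = 0; i < n; i++)
        a[i] += 1;
    }

    void increment_unrolled(int *a, int n)   /* unrolled by a factor of four */
    {
      int i;
      for (i = 0; i + 3 < n; i += 4) {
        a[i] += 1;  a[i+1] += 1;  a[i+2] += 1;  a[i+3] += 1;
      }
      for (; i < n; i++)                     /* remaining iterations */
        a[i] += 1;
    }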
-funroll-all-loops
-
Perform the optimization of loop unrolling. This is done for all loops
and usually makes programs run more slowly. `-funroll-all-loops'
implies `-fstrength-reduce' as well as `-frerun-cse-after-loop'.
-fmove-all-movables
-
Forces all invariant computations in loops to be moved
outside the loop.
-freduce-all-givs
-
Forces all general-induction variables in loops to be
strength-reduced.
Note: When compiling programs written in Fortran,
`-fmove-all-movables' and `-freduce-all-givs' are enabled
by default when you use the optimizer.
These options may generate better or worse code; results are highly
dependent on the structure of loops within the source code.
These two options are intended to be removed someday, once
they have helped determine the efficacy of various
approaches to improving loop optimizations.
Please let us (gcc@gcc.gnu.org and fortran@gnu.org)
know how use of these options affects
the performance of your production code.
We're very interested in code that runs slower
when these options are enabled.
-fno-peephole
-fno-peephole2
-
Disable any machine-specific peephole optimizations. The difference
between `-fno-peephole' and `-fno-peephole2' is in how they
are implemented in the compiler; some targets use one, some use the
other, a few use both.
-fbranch-probabilities
-
After running a program compiled with `-fprofile-arcs'
(see section Options for Debugging Your Program or gcc), you can
compile it a second time using
`-fbranch-probabilities', to improve optimizations based on
guessing the path a branch might take.
With `-fbranch-probabilities', GCC puts a `REG_EXEC_COUNT'
note on the first instruction of each basic block, and a
`REG_BR_PROB' note on each `JUMP_INSN' and `CALL_INSN'.
These can be used to improve optimization. Currently, they are only
used in one place: in `reorg.c', instead of guessing which path a
branch is most likely to take, the `REG_BR_PROB' values are used to
exactly determine which path is taken more often.
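A minimal sketch of the workflow (file names and exact commands are
illustrative assumptions, shown in the comment):

    /* 1. gcc -O2 -fprofile-arcs hot.c -o hot          (instrumented build)
       2. ./hot                                        (run to record arc counts)
       3. gcc -O2 -fbranch-probabilities hot.c -o hot  (rebuild using the data) */
    #include <stdio.h>

    int main(void)
    {
      long i, sum = 0;
      for (i = 0; i < 1000000; i++)
        if (i % 1000 == 0)     /* rarely taken; the recorded counts tell the
                                  compiler which path dominates */
          sum += i;
      printf("%ld\n", sum);
      return 0;
    }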
-fno-guess-branch-probability
-
Sometimes gcc will opt to guess branch probabilities when none are
available from either profile directed feedback (`-fprofile-arcs')
or `__builtin_expect'. In a hard real-time system, people don't
want different runs of the compiler to produce code that has different
behavior; minimizing non-determinism is of paramount import. This
switch allows users to reduce non-determinism, possibly at the expense
of inferior optimization.
-fstrict-aliasing
-
Allows the compiler to assume the strictest aliasing rules applicable to
the language being compiled. For C (and C++), this activates
optimizations based on the type of expressions. In particular, an
object of one type is assumed never to reside at the same address as an
object of a different type, unless the types are almost the same. For
example, an `unsigned int' can alias an `int', but not a
`void*' or a `double'. A character type may alias any other
type.
Pay special attention to code like this:
    union a_union {
      int i;
      double d;
    };

    int f() {
      union a_union t;
      t.d = 3.0;
      return t.i;
    }
The practice of reading from a different union member than the one most
recently written to (called "type-punning") is common. Even with
`-fstrict-aliasing', type-punning is allowed, provided the memory
is accessed through the union type. So, the code above will work as
expected. However, this code might not:
    int f() {
      union a_union t;
      int* ip;
      t.d = 3.0;
      ip = &t.i;
      return *ip;
    }
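A common alternative (an addition here, not part of the original text) is to
copy the bytes explicitly, which does not depend on union-based type-punning:

    #include <string.h>

    int f_bytes(void)
    {
      double d = 3.0;
      int i;
      memcpy(&i, &d, sizeof i);   /* copy the first sizeof(int) bytes of d */
      return i;
    }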
Every language that wishes to perform language-specific alias analysis
should define a function that computes, given a tree
node, an alias set for the node. Nodes in different alias sets are not
allowed to alias. For an example, see the C front-end function
`c_get_alias_set'.
-falign-functions
-falign-functions=n
-
Align the start of functions to the next power-of-two greater than
n, skipping up to n bytes. For instance,
`-falign-functions=32' aligns functions to the next 32-byte
boundary, but `-falign-functions=24' would align to the next
32-byte boundary only if this can be done by skipping 23 bytes or less.
`-fno-align-functions' and `-falign-functions=1' are
equivalent and mean that functions will not be aligned.
Some assemblers only support this flag when n is a power of two;
in that case, it is rounded up.
If n is not specified, use a machine-dependent default.
-falign-labels
-falign-labels=n
-
Align all branch targets to a power-of-two boundary, skipping up to
n bytes like `-falign-functions'. This option can easily
make code slower, because it must insert dummy operations for when the
branch target is reached in the usual flow of the code.
If `-falign-loops' or `-falign-jumps' are applicable and
are greater than this value, then their values are used instead.
If n is not specified, use a machine-dependent default which is
very likely to be `1', meaning no alignment.
-falign-loops
-falign-loops=n
-
Align loops to a power-of-two boundary, skipping up to n bytes
like `-falign-functions'. The hope is that the loop will be
executed many times, which will make up for any execution of the dummy
operations.
If n is not specified, use a machine-dependent default.
-falign-jumps
-falign-jumps=n
-
Align branch targets to a power-of-two boundary, for branch targets
where the targets can only be reached by jumping, skipping up to n
bytes like `-falign-functions'. In this case, no dummy operations
need be executed.
If n is not specified, use a machine-dependent default.
-fssa
-
Perform optimizations in static single assignment form. Each function's
flow graph is translated into SSA form, optimizations are performed, and
the flow graph is translated back from SSA form. Users should not
specify this option, since it is not yet ready for production use.
-fdce
-
Perform dead-code elimination in SSA form. Requires `-fssa'. Like
`-fssa', this is an experimental feature.
-fsingle-precision-constant
-
Treat floating point constants as single precision constants instead of
implicitly converting them to double precision.
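For example (an illustration added here), without this option the literal
below has type `double', so the multiplication is done in double precision
and the result converted back to `float':

    float scale(float x)
    {
      return x * 2.5;          /* 2.5 is a double by default; treated as a
                                  float constant with this option */
    }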
-frename-registers
-
Attempt to avoid false dependencies in scheduled code by making use
of registers left over after register allocation. This optimization
will most benefit processors with lots of registers. It can, however,
make debugging impossible, since variables will no longer stay in
a "home register".
--param name=value
-
In some places, GCC uses various constants to control the amount of
optimization that is done. For example, GCC will not inline functions
that contain more than a certain number of instructions. You can
control some of these constants on the command-line using the
`--param' option.
In each case, the value is an integer. The allowable choices for
name are given in the following table:
max-delay-slot-insn-search
- The maximum number of instructions to consider when looking for an
instruction to fill a delay slot. If more than this arbitrary number of
instructions is searched, the time savings from filling the delay slot
will be minimal so stop searching. Increasing values mean more
aggressive optimization, making the compile time increase with probably
small improvement in executable run time.
max-delay-slot-live-search
- When trying to fill delay slots, the maximum number of instructions to
consider when searching for a block with valid live register
information. Increasing this arbitrarily chosen value means more
aggressive optimization, increasing the compile time. This parameter
should be removed when the delay slot code is rewritten to maintain the
control-flow graph.
max-gcse-memory
- The approximate maximum amount of memory that will be allocated in
order to perform the global common subexpression elimination
optimization. If more memory than specified is required, the
optimization will not be done.
max-inline-insns
- If a function contains more than this many instructions, it
will not be inlined. This option is precisely equivalent to
`-finline-limit'.
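For example, several parameters can be set on one command line (the values
below are arbitrary illustrations, not recommendations):

    gcc -O2 --param max-inline-insns=300 --param max-gcse-memory=10000000 foo.c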