Improve handling of adjacent string literals of different types.
Example of adjacent string literals: "ab" L"cd".
In C89, C++98 and C++03, this is undefined. As of C99 and C++11, this is
well defined and the two string literals are concatenated to L"abcd".
C11 and C++11 introduce the utf16, utf32 and (C++ only) utf8 string
types. Concatenating any of these with a regular C string literal
works exactly as in the wide string example above. The result of
having two adjacent string literals with different prefixes is
implementation-defined, unless one is a UTF-8 string literal and the
other is a wide string literal; in that case the behaviour is
undefined.
Ignore the undefined and ill-formed programs (this behaviour is
unchanged) and make sure that concatenating a plain C string literal
with a prefixed one works correctly (in C99 and C++11 and later
versions). This also makes the behaviour consistent: previously,
"ab" L"cd" would result in "abcd" while L"ab" "cd" would result in
L"abcd".
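For illustration, a minimal example of the well-defined cases described
above (assuming a C99/C++11 or later compiler):
```cpp
#include <cassert>
#include <cwchar>

int main() {
    // A plain literal adjacent to a prefixed one takes the prefix of the
    // other literal, regardless of which side the prefix is on.
    const wchar_t *a = "ab" L"cd";   // concatenates to L"abcd"
    const wchar_t *b = L"ab" "cd";   // also concatenates to L"abcd"
    assert(std::wcscmp(a, L"abcd") == 0);
    assert(std::wcscmp(b, L"abcd") == 0);
    return 0;
}
```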
It also means the somewhat awkward updatePropertiesConcatStr() test can
be removed since the added tests would not work if update_properties()
was not called in concatStr().
Since the prefix is stored in the token, testing the type of the string
is not relevant in TestSimplifyTokens. It is tested extensively in
TestToken::stringTypes().
* Set correct type and size of string and char literals
Use the fact that string and char literal tokens store the prefix.
This makes it possible to distinguish between the different kinds of
string literals (e.g., UTF-8 encoded strings, UTF-16 strings, wide
strings, etc.), which have different types.
When the tokens holding the string and character values have the correct
type, it is possible to improve Token::getStrSize() to give the correct
result for all string types. Previously, it would return the number of
characters in the string, i.e., it would give the wrong size unless
the type of the string was char*.
Since strings can now have a size (in number of bytes) that differs
from their length (in number of elements), add a new helper function
that returns the number of characters. Checkers have been updated to
use the correct functions.
Having the size makes it possible to find more problems with prefixed
strings, and to reduce false positives, for example in the buffer
overflow checker.
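To illustrate why size and length must be tracked separately, the same
three-character literal occupies a different number of bytes depending
on its prefix:
```cpp
#include <cstdio>

int main() {
    // Each literal below has 3 characters plus a terminator, but the size
    // in bytes depends on the element type selected by the prefix.
    std::printf("%zu\n", sizeof("abc"));   // 4 (char)
    std::printf("%zu\n", sizeof(u8"abc")); // 4 (UTF-8)
    std::printf("%zu\n", sizeof(u"abc"));  // 8 (char16_t)
    std::printf("%zu\n", sizeof(U"abc"));  // 16 (char32_t)
    std::printf("%zu\n", sizeof(L"abc"));  // 8 or 16, depending on wchar_t
    return 0;
}
```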
Also, improve the stringLiteralWrite error message to print the prefix
of the string (if there is one).
* Add comment and update string length
Keeping the prefix in the token allows cppcheck to print the correct
string and char literals in debug and error messages.
To achieve this, move some of the helper functions from token.cpp to
utils.h so that checks that look at string and char literals can reuse
them. This is a large part of this commit.
Note that the only user visible change is that when string and char
literals are printed in error messages, the prefix is now included.
For example:
    int f() {
        return test.substr( 0 , 4 ) == U"Hello" ? 0 : 1 ;
    }
now prints U"Hello" instead of "Hello" in the error message.
* use range loops
* removed redundant string initializations
* use nullptr
* use proper boolean false
* removed unnecessary continue from end of loop
* removed unnecessary c_str() usage
* use emplace_back()
* removed redundant void arguments
* Add impossible category
* Replace values
* Try to adjust known values
* Add ! for impossible values
* Add impossible with possible values
* Remove contradictions
* Add values when the branch is not dead
* Only copy possible values
* Don't bail on while loops
* Load std lib in valueflow
* Check for function calls
* Fix stl errors
* Fix incorrect impossible check
* Fix heap-after-use error
* Remove impossible values when they are lowered
* Show the bound and remove overlaps
* Infer conditions
* Don't push pointer values through dynamic_cast
* Add test for dynamic_cast issue
* Add shifttoomanybits test
* Add test for div by zero
* Add a test for issue 9315
* Don't make impossible value inconclusive
* Fix FP with shift operator
* Improve handleKnownValuesInLoop for impossible values
* Fix cppcheck warning
* Fix impossible values for ctu
* Bailout for streams
* Check equality conditions
* Fix overflows
* Add regression test for 9332
* Remove duplicate conditions
* Skip impossible values for invalid value
* Check for null
* Rename bound to range
* Formatting
* make ellipsis ... a single token
Using cppcheck -E to preprocess code with ellipsis produces output that
can't be compiled because ... is split into 3 tokens.
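For example, a translation unit containing a variadic declaration (file
and function names here are only illustrative):
```cpp
// example.cpp -- the ellipsis in the parameter list must survive -E intact
int my_printf(const char *fmt, ...);
```
Previously, running `cppcheck -E` on such a file emitted the ellipsis as
three separate tokens, so the preprocessed output was not valid code;
with `...` kept as a single token it can be compiled again.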
* try to fix addon
* template simplifier: refactor TemplateSimplifier::TokenAndName into a class
assert when more than one family flag is set
* fix function parameter names
* Fix for too much information in scope name
When the scope calculation encounters code such as
"friend class X::Y;"
or
"template<> class X<void> {"
it will now reset the additional name component of the scope that is about to be opened.
* Made sure new scope name is reset after being used
* Removed redundant scope calculation
* Add scope propagation code to insertToken
* Add relevant scope code to Token class
* Add code to calculate the scope of Tokens
* Add calculateScopes method to class
* Add missing include for shared_ptr
* Added scopeinfo member to token class
Moved ScopeInfo2 declaration here as well because that's where it needs to be now.
* Added scopeinfo accessors and declaration to class
* Add new method for calculating scopes
This replaces the methods in the TemplateSimplifier which calculate the current scope as the token list is iterated. The old method required checking if the scope had changed for every token multiple times (for multiple iterations), which was surprisingly costly. Calculating scopes in advance like this decreases runtime on a worst-case file by around thirty percent.
ScopeInfo objects are disposed of when the TemplateSimplification is done as they are not used later.
* Add calculateScopes method to header
* Removed code that calculated current scope
This has been replaced by code that calculates the scopes up front and stores them with each token, which is much faster.
* Fixed compile errors from extra parentheses
* Added missing code to fix memory leak
* Added code to actually clean up ScopeInfo structs
* Tidy up a dodgy for loop
* Convert argument to const ref
* Calculate missing scopes
As the template simplifier expands templates and does multiple passes, it needs to make sure all scopes are calculated.
* Remove copying the scope to the next token
This is now done properly when scopes are calculated.
* Remove call to calculateScopes
This is now done by the TemplateSimplifier.
* Recalculate scopes for every pass of simplifyTemplates
* Add code to calculate extra scopes as they are added
I thought that this might be useful for calculating scopes when Tokens are created, but as there are several ways of creating Tokens that don't guarantee that they are placed in a list, it is easier to just calculate scopes when you know you have a list and when you know you're adding to a list.
* Fix several bugs and poorly designed code
Remove the global scopes collection, and clean them up instead by iterating through the tokenlist to find them. This means scopes can be calculated by functions in the Token class as well as in the Tokenizer class without leaking the scope object.
Fix a couple of bugs in the calculateScopes method and make it more efficient.
* Remove unnecessary calls to calculateScopes
* Move brace to correct position
Calculating scopes during insertToken only needs to happen if we created a new Token.
* Handle 'using namespace' declarations separately
This fixes a bug caused by a statement matching 'struct B < 0 > ;'
* Fix argument name mismatch
* Actually use newScopeInfo when inserting Token
* Switch to using shared_ptr to hold scopeInfos
This means ScopeInfo2 objects get properly cleaned up when they are no longer needed.
* Change ScopeInfo member to be a shared_ptr
* Update code to use shared_ptr
* Add missing include for shared_ptr
* Remove unnecessary cleanup code
This has been replaced by shared_ptr for ScopeInfo2 objects.
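A much-simplified sketch of the resulting design, using illustrative
stand-in types (these are not the actual cppcheck Token/ScopeInfo2
classes, and the real matching logic handles far more cases): scopes are
computed once in a forward pass, attached to every token, and held
through shared_ptr so they are released automatically when the last
token referring to them goes away.
```cpp
#include <memory>
#include <string>
#include <vector>

// Illustrative stand-ins for the real token and scope-info types.
struct ScopeInfo {
    std::string name;                   // fully qualified name, e.g. "ns::C"
    std::shared_ptr<ScopeInfo> parent;  // enclosing scope, if any
};

struct Tok {
    std::string str;                    // token text
    std::shared_ptr<ScopeInfo> scope;   // scope this token belongs to
};

// Single forward pass: open a scope at "class/namespace NAME {", close it
// at "}", and tag every token with the scope that is current at that point.
void calculateScopes(std::vector<Tok> &tokens) {
    auto current = std::make_shared<ScopeInfo>();
    for (std::size_t i = 0; i < tokens.size(); ++i) {
        if (tokens[i].str == "{" && i >= 2 &&
            (tokens[i - 2].str == "class" || tokens[i - 2].str == "namespace")) {
            auto inner = std::make_shared<ScopeInfo>();
            inner->name = current->name.empty()
                              ? tokens[i - 1].str
                              : current->name + "::" + tokens[i - 1].str;
            inner->parent = current;
            current = inner;
        } else if (tokens[i].str == "}" && current->parent) {
            current = current->parent;
        }
        tokens[i].scope = current;
    }
}
```
With the scope stored on each token, later passes can read it directly
instead of re-deriving it, which is where the speedup described above
comes from.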
The new check will warn for cases where searching in an associative container happens before insertion, like this:
```cpp
void f1(std::set<unsigned>& s, unsigned x) {
    if (s.find(x) == s.end()) {
        s.insert(x);
    }
}
void f2(std::map<unsigned, unsigned>& m, unsigned x) {
    if (m.find(x) == m.end()) {
        m.emplace(x, 1);
    } else {
        m[x] = 1;
    }
}
```
In the case of the map it could be written as `m[x] = 1` as it will create the key if it doesn't exist, so the extra search is not necessary.
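For example, the map variant above could be reduced to (equivalent here because both branches assign the same value):
```cpp
#include <map>

void f2(std::map<unsigned, unsigned>& m, unsigned x) {
    // operator[] default-constructs the value if the key is missing,
    // so the preceding find() is not needed.
    m[x] = 1;
}
```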
I have this marked as `performance` as it is mostly a performance concern, but it could possibly also indicate a copy-paste error, although I don't think that is common.
Change the astStringVerbose() recursion to extend a string instead of
returning one. This has the benefit that for tokens where the recursion
runs deep (typically large arrays), the time savings can be substantial
(see comments on benchmarks further down).
The reason is that previously, for each token, the astString of its
operands was constructed and then appended to this token's astString.
This led to a lot of unnecessary string copying (and, with that,
allocations). Instead, by passing the string by reference, the number
of temporary strings is greatly reduced.
Another way of seeing it is that previously, the string was constructed
from end to beginning, but now it is constructed from the beginning to
end. There was no notable speedup by preallocating the entire string
using string::reserve() (at least not on Linux).
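A reduced sketch of the pattern, using an illustrative binary AST node
rather than the real Token class (the function names below are made up
for the example and are not the actual astStringVerbose() code):
```cpp
#include <string>

struct Node {
    std::string str;
    const Node *op1 = nullptr;
    const Node *op2 = nullptr;
};

// Before: each recursive call builds and returns a new string, so the text
// of the operands is copied again at every level above them.
std::string astStringCopy(const Node *n) {
    if (!n)
        return "";
    return n->str + "(" + astStringCopy(n->op1) + "," + astStringCopy(n->op2) + ")";
}

// After: the caller passes one string by reference and each call only
// appends to it, so every character is written exactly once.
void astStringAppend(const Node *n, std::string &out) {
    if (!n)
        return;
    out += n->str;
    out += '(';
    astStringAppend(n->op1, out);
    out += ',';
    astStringAppend(n->op2, out);
    out += ')';
}
```
The second version writes each character once into the caller's buffer,
which is what removes the repeated copying for deep or wide trees.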
To benchmark, the changes and master were tested on Linux using the
commands:
make
time cppcheck --debug --verbose $file >/dev/null
i.e., the cppcheck binary was compiled with the settings in the
Makefile. Printing the output to the screen or to a file will of course
take longer.
In Trac ticket #8355, which triggered this change, an example file from
the Wine repository was attached. Running the above cppcheck command on
master took 24 minutes; with the changes in this commit, it took 22
seconds.
Another test was made on lib/tokenlist.cpp in the cppcheck repo, which
is a more "normal" file. On that file there was no measurable time
difference.
A synthetic benchmark was generated to illustrate the effects on dumping
the AST for arrays of different sizes. The generated code looked as
follows:
const int array[] = {...};
with different numbers of elements. The results are as follows (times are
in seconds):
N      master  optimized
10     0.1     0.1
100    0.1     0.1
1000   2.8     0.7
2000   19      1.8
3000   53      3.8
5000   350     10
10000  3215    38
As we can see, for small arrays, there is no time difference, but for
large arrays the time savings are substantial.
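For reference, an input file of the kind used in the synthetic benchmark
can be produced with a short generator along these lines (the element
values are arbitrary; only the element count matters):
```cpp
#include <cstdio>

int main() {
    const int N = 10000;  // number of array elements to generate
    std::printf("const int array[] = {");
    for (int i = 0; i < N; ++i)
        std::printf("%s%d", i ? "," : "", i);
    std::printf("};\n");
    return 0;
}
```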