* donate_cpu_lib.py: use `os.path.join()`
* donate-cpu: removed remaining usage of `os.chdir()`
* donate_cpu_lib.py: moved library includes code into class
* donate_cpu_lib.py: pre-compile library include regular expressions
* donate_cpu_lib.py: pre-compile some more regular expressions
* donate_cpu_lib.py: small unpack_package() cleanup and optimization
* donate_cpu_lib.py: added some information about the extracted files to unpack_package()
* donate_cpu_lib.py: bumped version
* added test_donate_cpu_lib.py
* donate_cpu_lib.py: greatly improved `LibraryIncludes.get_libraries()` performance
scan each file only once, and only for libraries that were not detected yet (see the sketch after this list)
* test_donate_cpu_lib.py: fix for Python 3.5
* scriptcheck.yml: added `-v` to pytest calls so we get the complete diff on assertions
* fixed `test_arguments_regression()` Python tests with additional pytest arguments
* donate_cpu_lib.py: use `subprocess.check_call()`
* test_donate_cpu_lib.py: sort results to address differences in order with Python 3.5
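A minimal sketch of the scan-once idea behind `LibraryIncludes.get_libraries()`; the pattern table, file filter, and function layout here are illustrative, not the real script's:

```python
import os
import re

# Two illustrative patterns; the real script knows many more libraries.
LIBRARY_INCLUDES = {
    'qt': re.compile(r'#\s*include\s*<Q\w+'),
    'boost': re.compile(r'#\s*include\s*<boost/'),
}

def get_libraries(source_path):
    # Walk the extracted sources once; drop a library from the search
    # set as soon as one include has matched it, and stop entirely once
    # everything has been detected.
    detected = set()
    undetected = dict(LIBRARY_INCLUDES)
    for root, _, files in os.walk(source_path):
        if not undetected:
            break
        for name in files:
            if not name.endswith(('.c', '.cc', '.cpp', '.cxx', '.h', '.hpp')):
                continue
            with open(os.path.join(root, name), errors='ignore') as f:
                text = f.read()
            for lib in list(undetected):
                if undetected[lib].search(text):
                    detected.add(lib)
                    del undetected[lib]
    return detected
```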
* Better git usage in donate-cpu.py to reduce bandwidth and disk usage
Main changes:
* Bump client version
* Move try+retry logic to function try_retry to reduce duplication
* Use exponential backoff for try_retry (see the sketch after this list)
* git clone with --depth=1 to reduce bandwidth and disk use
* Use multiple worktrees to work with multiple versions, instead of back-and-forth checkouts
* donate-cpu.py fixes for review comments and automated check failures
* Move compile_cppcheck within the (if ver == 'main') branch to avoid duplicate compile_cppcheck+compile_version calls
* Use classic format syntax for Python 3.5 compatibility
* Fix undefined CalledProcessError detected by pylint
* donate-cpu.py code changes following code review
* Migrate the existing "cppcheck" directory if available instead of doing a fresh "git clone"
* Logging message tweaks
* Use subprocess' cwd parameter instead of os.chdir() to avoid risk around changing and not restoring the working directory
* Update tools/test-my-pr.py to account for donate_cpu_lib changes
* donate-cpu.py: ensure correct workspace locations with relative --work-path
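A rough sketch of what a `try_retry` helper with exponential backoff could look like; only the function name and the backoff idea come from the list above, the signature is an assumption:

```python
import time

def try_retry(fun, fargs=(), max_tries=5, sleep_duration=5.0, sleep_factor=2.0):
    # Call fun(*fargs) until it reports success, sleeping with
    # exponential backoff between failed attempts.
    for i in range(max_tries):
        if fun(*fargs):
            return True
        if i + 1 < max_tries:
            print('{} failed, retrying in {} seconds'.format(fun.__name__, sleep_duration))
            time.sleep(sleep_duration)
            sleep_duration *= sleep_factor
    return False
```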
This adds a timeout of 60 minutes for the Cppcheck analysis.
Timed out results do not count as crashes but they are uploaded and
marked with "TO!" in the list of the latest results. No "diff" is
generated for timed out results so they do not add wrong entries to
the "Diff report".
In test-my-pr.py the timed out results are listed separately just like
the crashes.
donate-cpu-server.py: Add timeout report
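A minimal sketch of the timeout handling, assuming `subprocess.Popen` plus `communicate(timeout=...)`; the sentinel and the function name are illustrative:

```python
import subprocess

CPPCHECK_TIMEOUT = 60 * 60  # 60 minutes

def run_cppcheck(cmd):
    # Kill the analysis after the timeout; a None return code tells the
    # caller to mark the result with 'TO!' instead of counting a crash.
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE, universal_newlines=True)
    try:
        stdout, _ = p.communicate(timeout=CPPCHECK_TIMEOUT)
    except subprocess.TimeoutExpired:
        p.kill()
        p.communicate()  # reap the killed process
        return None, ''
    return p.returncode, stdout
```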
Using .tar.xz packages adds about 4500 additional packages that can be
tested, and updates many existing packages to a more recent version that
is only available as a .tar.xz file.
Related ticket: https://trac.cppcheck.net/ticket/9508
donate-cpu.py: Require at least Python 3.4
xz support was added with Python 3.3.
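The xz support comes from the `lzma` module that ships with Python 3.3+, and `tarfile` picks it up transparently. A minimal sketch (the function name is borrowed from the script, the rest is illustrative):

```python
import tarfile

def unpack_package(tgz, target_dir):
    # 'r:*' lets tarfile autodetect gzip, bzip2 and xz compression; xz
    # needs the lzma module shipped with Python 3.3+, hence the new
    # minimum version.
    with tarfile.open(tgz, 'r:*') as tf:
        tf.extractall(target_dir)
```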
Yesterday, I observed that a client with a wrong jobs setting
(only "-j") requested one package after another and always uploaded
results that merely said that the argument "-j" is invalid for
Cppcheck.
This check should avoid such cases where results are overwritten with
useless data and the server is kept busy for nothing.
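A sketch of such a client-side sanity check; the exact validation in the script may differ and the function name is hypothetical:

```python
import re
import sys

def validate_jobs(jobs):
    # Reject a bare '-j' (and anything else that is not '-j<number>')
    # before any package is downloaded, so a misconfigured client does
    # not keep uploading results that merely say the argument is
    # invalid.
    if re.match(r'-j[0-9]+$', jobs) is None:
        print('Error: invalid jobs argument "{}"'.format(jobs))
        sys.exit(1)
```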
* donate-cpu.py: Add internal timing information of Cppcheck to output
The option "--showtime=top5" is added to the Cppcheck command line.
The timing output is collected and, for HEAD only, shown in the new
category "head-timing-info" in the results output.
The timing output is indented with one space, so even in the
unlikely case that a function is named "head result:" or "diff:" it does
not break the parser in the server.
* donate-cpu.py: Also print the "old" timing information for comparison
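A sketch of the indentation trick; all names are illustrative:

```python
def head_timing_block(showtime_output):
    # Indent every collected '--showtime=top5' line with one space so a
    # function named 'head result:' or 'diff:' cannot be mistaken for a
    # section marker by the server's parser.
    lines = ['head-timing-info:']
    lines.extend(' ' + line for line in showtime_output.splitlines())
    return '\n'.join(lines) + '\n'
```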
Some projects only use this (older?) style of Qt header inclusion.
There are (older) books and examples which use this style, too.
It seems to be perfectly valid, so we should support it.
There is no point in checking which libraries to use for each cppcheck
version since the result does not change between versions. Refactor the
checking into a separate function and run it once instead. This halves
the time it takes to check for libraries.
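A sketch of the refactoring, with a stubbed detection function; all names are hypothetical:

```python
def get_libraries(source_path):
    # Stub standing in for the include scan; the real function walks
    # the extracted sources.
    return ['posix']

def scan_package(cppcheck_versions, source_path):
    # The set of used libraries does not depend on the Cppcheck version
    # being tested, so detect it once per package and reuse the result.
    libraries = get_libraries(source_path)
    for ver in cppcheck_versions:
        print('scanning with {} using libraries {}'.format(ver, libraries))
```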
I looked into many packages where the detection failed and they all use
`#include "ruby.h"`. Some of these packages seem to be Ruby modules,
others seem to be "normal" software.
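A minimal sketch of a detection pattern covering the quoted include (the real pattern table may differ):

```python
import re

# The packages with failed detection all use the quoted form.
RE_RUBY_H = re.compile(r'#\s*include\s*"ruby\.h"')

def uses_ruby(file_text):
    return RE_RUBY_H.search(file_text) is not None
```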
This adds one line in the package report to show the git hash and commit
date. This makes it possible to see exactly which revision the result
was obtained with.
The cppcheck head info line is now shown as
head-info: 1a25d3f9e (2019-08-30 18:34:14 +0200)
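A sketch of how such a line can be produced with git; the helper name is hypothetical:

```python
import subprocess

def get_cppcheck_info(repo_dir):
    # '%h (%ci)' produces e.g. '1a25d3f9e (2019-08-30 18:34:14 +0200)'.
    return subprocess.check_output(
        ['git', 'show', '-s', '--format=%h (%ci)', 'HEAD'],
        cwd=repo_dir, universal_newlines=True).strip()
```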
Check if fetching and updating the cppcheck sources are successful. If
not successful after five retries, try removing the existing clone and
checking it out again.
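A sketch of the fetch-with-fallback logic; the retry count comes from the text above, everything else is illustrative:

```python
import shutil
import subprocess

def fetch_cppcheck(repo_dir, max_tries=5):
    # Try fetching a few times; if that keeps failing, assume the clone
    # is broken, remove it and clone from scratch.
    for _ in range(max_tries):
        if subprocess.call(['git', 'fetch', '--all'], cwd=repo_dir) == 0:
            return True
    shutil.rmtree(repo_dir, ignore_errors=True)
    return subprocess.call(['git', 'clone', '--depth=1',
                            'https://github.com/danmar/cppcheck.git',
                            repo_dir]) == 0
```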
Sometimes no relevant source files (.c, .cpp, ...) are extracted, but
other files (.h, ...) are. There could be only header files, for example.
Then Cppcheck returns with exit code 1 and prints an error message. This
is not a crash and is no longer reported as such.
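A sketch of the guard; the extension list is illustrative:

```python
def has_source_files(extracted_files):
    # Only run the analysis if at least one translation unit was
    # extracted; a header-only tree makes Cppcheck exit with code 1,
    # which must not be counted as a crash.
    return any(f.endswith(('.c', '.cc', '.cpp', '.cxx', '.c++'))
               for f in extracted_files)
```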
* donate-cpu.py: treat signal 6 (SIGABRT) as crash as well so we get a stack trace in the result
* donate-cpu.py: simplified returncode/signal check / also generate stack traces for SIGILL, SIGFPE, SIGBUS
* donate-cpu.py: avoid usage of "not" in if
* donate-cpu.py: do not overwrite returncode in crash handling
* donate-cpu.py: made exitcodes > 0 negative so they will be detected as a crash / changed the ThreadExecutor error to -222 (see the sketch after this list)
* donate-cpu.py: unconditionally upload results and info now that errors are properly handled - will also properly clear the result/info in case there are no more messages
* donate-cpu.py: bumped version
* donate-cpu.py: added stdout to output in case of exitcode != 0
* donate-cpu.py: do not scan packages with no relevant files
* donate-cpu.py: bumped version
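A sketch of the resulting return-code convention; the signal list is taken from the items above, the names are hypothetical:

```python
import signal

# Signals for which a gdb stack trace is added to the result.
CRASH_SIGNALS = (signal.SIGSEGV, signal.SIGABRT, signal.SIGILL,
                 signal.SIGFPE, signal.SIGBUS)

def is_crash(returncode):
    # subprocess reports a terminating signal as a negative return
    # code; internal errors with exitcode > 0 are negated by the client
    # so every failure ends up below zero.
    return returncode < 0

def wants_stack_trace(returncode):
    return returncode < 0 and -returncode in CRASH_SIGNALS
```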
If an upload fails, the reason (exception text) is now printed.
Fix: if the last retry failed, do not wait before continuing.
Remove some obsolete "fast" code in the uploadResults() function.
Tested with Python 2.7.16 and Python 3.6.8.
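A sketch of the retry loop described above; the transport and the delay are illustrative assumptions:

```python
import socket
import time

def upload_results(data, server_address, max_retries=4):
    # server_address is a (host, port) tuple; print the exception text
    # on failure and skip the sleep after the final failed attempt.
    for retry in range(max_retries):
        try:
            with socket.create_connection(server_address) as sock:
                sock.sendall(data)
            return True
        except Exception as e:
            print('Upload failed: {}'.format(e))
            if retry + 1 < max_retries:
                time.sleep(30)
    return False
```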
* threadexecutor.cpp: streamlined error messages
* donate-cpu.py: detect additional signals and exitcode != 0 as crash as well and (ab)use elapsedTime to make the errorcode visible in the output / also detect ThreadExecutor issues
* donate-cpu.py: bumped version
* donate-cpu.py: fixed detection of ThreadExecutor errors
* Get stack traces for daca@home crashes
If a command in daca@home crashes, execute it again within gdb to get a stack trace (see the sketch after this list).
* donate-cpu.py: added "gdb" to checkRequirements()
* donate-cpu.py: handle wget failures
* donate-cpu.py: added --no-upload option to disable all uploads
* donate-cpu.py: set max_packages to 1 if --package is provided to avoid endless processing of the same package
* donate-cpu.py: no longer treat missing sources as a crash
* donate-cpu.py: fixed wget "http://: Invalid host name." error caused by empty argument in subprocess.call()
* donate-cpu.py: added --no-upload to --help
* donate-cpu.py: detect crashes when using -j1
* donate-cpu.py: added -g to compiler flags
* donate-cpu.py: fixed gdb call and stacktrace printing / always pass "-j1" to gdb call so the exception will actually occur in the application
* donate-cpu.py: removed left-over --verbose from wget call
* donate-cpu.py: removed unnecessary break
* donate-cpu.py: only use gdb for crash in head run / actually provide the stack trace for the output
* donate-cpu.py: include the last checked file with the stack trace
* donate-cpu.py: removed unnecessary wget() call and a sleep in it / also inverted some logic
* donate-cpu.py: small hasInclude() optimization
* donate-cpu.py: bumped version number
* donate-cpu.py: detect start of gdb output when Cygwin is used
The Cygwin output looks like this:
Thread 1 "cppcheck" received signal SIGSEGV, Segmentation fault.
Co-Authored-By: firewave <firewave@users.noreply.github.com>
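A sketch of the gdb re-run and of cutting the output at the signal line; the gdb flags are standard batch-mode options, the function name is hypothetical:

```python
import subprocess

def get_stack_trace(cmd):
    # Re-run the crashed command under gdb in batch mode; cmd must use
    # '-j1' so the crash happens in the cppcheck process itself rather
    # than in a forked worker.
    gdb_cmd = ['gdb', '--batch', '--eval-command=run',
               '--eval-command=bt', '--return-child-result',
               '--args'] + cmd
    p = subprocess.Popen(gdb_cmd, stdout=subprocess.PIPE,
                         universal_newlines=True)
    stdout, _ = p.communicate()
    # Cut off gdb's startup chatter. Cygwin prints
    # 'Thread 1 "cppcheck" received signal ...' instead of the usual
    # 'Program received signal ...', so match the common suffix.
    pos = stdout.find('received signal')
    if pos >= 0:
        line_start = stdout.rfind('\n', 0, pos) + 1
        return stdout[line_start:]
    return stdout
```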
The official documentation recommends including the Python C API via
`#include "Python.h"`:
https://docs.python.org/3/c-api/intro.html
Many projects do it exactly this way, which is why the client script
often does not detect the usage of the Python C API.
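A sketch of a pattern that accepts both styles:

```python
import re

# Accept both the documented '#include "Python.h"' and the angle
# bracket variant '#include <Python.h>'.
RE_PYTHON_H = re.compile(r'#\s*include\s*["<]Python\.h[">]')

def uses_python_c_api(file_text):
    return RE_PYTHON_H.search(file_text) is not None
```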