Package results can now contain output like:
```
head results:
Checking temp/openvdb/Platform.cc: __GNUC__=1...
[New Thread 7892.0x91c]
```
"New Thread 7892.0x91c" was wrongly identified as messageId in the HEAD
report.
This commit adds code to skip lines that start with `[` or where the
messageId contains at least one space.
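A minimal sketch of that filtering, assuming result lines carry their messageId in trailing brackets (the real parsing in donate-cpu-server.py is more involved):
```
def extract_message_id(line):
    line = line.strip()
    # Skip gdb noise such as "[New Thread 7892.0x91c]".
    if line.startswith('[') or not line.endswith(']'):
        return None
    message_id = line[line.rfind('[') + 1:-1]
    # A real messageId never contains a space.
    if ' ' in message_id:
        return None
    return message_id
```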
* donate-cpu.py: made exit codes > 0 negative so they will be detected as a crash / changed the ThreadExecutor error to -222 (see the sketch after this list)
* donate-cpu.py: unconditionally upload results and info now that errors are properly handled - will also properly clear the result/info in case there are no more messages
* donate-cpu.py: bumped version
* donate-cpu.py: added stdout to output in case of exitcode != 0
* donate-cpu.py: do not scan packages with no relevant files
* donate-cpu.py: bumped version
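A hedged sketch of the exit-code handling mentioned in the first item above; the function name is illustrative, only the -222 value for ThreadExecutor errors comes from the change itself:
```
def normalize_exit_code(returncode, thread_executor_error=False):
    # ThreadExecutor problems are reported with the dedicated code -222.
    if thread_executor_error:
        return -222
    # Positive exit codes are flipped to negative so the existing
    # "negative means crash" handling picks them up.
    if returncode > 0:
        return -returncode
    return returncode
```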
If an upload fails, the reason (exception text) is now printed.
Fix: if the last retry fails, do not wait before continuing.
Remove some obsolete "fast" code in the uploadResults() function.
Tested with Python 2.7.16 and Python 3.6.8.
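A simplified sketch of that retry behaviour; the callable, retry count and sleep interval are assumptions, the real uploadResults() differs:
```
import time

def upload_with_retries(upload, data, max_retries=4):
    for attempt in range(max_retries):
        try:
            upload(data)
            return True
        except Exception as e:
            # Print the reason so failed uploads are no longer silent.
            print('Upload failed: ' + str(e))
            # Do not sleep after the last retry has failed.
            if attempt < max_retries - 1:
                time.sleep(30)
    return False
```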
Since the directory for the results no longer exists on the server,
the server currently crashes every time older clients try to upload
experimental fast results via "write-fast" command.
Now this command is just ignored so the server is instantly ready
again after a "write-fast" command.
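Roughly the idea, with the connection handling simplified and handle_write_result() as a hypothetical stand-in for the existing upload handler:
```
def handle_command(cmd, connection):
    if cmd.startswith('write-fast'):
        # Obsolete experimental "fast" results: just ignore the upload
        # so the server keeps running.
        connection.close()
    elif cmd.startswith('write'):
        handle_write_result(cmd, connection)  # hypothetical existing handler
    else:
        connection.close()
```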
* donate-cpu-server.py: increased "Package" column width for latest report and small cleanup
* donate-cpu-server.py: added date and time to crash report
* donate-cpu-server.py: simplified strDateTime()
* donate-cpu-server.py: add stale report to show results which are older than 30 days
* donate-cpu-server.py: added version and some logging
* threadexecutor.cpp: streamlined error messages
* donate-cpu.py: detect additional signals and exitcode != 0 as a crash as well, and (ab)use elapsedTime to make the error code visible in the output / also detect ThreadExecutor issues
* donate-cpu.py: bumped version
* donate-cpu.py: fixed detection of ThreadExecutor errors
* Get stack traces for daca@home crashes
If a command in daca@home crashes, execute it again within gdb to get a stack trace (a sketch of such a gdb invocation follows at the end of this entry).
* donate-cpu.py: added "gdb" to checkRequirements()
* donate-cpu.py: handle wget failures
* donate-cpu.py: added --no-upload option to disable all uploads
* donate-cpu.py: set max_packages to 1 if --package is provided to avoid endless processing of the same package
* donate-cpu.py: no longer treat missing sources as a crash
* donate-cpu.py: fixed wget "http://: Invalid host name." error caused by empty argument in subprocess.call()
* donate-cpu.py: added --no-upload to --help
* donate-cpu.py: detect crashes when using -j1
* donate-cpu.py: added -g to compiler flags
* donate-cpu.py: fixed gdb call and stacktrace printing / always pass "-j1" to gdb call so the exception will actually occur in the application
* donate-cpu.py: removed left-over --verbose from wget call
* donate-cpu.py: removed unnecessary break
* donate-cpu.py: only use gdb for crash in head run / actually provide the stack trace for the output
* donate-cpu.py: include the last checked file with the stack trace
* donate-cpu.py: removed unnecessary wget() call and a sleep in it / also inverted some logic
* donate-cpu.py: small hasInclude() optimization
* donate-cpu.py: bumped version number
* donate-cpu.py: detect start of gdb output when Cygwin is used
The Cygwin output looks like this:
```
Thread 1 "cppcheck" received signal SIGSEGV, Segmentation fault.
```
Co-Authored-By: firewave <firewave@users.noreply.github.com>
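A rough sketch of such a gdb re-run, assuming `cmd` is the full cppcheck command line as a single string; the actual client code differs in detail:
```
import re
import subprocess

def get_stack_trace(cmd):
    # Force "-j1" so the crash occurs inside cppcheck itself, not a child.
    args = re.sub(r'-j\d+', '-j1', cmd).split()
    p = subprocess.Popen(['gdb', '--batch', '-ex', 'run', '-ex', 'bt',
                          '--args'] + args,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, _ = p.communicate()
    output = stdout.decode('utf-8', 'ignore')
    # The trace starts where gdb reports the signal; on Cygwin that line
    # reads: Thread 1 "cppcheck" received signal SIGSEGV, ...
    pos = output.find('received signal')
    return output[output.rfind('\n', 0, pos) + 1:] if pos >= 0 else output
```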
The official documentation recommends including the Python C API via
`#include "Python.h"`:
https://docs.python.org/3/c-api/intro.html
Many projects do it exactly this way, which is why the client script
often does not detect the usage of the Python C API.
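A simplified sketch of how the client could match both include forms; the real hasInclude() logic in donate-cpu.py covers many more libraries and headers:
```
import re

def uses_python_c_api(source_text):
    # Accept both the <Python.h> and the "Python.h" include forms.
    return re.search(r'^\s*#\s*include\s*[<"]Python\.h[>"]',
                     source_text, re.MULTILINE) is not None
```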
The client script will exit after the specified number of packages
has been processed. 0 means no limit.
Useful for example to regularly quit the script, check for updates to
the client and start it again. Or as an alternative to the `--stop-time`
argument.
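The termination check boils down to something like this sketch (the function name is made up; the client structures this differently):
```
def package_limit_reached(packages_processed, max_packages):
    # 0 keeps the old behaviour of processing packages indefinitely.
    return max_packages != 0 and packages_processed >= max_packages
```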
The `iteritems()` function of `dict`s no longer exists in Python 3. The
recommended alternative is `items()`, which also works with Python 2.
The next issue is that lambdas can no longer unpack tuple parameters
in Python 3. It would be possible to use some workaround and still use
a lambda, but using `operator.itemgetter(1)` instead is faster and the
recommended method in such a case.
The syntax is now compatible with Python 2 and 3 but the server script
still does not work with Python 3. For example `socket.recv()` returns
`bytes` in Python 3 and `str` in Python 2. Currently `str` is expected
so it does not work with Python 3.
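For illustration, the Python 2/3 compatible pattern looks like this (the dictionary content is made up):
```
import operator

counts = {'misra': 3, 'cert': 7, 'posix': 1}
# items() + operator.itemgetter(1) replaces iteritems() and a
# tuple-unpacking lambda such as `lambda (k, v): v`.
most_reported = sorted(counts.items(), key=operator.itemgetter(1),
                       reverse=True)
# -> [('cert', 7), ('misra', 3), ('posix', 1)]
```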
In my tests there were about 1500 additional packages
available as bz2 on the server.
For some packages a newer version is now used if it is
only available as a .tar.bz2 archive.
The donate-cpu.py client is tested to work with .tar.bz2
files under Python 2.7.15 and 3.6.8.
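As a minimal illustration (not necessarily how donate-cpu.py extracts archives), Python's tarfile module can open both compression formats transparently under Python 3:
```
import tarfile

def extract_package(filename, destination):
    # 'r:*' lets tarfile pick the decompressor (.tar.xz, .tar.bz2, .tar.gz).
    with tarfile.open(filename, 'r:*') as tf:
        tf.extractall(destination)
```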
Python 3 decodes the text directly when it is read(). If there is any
invalid UTF-8 character in the text, a UnicodeDecodeError is raised.
Opening the file with `errors='ignore'` avoids the exception and simply
ignores the invalid characters. Since this parameter is only available
in Python 3, extra code is needed for older versions.
The test script has been enhanced. It now also uses a package which
contains a file with at least one invalid UTF-8 character.
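The version-dependent open() looks roughly like this sketch (the function name is illustrative):
```
import sys

def read_source(path):
    if sys.version_info[0] >= 3:
        # Python 3: decode as UTF-8 and silently drop invalid characters.
        with open(path, encoding='utf-8', errors='ignore') as f:
            return f.read()
    # Python 2: read the raw bytes, no decoding takes place.
    with open(path) as f:
        return f.read()
```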
Now also found:
- Includes directly at the beginning of a file
- Indented includes
- Includes where there is no white-space between "include" and the header name
libcerror_error_set() is currently the function for which daca@home
most often reports a missing configuration (more than 80,000 times).
Official repository of libcerror: https://github.com/libyal/libcerror
The library configuration has been tested with the library libvhdi:
ftp://ftp.se.debian.org/debian/pool/main/libv/libvhdi/libvhdi_20181227.orig.tar.gz
This detects more includes / headers. For example, includes like
"# include <gtk/gtk.h>" with a space before "include", as used in
the package http://cppcheck.osuosl.org:8000/gbatnav, are now also
detected.
The regex search also searches all includes for one library in one go
instead of one include per loop.
Tested with several packages to make sure libraries that were detected
before are still detected.
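A sketch of searching for all of a library's headers with a single regular expression; the header names below are examples, the real list comes from the client's library table:
```
import re

gtk_headers = ['gtk/gtk.h', 'glib.h', 'glib/gi18n.h']
# One compiled pattern with alternation replaces a per-include loop.
gtk_include_re = re.compile(
    r'^\s*#\s*include\s*[<"](' +
    '|'.join(re.escape(h) for h in gtk_headers) + r')[>"]',
    re.MULTILINE)

def uses_gtk(source_text):
    return gtk_include_re.search(source_text) is not None
```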
If "head" and "OLD_VERSION" both crash there are no messages and the
variable `results_exist` is set to False. But the results must still be
uploaded so the crashes are saved as well.
Tested with the package http://cppcheck.osuosl.org:8000/double-conversion
Ignore normal results (not fast or info) where the diff was made against the wrong OLD_VERSION. This avoids unwanted results when, for example, a client is still analyzing an old package while the OLD_VERSION in the server script has already been changed.
Results missing the Cppcheck version info completely are also ignored.
Tested locally with correct and wrong version numbers.
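A rough sketch of that server-side filter; OLD_VERSION and the format of the version line inside an uploaded result are assumptions made for illustration:
```
OLD_VERSION = '1.88'

def result_is_acceptable(result_text):
    for line in result_text.splitlines():
        if line.startswith('cppcheck:'):
            # Only keep diffs that were made against the current OLD_VERSION.
            return OLD_VERSION in line
    # No Cppcheck version information at all: ignore the result.
    return False
```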