Sometimes, given the same (16-bit TIF) input, one wants to generate a variety of J2C outputs (say, 16-, 12-, and 10-bit). This patch allows one to downsample input files to a lower bit depth, and so makes it easier to automate OpenJPEG in the mass generation of J2Cs without having to pipe through an image processing program.
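For reference, "downsample" here refers to the sample precision rather than the spatial resolution. A minimal illustrative sketch (not the patch's actual code) of what reducing a 16-bit sample to a lower target bit depth amounts to:
```
#include <stdint.h>

/* Illustrative only: rescale a sample from src_bits to dst_bits of precision
 * (e.g. 16 -> 12 or 16 -> 10), with rounding. */
static uint16_t reduce_bit_depth(uint16_t sample, int src_bits, int dst_bits)
{
    const uint32_t src_max = (1u << src_bits) - 1u;
    const uint32_t dst_max = (1u << dst_bits) - 1u;
    return (uint16_t)(((uint32_t)sample * dst_max + src_max / 2u) / src_max);
}
```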
The bpp member of the opj_image_comp and opj_image_cmptparm structures
was redundant with prec, and almost never set by the library, except
by opj_image_create(). This change should hopefully not impact existing,
working users of the API, which should already have been using prec to get
things working.
Fixes #1379
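A minimal sketch of the intended usage after this change, assuming the public opj_image_create() API: the component bit depth is described via prec alone.
```
#include <string.h>
#include <openjpeg.h>

/* Sketch: describe a single 512x512, unsigned, 12-bit component using prec
 * only; bpp is not needed for anything. */
static opj_image_t *create_12bit_gray_image(void)
{
    opj_image_cmptparm_t cmptparm;
    memset(&cmptparm, 0, sizeof(cmptparm));
    cmptparm.dx = 1;
    cmptparm.dy = 1;
    cmptparm.x0 = 0;
    cmptparm.y0 = 0;
    cmptparm.w = 512;
    cmptparm.h = 512;
    cmptparm.sgnd = 0;
    cmptparm.prec = 12;   /* bit depth goes in prec */

    return opj_image_create(1, &cmptparm, OPJ_CLRSPC_GRAY);
}
```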
There are a few limitations:
- mixed mode (HT and regular code blocks) is not supported.
- ROI in HT blocks is not supported.
- Placeholder passes are not supported.
- Multiple HT sets are not supported, only a single HT set.
- there are known issues with some compliance testing files related to
the parsing of packet headers.
Support for writing TLM markers was already there, but restricted to Cinema
and IMF profiles, and to at most 255 tiles.
* Add -TLM switch to opj_compress
* Make the opj_encoder_set_extra_options() function accept a TLM=YES option.
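A minimal sketch of the API side, assuming a codec that has already been set up with opj_setup_encoder() and on which opj_start_compress() has not yet been called; the option is passed as a NULL-terminated list of strings:
```
#include <openjpeg.h>

/* Sketch: ask the encoder to write TLM markers. */
static OPJ_BOOL enable_tlm(opj_codec_t *codec)
{
    const char * const options[] = { "TLM=YES", NULL };
    return opj_encoder_set_extra_options(codec, options);
}
```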
The previous constant opj_c13318 was mysteriously equal to 2/K, and in
the DWT, we had to divide K and opj_c13318 by 2... The issue was that the
band->stepsize computation in tcd.c didn't take into account the log2gain of
the band.
The effect of this change is expected to be mostly equivalent to the previous
situation, except for some differences in rounding. But it leads to a dramatic
reduction of the mean square error and peak error in the irreversible encoding
of issue141.tif!
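For context, a hedged sketch (not the library's actual tcd.c code) of the Annex E step size computation this fix aligns with, where the log2 gain of the band is folded into the nominal dynamic range:
```
#include <math.h>

/* Sketch: quantization step size per ISO/IEC 15444-1 Annex E,
 * delta_b = 2^(Rb - expn) * (1 + mant / 2^11), with
 * Rb = component precision + log2 gain of the band
 * (0 for LL, 1 for LH/HL, 2 for HH). */
static double band_stepsize(int prec, int log2_gain, int expn, int mant)
{
    const int Rb = prec + log2_gain;   /* nominal dynamic range of the band */
    return pow(2.0, Rb - expn) * (1.0 + mant / 2048.0);
}
```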
* -PLT switch added to opj_compress
* Add an opj_encoder_set_extra_options() function that
accepts a PLT=YES option, and could be expanded later
for other uses.
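A minimal sketch of where the call fits in a typical encode sequence, under the assumption that the extra options must be set after opj_setup_encoder() and before opj_start_compress():
```
#include <openjpeg.h>

/* Sketch (error handling mostly omitted): encode with PLT markers enabled. */
static OPJ_BOOL encode_with_plt(opj_cparameters_t *params, opj_image_t *image,
                                opj_stream_t *stream)
{
    const char * const options[] = { "PLT=YES", NULL };
    opj_codec_t *codec = opj_create_compress(OPJ_CODEC_JP2);
    OPJ_BOOL ok;

    if (codec == NULL) {
        return OPJ_FALSE;
    }
    ok = opj_setup_encoder(codec, params, image) &&
         opj_encoder_set_extra_options(codec, options) &&
         opj_start_compress(codec, image, stream) &&
         opj_encode(codec, stream) &&
         opj_end_compress(codec, stream);

    opj_destroy_codec(codec);
    return ok;
}
```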
-------
Testing with a Sentinel2 10m band, T36JTT_20160914T074612_B02.jp2,
coming from S2A_MSIL1C_20160914T074612_N0204_R135_T36JTT_20160914T081456.SAFE
Decompress it to TIFF:
```
opj_uncompress -i T36JTT_20160914T074612_B02.jp2 -o T36JTT_20160914T074612_B02.tif
```
Recompress it with similar parameters as original:
```
opj_compress -n 5 -c [256,256],[256,256],[256,256],[256,256],[256,256] -t 1024,1024 -PLT -i T36JTT_20160914T074612_B02.tif -o T36JTT_20160914T074612_B02_PLT.jp2
```
Dump the codestream details with the GDAL dump_jp2.py utility (https://github.com/OSGeo/gdal/blob/master/gdal/swig/python/samples/dump_jp2.py)
```
python dump_jp2.py T36JTT_20160914T074612_B02.jp2 > /tmp/dump_sentinel2_ori.txt
python dump_jp2.py T36JTT_20160914T074612_B02_PLT.jp2 > /tmp/dump_sentinel2_openjpeg_plt.txt
```
The diff between the two shows a very similar structure, and an identical number of packets in the PLT markers.
Now testing with Kakadu (KDU803_Demo_Apps_for_Linux-x86-64_200210)
Full file decompression:
```
kdu_expand -i T36JTT_20160914T074612_B02_PLT.jp2 -o tmp.tif
Consumed 121 tile-part(s) from a total of 121 tile(s).
Consumed 80,318,806 codestream bytes (excluding any file format) = 5.329697
bits/pel.
Processed using the multi-threaded environment, with
8 parallel threads of execution
```
Partial decompression (presumably using PLT markers):
```
kdu_expand -i T36JTT_20160914T074612_B02.jp2 -o tmp.pgm -region "{0.5,0.5},{0.01,0.01}"
kdu_expand -i T36JTT_20160914T074612_B02_PLT.jp2 -o tmp2.pgm -region "{0.5,0.5},{0.01,0.01}"
diff tmp.pgm tmp2.pgm && echo "same !"
```
-------
Funded by ESA for the S2-MPC project
This was changed some time ago (https://google.github.io/oss-fuzz/getting-started/new-project-guide/) but the build didn't fail as there is a fallback mechanism. The main advantage of the new approach is that for libFuzzer this produces more performant binaries (as `$LIB_FUZZING_ENGINE` expands into `-fsanitize=fuzzer`, which links libFuzzer from the compiler-rt, allowing better optimization tricks).
I'm also experimenting with dataflow (https://github.com/google/oss-fuzz/issues/1632) on your project, and the dataflow config doesn't have a fallback (as it's a new configuration), therefore I'm proposing a change to migrate from `-lFuzzingEngine` to `$LIB_FUZZING_ENGINE`.
Previously the multiple component transformation SGcod(C)
and wavelet transformation SPcod(H)/SPcoc(E) parameter
values were never checked, allowing for out of range values.
The lack of validation allowed the bit stream provided in
issue #1158 through. After this commit an error message
points to the marker segments' parameters as being out of
range.
input/nonregression/edf_c2_20.jp2 contains an SPcod(H) value
of 17, but according to Table A-20 of the specification only
values 0 and 1 are valid. input/nonregression/issue826.jp2
contains an SGcod(B) value of 2, but according to Table A-17
of the specification only values 0 and 1 are valid.
input/nonregression/oss-fuzz2785.jp2 contains an SGcod(B)
value of 32, but it is likewise limited to 0 or 1. These test
cases have been updated to consistently fail to parse the
headers since they contain out of bounds values.
This fixes issue #1210.
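A hedged sketch of the kind of range check this introduces (illustrative, not the actual opj_j2k.c code):
```
#include <stdio.h>

/* Illustrative only: reject out-of-range COD/COC parameters instead of
 * silently accepting them. The spec limits both values to 0 or 1. */
static int validate_cod_parameters(unsigned int mct, unsigned int transform)
{
    if (mct > 1) {
        fprintf(stderr, "Invalid multiple component transformation value %u "
                "(only 0 and 1 are allowed)\n", mct);
        return 0;
    }
    if (transform > 1) {
        fprintf(stderr, "Invalid wavelet transformation value %u "
                "(only 0 and 1 are allowed)\n", transform);
        return 0;
    }
    return 1;
}
```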
This adds an opj_set_decoded_components(opj_codec_t *p_codec,
OPJ_UINT32 numcomps, const OPJ_UINT32* comps_indices) function,
and equivalent "opj_decompress -c compno[,compno]*" option.
When specified, neither the MCT transform nor JP2 channel transformations
will be applied.
Tests added for various combinations of whole-image vs tile-based decoding,
full or reduced resolution, and use of decode area or not.
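A minimal sketch of the intended usage, following the signature given in this entry and assuming the call is made after opj_read_header() and before opj_decode():
```
#include <openjpeg.h>

/* Sketch: only decode components 0 and 2.
 * CLI equivalent per this entry: opj_decompress -c 0,2 ... */
static OPJ_BOOL select_components(opj_codec_t *codec)
{
    const OPJ_UINT32 indices[] = { 0, 2 };
    return opj_set_decoded_components(codec, 2, indices);
}
```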
* Only works for single-tiled images --> will error out cleanly in other cases,
as currently.
* Saves re-reading the codestream for the tile, and re-uses the code-blocks of
the previous decoding pass (see the sketch below).
* Future improvements might involve updating opj_decompress and the image
writing logic to use this strategy.
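A rough sketch of the strategy, under the assumption (from the bullets above) that opj_set_decode_area() and opj_decode() may be called repeatedly on the same single-tiled codec, re-using the code-blocks of the previous pass:
```
#include <openjpeg.h>

/* Sketch (error handling minimal): decode a single-tiled image strip by
 * strip, each pass only re-reading what is needed. */
static OPJ_BOOL decode_in_strips(opj_codec_t *codec, opj_stream_t *stream,
                                 opj_image_t *image,
                                 OPJ_INT32 width, OPJ_INT32 height,
                                 OPJ_INT32 strip_height)
{
    OPJ_INT32 y;
    for (y = 0; y < height; y += strip_height) {
        const OPJ_INT32 y1 =
            (y + strip_height < height) ? y + strip_height : height;
        if (!opj_set_decode_area(codec, image, 0, y, width, y1) ||
            !opj_decode(codec, stream, image)) {
            return OPJ_FALSE;
        }
        /* consume image->comps[c].data for this strip here */
    }
    return OPJ_TRUE;
}
```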
PR #975 introduced a check that rejects images that have different bit depth/sign
per component in the SIZ marker if the JP2 IHDR box has BPC != 255.
This didn't work properly if decoding a .j2k file since the new bit added in
opj_cp_t wasn't initialized to the right value.
For clarity, this new bit has also been renamed to allow_different_bit_depth_sign.
But looking closer at the code, it seems we were already tolerant to inconsistencies.
For example we parsed a JP2 BPCC box even if BPC != 255 (just a warning is emitted).
So failing hard in opj_j2k_read_siz() wouldn't be very consistent, and that
alone cannot protect against other issues, so just emit a warning if BPC != 255
and the SIZ marker contains different bit depth/sign per component.
Note: we could also check that the content of the JP2 BPCC box is consistent with that
of the SIZ marker.
There are situations where, given a tile size, at a resolution level,
there are sub-bands with x0==x1 or y0==y1, that consequently don't have any
valid codeblocks, but the other sub-bands may be non-empty.
Given that we recycle the memory from one tile to another, those
ghost codeblocks might be non-zero and thus candidates for packet inclusion.
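A hedged sketch of the guard this implies (illustrative; the band bounds follow OpenJPEG's x0/y0/x1/y1 convention, but this is not the actual fix):
```
#include <openjpeg.h>

/* Illustrative only: a band whose horizontal or vertical extent is zero has
 * no valid codeblocks, so its recycled codeblocks must never be considered
 * for packet inclusion. */
static int band_is_empty(OPJ_INT32 x0, OPJ_INT32 y0, OPJ_INT32 x1, OPJ_INT32 y1)
{
    return (x1 - x0 == 0) || (y1 - y0 == 0);
}
```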