aous72 / openjph
Open-source implementation of JPEG2000 Part-15 (also known as JPH or HTJ2K)
License: BSD 2-Clause "Simplified" License
I am seeing artifacts when encoding color images using irreversible compression. You can see this by opening the following page:
https://chafey.github.io/openjphjs/test/browser/index.html
Select the "Lena" image and adjust the slider bar under "Lossy Quantization Factor". As you increase the quantization factor, you start seeing green speckles on different parts of the image.
Hi Aous,
I hope all is well. I noticed that there are some assert statements in the block decoder and encoder.
Since these are compiled out in release builds, do you think it would be a good idea to convert
some of them to exceptions? As the library gets used more in the wild, people will be throwing all sorts
of bad or malicious code streams at it, so it might be good to error out in these cases.
Thanks,
Aaron
I've found a bug in both encode and decode in the library.
It seems that you are encoding K_max, not the missing
bit planes, in the tier 2 step of the encoder.
Hello,
Thanks for the awesome work! Can this library be compiled on ARM architectures? If so, could you please share a sample makefile?
Thanks!
Hi Aous,
while playing with the OpenJPH library, it appeared that
precinct::scratch
is static in ojph_codestream.cpp and remains dirty after codestream::read() is called. This causes crashes in a situation where a codestream is created/read/closed multiple times.
Any reason for having precinct::scratch as static and not cleaning it?
I noticed that test.j2c used in the web browser demo is 360,586 bytes long. This would be quite impressive if it is lossless, but I don't know how to look at the j2c header and there is no source image to reproduce. It would be nice if you could add the input/source image you used to feed the compressor and the command line arguments you used to generate test.j2c. Even better would be to check in the original lena512color.tiff with a script to generate test.j2c. In addition to this, it would be nice to have a few input/source images of different types with scripts to convert them (grayscale 16 bit is of specific interest to me).
Stress-testing OpenJPH on a 4-core machine, we see that CPU usage is not equal across cores: one core is at 100% while the others are at only about 30%.
htj2k-cjph.exe -i image_21447_24bit.yuv -o image_21447_24bit-jph.j2c -dims {1563,1558} -num_comps 3 -signed false -bit_depth 10 -downsamp {1,1},{2,1},{2,1} -block_size {64,64} -precincts {128,128},{256,256} -prog_order CPRL -reversible true
htj2k-djph.exe -i image_21447_24bit-jph.j2c -o xxx.ppm
ojph error 0x20000003 at ojph_expand.cpp:235: To save an image to ppm, all the components must have the downsampling ratio
dopen_j2c.exe -i image_21447_24bit-jph.j2c -o xxx.ppm
[INFO] Start to read j2k main header (0).
[INFO] Main header has been correctly decoded.
[INFO] No decoded area parameters, set the decoded area to the whole image
[INFO] Header of tile 1 / 1 has been read.
[INFO] Generated Outfile xxx.ppm
decode time: 136 ms
Trying to build for iOS and Android architectures, I get an error with iOS.
It seems something goes wrong when x86intrin.h is included:
=================================================================
cargo:warning=In file included from /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/12.0.5/include/x86intrin.h:15:
cargo:warning=In file included from /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/12.0.5/include/immintrin.h:15:
cargo:warning=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/12.0.5/include/mmintrin.h:33:25: error: too few arguments to function call, expected 2, have 0
cargo:warning= __builtin_ia32_emms();
cargo:warning= ^
exit status: 1
Detecting iOS SDK path for iphoneos
running: "clang++" "-O3" "-fPIC" "--target=aarch64-apple-ios" "-arch" "arm64" "-miphoneos-version-min=7.0" "-isysroot" "/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS14.5.sdk" "-fembed-bitcode" "-stdlib=libc++" "-I" "vendor/openjph/src/core" "-I" "vendor/openjph/src/core/codestream" "-I" "vendor/openjph/src/core/coding" "-I" "vendor/openjph/src/core/others" "-I" "vendor/openjph/src/core/transform" "-I" "vendor/openjph/src/core/common" "-std=c++17" "-DOJPH_DISABLE_INTEL_SIMD" "-mavx" "-mavx2" "-o" "/Users/dao/openjph_ffi/target/aarch64-apple-ios/release/build/openjphffi-21d36605edc8156d/out/vendor/openjph/src/core/transform/ojph_transform_avx2.o" "-c" "vendor/openjph/src/core/transform/ojph_transform_avx2.cpp"
cargo:warning=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/12.0.5/include/ia32intrin.h:305:10: error: use of undeclared identifier '__builtin_ia32_crc32hi'; did you mean '__builtin_arm_crc32h'?
cargo:warning= return __builtin_ia32_crc32hi(__C, __D);
cargo:warning= ^
cargo:warning=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/12.0.5/include/ia32intrin.h:305:10: note: '__builtin_arm_crc32h' declared here
exit status: 0
cargo:warning=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/12.0.5/include/mmintrin.h:50:19: error: use of undeclared identifier '__builtin_ia32_vec_init_v2si'
cargo:warning= return (__m64)__builtin_ia32_vec_init_v2si(__i, 0);
cargo:warning= ^
cargo:warning=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/12.0.5/include/ia32intrin.h:326:10: error: use of undeclared identifier '__builtin_ia32_crc32si'
According to the standard, this is what I get as the formula for irrev step size:
float param_qcd::irrev_get_delta(int resolution, int subband) const
{
  assert((resolution == 0 && subband == 0) ||
         (resolution <= num_decomps && subband > 0 && subband < 4));
  assert((Sqcd & 0x1F) == 2);
  float gain[] = { 0.0f, 1.0f, 1.0f, 2.0f };
  int idx = ojph_max(resolution - 1, 0) * 3 + subband;
  int exp = u16_SPqcd[idx] >> 11;
  return (float) ((1.0 + (u16_SPqcd[idx] & 0x7FF) / 2048.0)
                  * pow(2.0, (int32_t) (gain[subband] - exp)));
}
So, the only difference seems to be that the current code has
(1 + mantissa)/2048.0
rather than
1.0f + mantissa/2048.0
Or am I missing something?
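For reference, the Part-1 form of the irreversible step size, in the standard's notation with $\varepsilon_b$ the 5-bit exponent, $\mu_b$ the 11-bit mantissa, and $R_b$ the dynamic range of subband $b$, is:

```latex
\Delta_b = 2^{R_b - \varepsilon_b}\left(1 + \frac{\mu_b}{2^{11}}\right)
```

i.e. only the mantissa is divided by $2^{11} = 2048$, which matches 1.0f + mantissa/2048.0.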
Currently the header files have the library name prefixed in their name (e.g. ojph in ojph_arch.h) and are stored in the /src/core/common folder. Based on recommendations in modern cmake the prefix should be removed and a folder used to scope them. Consumers would then include them like this:
#include <ojph/arch.h>
And they would be moved from /src/core/common to /include/ojph
Related to this is to move the library source from /src/core to /src and the applications from /src/apps to /apps.
I would be happy to make these changes and submit a PR if you agree.
Hi Aous,
Hope you're keeping well. Using valgrind
, I found an issue in this code:
inline void rev_read(rev_struct *vlcp)
{
  //process 4 bytes at a time
  if (vlcp->bits > 32)
    return;
  ui32 val;
  val = *(ui32 *)vlcp->data;
  vlcp->data -= 4;
vlcp->size
may be less than 4, in which case the function is accessing out-of-bounds memory.
You can see this with this image if it is compressed and then decompressed.
One possible solution is to allocate a few more bytes for vlcp->data
to avoid this.
Best,
Aaron
Minor typo, but this method, in ojph_codestream.cpp
codestream::check_boardcast_validity()
should really be named
codestream::check_broadcast_validity()
Hi Aous,
I hope all is well.
I just cloned the repo on a new system and tried to run a build:
But, I get the following error on the t.zip file:
Archive: ./t.zip
End-of-central-directory signature not found. Either this file is not
a zipfile, or it constitutes one disk of a multi-part archive. In the
latter case the central directory and zipfile comment will be found on
the last disk(s) of this archive.
unzip: cannot find zipfile directory in one of ./t.zip or
./t.zip.zip, and cannot find ./t.zip.ZIP, period.
What do you make of this?
Thanks,
Aaron
Greetings,
I'm using chafey's openjphjs
library, with which I made a script to process a set of images and am now loading them for rendering, but the error appears to be at the level of this implementation.
I wrote a script to decode a set of images* with openjpegwasm
, encode with openjphjs
(with default parameters), and write the result to disk.
I then decode them with openjphjs
for rendering, which mostly works fine (and faster than the old JPXs), but a few are triggering errors/warnings such as
ojph error 0x00050042 at ojph_params.cpp:552: error in SIZ marker length
ojph error 0x00050041 at ojph_params.cpp:548: error reading SIZ marker
ojph warning 0x00050001 at ojph_params.cpp:559: Rsiz in SIZ has unimplemented fields
ojph error 0x0005004E at ojph_params.cpp:588: Csiz does not match the SIZ marker size
(that last error appearing after the warning)
ojph error 0x00050044 at ojph_params.cpp:557: Rsiz bit 14 not set (this is not a JPH file)
(perhaps it's true that none of them are proper JPH files, but most of them decode well enough for rock 'n' roll as far as I'm concerned, as per the readme: "Adding the .jph header is of little urgency, as the codestream contains all needed information to properly decode an image").
I'd be very happy to provide samples / reproduction. Perhaps it's best if I fork https://github.com/chafey/openjphjs with some samples that produce errors, as well as other similar files that do not (and in each case, the sources from which they are derived). It might also be worth me trying to use your own version of the WASM decoder and perhaps a different version of my encoding script also using your code more directly.
*16 bit unsigned grayscale JPX files originally produced using Pillow in Python, which I've previously been decoding with openjpegwasm
for rendering in browser.
A small nit-pick :) :
Also, the _t suffix is being used for class names here, e.g.:
class param_siz_t
{ ...
It would be useful to be able to decode a partial bitstream. For images encoded in resolution order (RLCP, RPCL), this would allow sub resolutions to be decoded from a partial bitstream enabling a client to display a thumbnail for an image without having to download the entire image. To accomplish this, a few things would be needed:
This functionality would not be available with non R* progression orders.
Related to this is enhancing the encoder to produce information about the size/location of each decomposition level in the encoded bitstream. That way a client could read the exact number of bytes needed to produce a specific decomposition level. This may also help 3 above if it is messy/difficult to make the decoder resilient to partial bitstream of arbitrary lengths
I am happy to do the work, but don't want to do it if it's not wanted or would be rejected.
While I'm not suggesting that using HT with quality layers is a great idea for mainstream use, it could be important in some special use cases and also when transcoding to/from J2K Part-1 codestreams which do commonly use quality layers. I think the HTJ2K standard allows the representation of quality layers using the placeholder passes feature.
Ideally, OpenJPH would be able to decode these files and if not, have a consistent response to files that are valid HTJ2K files yet not yet decodeable by OpenJPH.
ojph_expand compiled from current master branch has a different response to the attached input files with placeholder passes, depending on the input file
different errors and warnings are displayed
A malformed codeblock that has more than one coding pass, but zero length for 2nd and potential 3rd pass.
Error decoding a codeblock
We do not support more than 3 coding passes; This codeblocks has 5 passes.
error in parsing a tile header; missing msbs are larger or equal to Kmax. The most likely cause is a corruption in the bitstream.
Segmentation Fault
In the case of the pathological test with 65535 quality layers (ht_216p_65535_layers.j2c), a gray image is decoded without any error or warning output.
ojph_expand -i ht_216p_01_layers.j2c -o ht_216p_01_layers.j2c.tif
Elapsed time = 0.002735
ojph_expand -i ht_216p_02_layers.j2c -o ht_216p_02_layers.j2c.tif
ojph warning 0x00010001 at ojph_block_decoder_ssse3.cpp:1034: A malformed codeblock that has more than one coding pass, but zero length for 2nd and potential 3rd pass.
ojph error 0x000300A1 at ojph_codestream.cpp:4067: Error decoding a codeblock
ojph_expand -i ht_216p_07_layers.j2c -o ht_216p_07_layers.j2c.tif
ojph error 0x000300A1 at ojph_codestream.cpp:4067: Error decoding a codeblock
ojph_expand -i ht_216p_19_layers.j2c -o ht_216p_19_layers.j2c.tif
ojph warning 0x00010002 at ojph_block_decoder_ssse3.cpp:1042: We do not support more than 3 coding passes; This codeblocks has 5 passes.
ojph error 0x000300A1 at ojph_codestream.cpp:4067: Error decoding a codeblock
ojph_expand -i ht_216p_65535_layers.j2c -o ht_216p_65535_layers.j2c.tif
Elapsed time = 0.002206
ojph_expand -i ht_216p_01_layers.j2c -o ht_216p_01_layers.j2c.tif
Elapsed time = 0.002665
ojph_expand -i ht_2160p_01_layers.j2c -o ht_2160p_01_layers.j2c.tif
Elapsed time = 0.112171
ojph_expand -i ht_2160p_02_layers.j2c -o ht_2160p_02_layers.j2c.tif
ojph error 0x00030092 at ojph_codestream.cpp:1898: error in parsing a tile header; missing msbs are larger or equal to Kmax. The most likely cause is a corruption in the bitstream.
ojph_expand -i ht_2160p_10_layers.j2c -o ht_2160p_10_layers.j2c.tif
Segmentation fault
The input files were encoded with Kakadu v.8.2.1 using the following commands:
kdu_compress -i meridian_216p.tif -o ht_216p_01_layers.j2c Cmodes=HT -rate 1 Clayers=1 -no_info
kdu_compress -i meridian_216p.tif -o ht_216p_02_layers.j2c Cmodes=HT -rate 1 Clayers=2 -no_info
kdu_compress -i meridian_216p.tif -o ht_216p_07_layers.j2c Cmodes=HT -rate 1 Clayers=7 -no_info
kdu_compress -i meridian_216p.tif -o ht_216p_19_layers.j2c Cmodes=HT -rate 1 Clayers=19 -no_info
kdu_compress -i meridian_216p.tif -o ht_216p_65535_layers.j2c Cmodes=HT Clayers=65535 -no_info
kdu_compress -i 3840x2160_10bit_444_BT709_SDR.tif -o ht_2160p_01_layers.j2c Cmodes=HT -rate 1 Clayers=1 -no_info
kdu_compress -i 3840x2160_10bit_444_BT709_SDR.tif -o ht_2160p_02_layers.j2c Cmodes=HT -rate 1 Clayers=2 -no_info
kdu_compress -i 3840x2160_10bit_444_BT709_SDR.tif -o ht_2160p_10_layers.j2c Cmodes=HT -rate 1 Clayers=10 -no_info
The HTJ2K J2C files generated by these commands are contained in the attached zip files:
ht_2160p_01_layers.zip
ht_2160p_02_layers.zip
ht_2160p_10_layers.zip
ht_216p_01_layers.zip
ht_216p_02_layers.zip
ht_216p_07_layers.zip
ht_216p_19_layers.zip
ht_216p_65535_layers.zip
Hi Aous,
Hope all is well, and thanks for fixing those warnings. My compiler is much happier.
However, there are still a number of warnings that should probably be looked at:
You can see them here
I have found a big benefit from keeping as many variables as possible unsigned, to avoid
sign-conversion issues.
Best,
Aaron
Hi Aous,
Hope all is well. I am really interested in getting a better understanding
of how the block coder and decoder work. I have a copy of the Part 15 draft, but if
you have a little time to add comments here and there to the block coder, it would help
me parse it.
Thanks!
Aaron
While decoding color images, I am able to decode each component R, G, B for a line separately by using pull() when set_planar(false). This is what I would expect when set_planar(true) since planar images group components together (RRRGGGBBB) while non planar images group by sample/pixel (RGBRGBRGB). I tried decoding color images with set_planar(true), but I am not sure what it returns. I haven't dug into the code yet, but wanted to check and see if I am understanding things right and if there are any known issues first.
A new JPEG 2000 library has appeared on the official pages. Too bad there are no Part-16 and Part-17 components included. ;)
https://gitlab.com/wg1/htj2k-rs
What does this message mean? How can I avoid it when converting a YUV file to JPH and then to PPM?
ojph error 0x20000003 at ojph_expand.cpp:145: To save an image to ppm, all the components must have the downsampling ratio
The source code for this project currently deals with memory using C-style language features (e.g. malloc, free, pointers, arrays, etc.), which can be the source of many bugs including memory corruption, memory leaks, and overflows (which can be exploited for malicious purposes). I would like to propose that the project move toward the C++ standard library to improve memory safety. Of specific interest is to replace all uses of arrays and raw pointers with std::vector wrapped in std::shared_ptr and std::unique_ptr. Since this project supports multiple toolchains, we can look at standardizing on boost rather than having to deal with variances found in the different standard C++ libraries of the various toolchains.
Hello Aous,
Hope all is well.
Do you have plans to add rate control to the encoder?
I've been looking into this : as PCRD won't work on HTJ2K,
I guess it's possible to simply use quantization for lossy rate control,
but it won't be precise. Interested to know what approach you think
would work best.
Best,
Aaron
Should update the wiki :)
From param_qcd:
base_delta = 1.0f / (1 << (siz.get_bit_depth(0) + siz.is_signed(0)));
Did you intend to add an int and a boolean?
The ITU site [1] seems to no longer allow easy purchase and download to the general public, only to TIES users. I suggest adding or changing the link to the ISO site [2] which allows easy purchase and download to the general public.
[1] https://www.itu.int/rec/T-REC-T.814/en
[2] https://www.iso.org/standard/76621.html
I replicated the README.md example line for YUV input file, and get an error:
./ojph_compress -i input_file.yuv -o output_file.j2c -num_decomps 5 -reversible true -dims {3840,2160} -num_comps 3 -signed false -bit_depth 10 -downsamp {1,1},{2,2}
size must start with {
This is on Ubuntu 18.04.2 LTS.
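A likely cause (my assumption, since the error message matches exactly): bash brace expansion rewrites an unquoted {3840,2160} into the two words 3840 2160 before ojph_compress ever sees it, so -dims receives just "3840", which indeed does not start with {. Quoting every braced argument should fix it:

```shell
# Unquoted, bash expands the braces away:
#   {3840,2160}  ->  3840 2160
# Quote the braced arguments so they reach ojph_compress literally:
./ojph_compress -i input_file.yuv -o output_file.j2c -num_decomps 5 \
  -reversible true -dims "{3840,2160}" -num_comps 3 -signed false \
  -bit_depth 10 -downsamp "{1,1},{2,2}"
```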
Hi Aous,
Hope all is well.
Here is a file that exhibits some pathologies:
When decoding, at line 1202 of the block decoder, we can have m_n == 34.
Decoding eventually fails, but perhaps there is a way of avoiding m_n > 32?
This file is truncated, but if you ignore the exception thrown, you will see this problem.
clusterfuzz-testcase-minimized-grk_decompress_fuzzer-5751619555819520.zip
Interested to know your thoughts on how to deal with this code stream.
Thanks,
Aaron
Hi Aous,
Does it make sense to reject code blocks with scup equal to 0?
OpenJPH/src/core/coding/ojph_block_decoder.cpp
Line 1022 in 78c8a80
i.e. adding:
if (scup == 0)
  return false;
scup == 0 seems to indicate a corrupt code block.
Thanks,
Aaron
I just read the HTJ2K white paper which references a possible javascript decoder based on this work. Can you provide any more information about this?
~/OpenJPH/bin$ ./ojph_expand -i test.j2c -o test_out.ppm
Elapsed time = 0.112212
~/OpenJPH/bin$ ffprobe test_out.ppm
ffprobe version 3.4.6-0ubuntu0.18.04.1 Copyright (c) 2007-2019 the FFmpeg developers
built with gcc 7 (Ubuntu 7.3.0-16ubuntu3)
configuration: --prefix=/usr --extra-version=0ubuntu0.18.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
libavutil 55. 78.100 / 55. 78.100
libavcodec 57.107.100 / 57.107.100
libavformat 57. 83.100 / 57. 83.100
libavdevice 57. 10.100 / 57. 10.100
libavfilter 6.107.100 / 6.107.100
libavresample 3. 7. 0 / 3. 7. 0
libswscale 4. 8.100 / 4. 8.100
libswresample 2. 9.100 / 2. 9.100
libpostproc 54. 7.100 / 54. 7.100
test_out.ppm: Invalid data found when processing input
Type should go in the first two bytes of the box data:
0 for binary comment
1 for IS 8859-15:1999 latin comment
So, you want to set type to 1.
I would like to make sure that the code I contribute is documented and am unsure what documentation guidelines are being followed. I have used doxygen style in the past and would prefer that if there is no preference. If there is a preference, can we add a link to the style in the documentation somewhere?
I think it would be handy for the documentation to list how standard color subsamplings (4:2:2, 4:2:0, etc.) can be expressed with -downsamp.
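For what it's worth, based on the examples elsewhere in this tracker ({1,1},{2,1},{2,1} and {1,1},{2,2}), the usual mappings would presumably be as follows; this is my reading, worth double-checking against the docs:

```shell
# 4:4:4  -- no chroma subsampling
-downsamp "{1,1}"
# 4:2:2  -- chroma halved horizontally
-downsamp "{1,1},{2,1},{2,1}"
# 4:2:0  -- chroma halved in both directions
-downsamp "{1,1},{2,2},{2,2}"
```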
This line:
Pcap = 0x00020000; //for jph, Pcap^15 must be set
Here, you are setting the 17th bit of Pcap.
Also, is the Pcap field not 16 bits?
Hi Aous,
Another question: in the rev_init decoder method, vlcp->data is decremented by (tnum + 3) bytes, but vlcp->size is not decremented by the same amount. Is it then possible to get an underflow?
Thanks,
Aaron
It would be nice to reduce memory consumption by discarding HTJ2K bits once they are decoded. Right now partial decoding requires keeping the original bits around which consumes additional memory. For example, suppose you have a 512x512 image encoded with 2 resolution levels (128x128 base, 256x256 coefficients, 512x512 coefficients). If I want to progressive download and display each resolution level, it would go like this:
Memory use could be reduced if the decoder would allow taking in the decoded bits instead of the corresponding encoded bit stream.
I get a seg fault in Ubuntu when using ojph_expand compile from current master branch with the HTJ2K J2C file in the attached zip
root@79f3152c1012:/usr/src/openjph/build# ../bin/ojph_expand -i ht_2160p_10_layers.j2c -o ht_2160p_10_layers.j2c.tif
Segmentation fault
I created this file with Kakadu v.8.2.1 using the following command:
kdu_compress -i 3840x2160_10bit_444_BT709_SDR.tif -o ht_2160p_10_layers.j2c Cmodes=HT -rate 1 Clayers=10 -no_info
I run the OpenJPH lib on Android; the code works fine for "x86", "x86_64", and "arm64-v8a". But it crashes when running on a Samsung J7 Prime (armeabi-v7a).
09-29 15:46:34.192 10804 28443 28443 F libc : Fatal signal 7 (SIGBUS), code 1, fault addr 0xc70007ef in tid 28443 (com.openjph), pid 28443 (com.openjph)
09-29 15:46:34.276 10804 28489 28489 F DEBUG : *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
09-29 15:46:34.276 10804 28489 28489 F DEBUG : Build fingerprint: 'samsung/on7xeltedd/on7xelte:8.1.0/M1AJQ/G610FDXS1CTE1:user/release-keys'
09-29 15:46:34.276 10804 28489 28489 F DEBUG : Revision: '3'
09-29 15:46:34.276 10804 28489 28489 F DEBUG : ABI: 'arm'
09-29 15:46:34.276 10804 28489 28489 F DEBUG : pid: 28443, tid: 28443, name: com.openjph >>> com.openjph <<<
09-29 15:46:34.276 10804 28489 28489 F DEBUG : signal 7 (SIGBUS), code 1 (BUS_ADRALN), fault addr 0xc70007ef
09-29 15:46:34.276 10804 28489 28489 F DEBUG : r0 c70007eb r1 c70007fb r2 c72562ac r3 00000001
09-29 15:46:34.276 10804 28489 28489 F DEBUG : r4 c80566d0 r5 00000000 r6 c7000000 r7 ffb56bb8
09-29 15:46:34.276 10804 28489 28489 F DEBUG : r8 c72562ac r9 000004c4 sl 000004d4 fp 0006b729
09-29 15:46:34.277 10804 28489 28489 F DEBUG : ip c78949a4 sp ffb56ba0 lr c787e529 pc c788319a cpsr 60070030
09-29 15:46:34.279 10804 28489 28489 F DEBUG :
09-29 15:46:34.279 10804 28489 28489 F DEBUG : backtrace:
09-29 15:46:34.279 10804 28489 28489 F DEBUG : #00 pc 0003119a /data/app/com.openjph-xuFG3APog506_hRWqpzozA==/lib/arm/libojph.so (ojph::mem_elastic_allocator::get_buffer(int, ojph::coded_lists*&)+131)
09-29 15:46:34.279 10804 28489 28489 F DEBUG : #01 pc 0002c525 /data/app/com.openjph-xuFG3APog506_hRWqpzozA==/lib/arm/libojph.so (ojph::local::precinct::parse(int, int*, ojph::mem_elastic_allocator*, unsigned int&, ojph::infile_base*, bool)+1392)
09-29 15:46:34.279 10804 28489 28489 F DEBUG : #02 pc 0002afc5 /data/app/com.openjph-xuFG3APog506_hRWqpzozA==/lib/arm/libojph.so (ojph::local::resolution::parse_one_precinct(unsigned int&, ojph::infile_base*)+54)
09-29 15:46:34.279 10804 28489 28489 F DEBUG : #03 pc 0002915f /data/app/com.openjph-xuFG3APog506_hRWqpzozA==/lib/arm/libojph.so (ojph::local::tile::parse_tile_header(ojph::local::param_sot const&, ojph::infile_base*, unsigned long long const&)+618)
09-29 15:46:34.279 10804 28489 28489 F DEBUG : #04 pc 00026823 /data/app/com.openjph-xuFG3APog506_hRWqpzozA==/lib/arm/libojph.so (ojph::local::codestream::read()+918)
09-29 15:46:34.279 10804 28489 28489 F DEBUG : #05 pc 000301e5 /data/app/com.openjph-xuFG3APog506_hRWqpzozA==/lib/arm/libojph.so (ojph::htj2kdecompress::decode(unsigned char const*, unsigned int)+136)
09-29 15:46:34.279 10804 28489 28489 F DEBUG : #06 pc 000313c5 /data/app/com.openjph-xuFG3APog506_hRWqpzozA==/lib/arm/libojph.so (Java_com_ht2k_openjph_HT2KDecoder_decodeHT2KByteArray+64)
09-29 15:46:34.279 10804 28489 28489 F DEBUG : #07 pc 00011075 /data/app/com.openjph-xuFG3APog506_hRWqpzozA==/oat/arm/base.odex (offset 0x11000)
I also set OJPH_DISABLE_INTEL_SIMD:
set(CMAKE_CXX_FLAGS "-std=c++11 -O3 -fexceptions -DOJPH_DISABLE_INTEL_SIMD")
in my build configuration.
This is MSVC specific, so it could be protected by an #ifdef:
#ifdef _MSC_VER
// ... MSVC-specific code here ...
#endif
Hi Aous,
Hope all is well. I am curious: are the sqrt_energy_gains arrays related to the wavelet norms for the different sub-bands? I.e., can the HH sub-band of the first decomposition be calculated from the energy gains?
Thanks,
Aaron
I wrote a Rust interface to the C++ OpenJPH library.
It compiles and runs fine on macOS (debug and release).
On Linux it also runs fine in debug builds, but the release build crashes with "Illegal instruction".
Tracing it down, the crash is on the line __m256 m = _mm256_set1_ps(mul); in "ojph_colour_avx.cpp".
Not sure why this is the case.
Decoding a Part-15 J2C file containing a CPF marker segment (Corresponding Profile) issues a "warning"
ojph warning 0x00030001 at ojph_codestream.cpp:430: CPF is not supported yet
rather than a less severe "info" message. This is not a useful warning: to users unfamiliar with CPF, it makes it sound like CPF is required but not supported, when in fact CPF is pretty much useless to an HT decoder. Perhaps this should be changed to an info message like
ojph info 0x00030001 at ojph_codestream.cpp:430: Skipping unknown CPF marker segment
If there is no info class of message, this whole message could be removed, as it's not really important to a decoder. I think CPF could be more important to a Part-15 to Part-1 transcoder.
Here is a dropbox link to a Lossy Part-15 J2C file containing a CPF marker
There are a number of static variables, such as precinct::scratch, that prevent reuse of the codestream object. I think it would be an improvement to eliminate these statics, if possible.
This will also make the code safe for batch encode/decode of multiple images.
Hi Aous,
it seems like it's still possible to find unexpected conditions in the block coder.
0658dfcd793f1879b56d8959be7a175e85fe8e70.zip
Above is a truncated file, but it does trigger the exception:
ojph_decode_codeblock : line 1789 assert(dp[stride] == 0);
Thanks!
Aaron
Hi Aous,
Do you know when part 15 conformance files will be made available? I saw this mentioned, but I can't seem to find any files yet.
Thanks!
Aaron