intel / vaapi-fits
License: BSD 3-Clause "New" or "Revised" License
In transcode tests, we can perform transcodes at various resolutions. However, when verifying the transcoded files, we unconditionally decode them back to the source resolution before comparing against the source file. This does not prove the file was actually transcoded to the requested resolution. Thus, add an extra check in transcode tests to ensure the transcoded file has the requested resolution. For gstreamer tests, we can use gst-discoverer-1.0 -v to check the resolution of the transcoded file; for ffmpeg tests, we can use ffprobe.
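As a minimal sketch of the ffprobe approach, assuming ffprobe is on PATH (the helper names here are hypothetical, not existing vaapi-fits functions):

```python
# Sketch: verify a transcoded file's resolution with ffprobe.
# probe_resolution/parse_resolution are illustrative helpers only.
import subprocess

def parse_resolution(output):
    # ffprobe with "-of csv=p=0" prints "width,height"
    width, height = output.strip().split(",")
    return int(width), int(height)

def probe_resolution(filename):
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=width,height", "-of", "csv=p=0",
         filename],
        capture_output=True, text=True, check=True)
    return parse_resolution(result.stdout)

# A transcode test could then assert:
#   assert probe_resolution(transcoded) == (expected_w, expected_h)
```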
Currently, the md5 checksum metric is calculated over the whole file. gstreamer decoding does not have the capability to process only N frames from an input file; it always decodes the entire input, which could have N+M frames. This makes it hard to compare or share checksum results between ffmpeg and gstreamer.
Rewrite the md5 checksum metric to generate the checksum from only the first N frames. This would also give the added benefit of detecting results that erroneously produce fewer than N frames.
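A minimal sketch of an N-frame md5, assuming raw I420 output (the helper name is hypothetical):

```python
# Sketch: compute an md5 over exactly N frames of a raw I420 file.
# An I420 frame is width*height*3/2 bytes (Y + U/4 + V/4).
import hashlib

def md5_n_frames(filename, width, height, nframes):
    framesize = width * height * 3 // 2
    m = hashlib.md5()
    with open(filename, "rb") as f:
        for i in range(1, nframes + 1):
            data = f.read(framesize)
            if len(data) < framesize:
                # fewer than N frames produced -> fail loudly
                raise ValueError(
                    "short read at frame {}/{}".format(i, nframes))
            m.update(data)
    return m.hexdigest()
```

Extra trailing frames (the +M case) are simply ignored, so ffmpeg and gstreamer results become comparable.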
Port vaapi-fits to Python 3.
The deinterlace tests should assume all input files are interlaced content. For gstreamer, we should set the interlaced and top-field-first properties on rawvideoparse.
We will need to investigate how to get ffmpeg to treat raw yuv as interlaced; it is not clear how this is done.
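For the gstreamer side, the pipeline fragment could be built along these lines (interlaced and top-field-first are real rawvideoparse properties; the command-building helper itself is a hypothetical sketch, not the actual vaapi-fits code):

```python
# Sketch: rawvideoparse fragment treating raw yuv as interlaced tff content.
def gst_raw_parse(width, height, fps, fmt="i420"):
    return (
        "rawvideoparse format={fmt} width={w} height={h} framerate={fps} "
        "interlaced=true top-field-first=true"
    ).format(fmt=fmt, w=width, h=height, fps=fps)

print(gst_raw_parse(1920, 1080, 30))
```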
@uartie when I add a new test feature, where should I add it: at the class level or the function level?
For example:

class cbr_new_feature():
    ...
    def test():
        ...

-------------- or --------------

class cbr():
    ...
    def test_new_feature():
        ...

Thanks
The following transcoding feature validations can be added as enhancements to vaapi-fits.
$ ./vaapi-fits list-config
Extracting assets...
Traceback (most recent call last):
File "./vaapi-fits", line 20, in <module>
sys.exit(main())
File "./vaapi-fits", line 17, in main
return main_entry_point()
File "/home/djdeath/.local/lib/python2.7/site-packages/slash/frontend/main.py", line 71, in main_entry_point
sys.exit(main())
File "/home/djdeath/.local/lib/python2.7/site-packages/slash/frontend/main.py", line 47, in main
returned = func(args.argv)
File "/home/djdeath/.local/lib/python2.7/site-packages/slash/frontend/list_config.py", line 29, in list_config
slash.site.load()
File "/home/djdeath/.local/lib/python2.7/site-packages/slash/site.py", line 17, in load
return _load_defaults(working_directory=working_directory)
File "/home/djdeath/.local/lib/python2.7/site-packages/slash/site.py", line 22, in _load_defaults
_load_local_slashrc(working_directory=working_directory)
File "/home/djdeath/.local/lib/python2.7/site-packages/slash/site.py", line 37, in _load_local_slashrc
_load_file_if_exists(os.path.abspath(path))
File "/home/djdeath/.local/lib/python2.7/site-packages/slash/site.py", line 44, in _load_file_if_exists
_load_filename(path)
File "/home/djdeath/.local/lib/python2.7/site-packages/slash/site.py", line 67, in _load_filename
_load_source(f.read(), filename)
File "/home/djdeath/.local/lib/python2.7/site-packages/slash/site.py", line 76, in _load_source
exec(code, {"__file__" : filename}) # pylint: disable=W0122
File "/home/djdeath/src/mesa-src/vaapi-fits/.slashrc", line 276, in <module>
execfile(config)
File "/home/djdeath/src/mesa-src/vaapi-fits/config/default", line 14, in <module>
with tarfile.open("{}.tbz2".format(assets), "r:*") as tf:
File "/usr/lib/python2.7/tarfile.py", line 1680, in open
raise ReadError("file could not be opened successfully")
tarfile.ReadError: file could not be opened successfully
According to https://docs.microsoft.com/en-us/windows/win32/medfound/10-bit-and-16-bit-yuv-video-formats#p216-and-p210, P210 is 2 planes (Y plane + interleaved UV plane), and each sample is stored as a WORD. There is no definition for P410, but one would assume it follows the same pattern.
Unfortunately, P210 and P410 in vaapi-fits are implemented incorrectly as 3 planes (Y plane + U plane + V plane). And, AFAICT, ffmpeg and gstreamer do not have appropriate formats to handle the official P210 and unofficial P410 formats.
Fortunately, #164 and #163 will add support for Y210 and Y410 which have better support across the media SW stacks (drivers & middleware). Once these patches are approved and merged, we should remove P210 and P410 code and tests in vaapi-fits.
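For reference, the expected plane sizes for a correctly laid-out 2-plane P210 frame can be computed as below (a sketch following the Microsoft layout cited above; the helper name is illustrative):

```python
# Sketch: buffer sizes for 2-plane P210 (4:2:2, each sample stored as a WORD).
def p210_plane_sizes(width, height):
    bytes_per_sample = 2                      # 16-bit container per sample
    y_plane = width * height * bytes_per_sample
    # 4:2:2 -> width/2 U and width/2 V samples per row, interleaved into
    # one UV plane, i.e. width samples per row at full height
    uv_plane = width * height * bytes_per_sample
    return y_plane, uv_plane

y, uv = p210_plane_sizes(1920, 1080)
print(y + uv)  # total frame size in bytes
```

A 3-plane implementation produces the same total byte count but a different layout, which is why it cannot interoperate with middleware expecting the official format.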
It would have been nice to know that this software does not support my system (even though it's old) before wasting my time... The reference to the caps directory is far too hidden to help in that circumstance.
Command:
LIBVA_DRIVER_NAME=iHD vaapi-fits run test/gst-msdk/vpp/rotation.py --platform KBL -k degrees=90 -vv
Error:
Traceback (most recent call last):
File "./vaapi-fits", line 11, in <module>
from slash.frontend.main import main_entry_point
File "/usr/local/lib/python2.7/dist-packages/slash/__init__.py", line 6, in <module>
from .core.session import Session
File "/usr/local/lib/python2.7/dist-packages/slash/core/session.py", line 16, in <module>
from .fixtures.fixture_store import FixtureStore
File "/usr/local/lib/python2.7/dist-packages/slash/core/fixtures/fixture_store.py", line 6, in <module>
from orderedset import OrderedSet
File "/usr/local/lib/python2.7/dist-packages/orderedset/__init__.py", line 5, in <module>
from ._orderedset import OrderedSet
ImportError: /usr/local/lib/python2.7/dist-packages/orderedset/_orderedset.so: undefined symbol: PyFPE_jbuf
Add TGL support for legacy features as on ICL.
Compared to ICL, vp8 decode and encode support was removed on TGL in the iHD driver.
We should support SSIM/PSNR calculation for multi-resolution streams.
gst-vaapi and ffmpeg-qsv have vpp overlay plugins now (vaapioverlay and overlay_qsv, respectively). Implement tests for these features.
Command-line examples
gst-vaapi:
gst-launch-1.0 -vf vaapioverlay name=overlay sink_1::xpos=320 \
sink_2::ypos=240 sink_3::xpos=320 sink_3::ypos=240 ! vaapisink \
filesrc location=input.h264 ! h264parse ! vaapih264dec \
! vaapipostproc ! "video/x-raw(memory:VASurface),width=320,height=240" ! overlay. \
filesrc location=input.h264 ! h264parse ! vaapih264dec \
! vaapipostproc ! "video/x-raw(memory:VASurface),width=320,height=240" ! overlay. \
filesrc location=input.h264 ! h264parse ! vaapih264dec \
! vaapipostproc ! "video/x-raw(memory:VASurface),width=320,height=240" ! overlay. \
filesrc location=input.h264 ! h264parse ! vaapih264dec \
! vaapipostproc ! "video/x-raw(memory:VASurface),width=320,height=240" ! overlay.
ffmpeg-qsv:
ffmpeg -hwaccel qsv -c:v h264_qsv -i input.h264 \
-hwaccel qsv -c:v h264_qsv -i input.h264 \
-hwaccel qsv -c:v h264_qsv -i input.h264 \
-hwaccel qsv -c:v h264_qsv -i input.h264 \
-filter_complex 'nullsrc=size=640x480,format=nv12,hwupload=extra_hw_frames=120[base];[base][0:v]overlay_qsv=x=0:y=0:w=320:h=240[tmp0];[tmp0][1:v]overlay_qsv=x=320:y=0:w=320:h=240[tmp1];[tmp1][2:v]overlay_qsv=x=0:y=240:w=320:h=240[tmp2];[tmp2][3:v]overlay_qsv=x=320:y=240:w=320:h=240' \
-c:v h264_qsv -vframes 500 -y output.h264
ffmpeg-vaapi overlay has been implemented, too... but, as of writing, the patches are still under review.
Steps:
1. ffmpeg -init_hw_device qsv=qsv:hw -hwaccel qsv -filter_hw_device qsv -v verbose -f rawvideo -pix_fmt yuv420p -s:v 7680x4320 -i /opt/media/src/assets/otc-media/yuv/Peru_8K_HDR_7680x4320_100frames_I420.yuv -vf 'format=nv12,hwupload=extra_hw_frames=120' -an -c:v hevc_qsv -profile:v main -g 30 -q 28 -preset 1 -slices 1 -bf 2 -low_power 0 -vframes 50 -y test_8k.h265
2. ffmpeg -init_hw_device qsv=qsv:hw -hwaccel qsv -filter_hw_device qsv -v verbose -c:v hevc_qsv -i test_8k.h265 -vf 'hwdownload,format=nv12' -pix_fmt yuv420p -f rawvideo -vsync passthrough -vframes 50 -y test_8k.yuv
3. check_psnr: get_media().baseline.check_psnr
Error Log info:
[2019-11-19 06:56:26.810025] INFO: slash: Total: 50 packets (5311738 bytes) demuxed
[2019-11-19 06:56:26.810155] INFO: slash: Output file #0 (/opt/media/test-results/vaapi-fits/encode/hevc/2c9a5ff8-0a95-11ea-961b-0242ac120002_0/_0.test.ffmpeg-qsv.encode.hevc/cqp/Peru_8K_HDR_7680x4320_100frames-cqp-main-30-28-1-1-2-0-7680x4320-I420.yuv):
[2019-11-19 06:56:26.810249] INFO: slash: Output stream #0:0 (video): 34 frames encoded; 34 packets muxed (1692057600 bytes);
[2019-11-19 06:56:26.810320] INFO: slash: Total: 34 packets (1692057600 bytes) muxed
[2019-11-19 06:56:26.847195] INFO: slash: [AVIOContext @ 0x55d30063dc00] Statistics: 0 seeks, 6455 writeouts
[2019-11-19 06:56:26.847378] INFO: slash: [AVIOContext @ 0x55d30063f080] Statistics: 5311738 bytes read, 0 seeks
[2019-11-19 06:56:26.885826] NOTICE: slash: DETAIL: time(ffmpeg:2) = 3.3977s
[2019-11-19 06:56:28.373194] NOTICE: slash: DETAIL: time(psnr:1) = 1.4866s
[2019-11-19 06:56:28.480388] TRACE: slash.core.error: Error added: ValueError: ('cannot reshape array of size 0 into shape (4320,7680)', 'frame 34/50')
/opt/media/src/vaapi-fits-full/full, line 41:
sys.exit(main())
/opt/media/src/vaapi-fits-full/full, line 38:
return main_entry_point()
/usr/local/lib/python2.7/dist-packages/slash/core/fixtures/parameters.py, line 34:
return func(*args, **kwargs)
/opt/media/src/vaapi-fits/test/ffmpeg-qsv/encode/hevc.py, line 48:
self.encode()
/opt/media/src/vaapi-fits/test/ffmpeg-qsv/encode/encoder.py, line 163:
self.check_metrics()
/opt/media/src/vaapi-fits/test/ffmpeg-qsv/encode/encoder.py, line 187:
self.frames, self.format),
/opt/media/src/vaapi-fits/lib/common.py, line 33:
ret = function(*args, **kwargs)
/opt/media/src/vaapi-fits/lib/metrics.py, line 149:
nframes, __compare_psnr)
/opt/media/src/vaapi-fits/lib/metrics.py, line 119:
y2, u2, v2 = file2.next_frame()
/opt/media/src/vaapi-fits/lib/metrics.py, line 64:
return self.reader(self.fd, self.width, self.height)
/opt/media/src/vaapi-fits/lib/framereader.py, line 46:
y = numpy.fromfile(fd, dtype=numpy.uint8, count=size).reshape((height, width))
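The root cause above is that only 34 of the expected 50 frames were encoded, so the reader hits EOF and numpy fails with an opaque reshape error. A truncation-aware reader could fail with a clearer message instead (a stdlib-only sketch; the names are illustrative, not the actual vaapi-fits framereader API):

```python
# Sketch: raise a clear EOFError on truncated input instead of
# "cannot reshape array of size 0 into shape (...)".
def read_plane(fd, size, frame, nframes):
    data = fd.read(size)
    if len(data) != size:
        raise EOFError(
            "truncated input: frame {}/{} (got {} of {} bytes)".format(
                frame, nframes, len(data), size))
    return data

def read_i420_frames(filename, width, height, nframes):
    frames = []
    with open(filename, "rb") as fd:
        for n in range(1, nframes + 1):
            y = read_plane(fd, width * height, n, nframes)
            u = read_plane(fd, width * height // 4, n, nframes)
            v = read_plane(fd, width * height // 4, n, nframes)
            frames.append((y, u, v))
    return frames
```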
Implement testing support for QVBR and ICQ BRC methods for AVC/HEVC encode.
gst-vaapi and gst-msdk support these options. Need to check ffmpeg, too.
In the gst-vaapi vaapipostproc element, skin-tone-enhancement-level is exposed as a property. We should implement tests for it.
Other middleware plugins do not appear to implement this feature, so only gst-vaapi needs to be considered at this moment.
We should support MD5 calculation for multi-resolution streams.
We should enable scripts to test color space conversion (csc) on gst/ffmpeg.
Add EHL platform support.
Almost all baseline entries have the same value across test contexts, for example:
"test/ffmpeg-qsv/decode/avc.py:default.test(case=1080p)": {
"md5": "2873526ec49defd754c7853f0658799d"
},
"test/ffmpeg-vaapi/decode/avc.py:default.test(case=1080p)": {
"md5": "2873526ec49defd754c7853f0658799d"
},
"test/gst-vaapi/decode/avc.py:default.test(case=1080p)": {
"md5": "2873526ec49defd754c7853f0658799d"
},
"test/gst-msdk/decode/avc.py:default.test(case=1080p)": {
"md5": "2873526ec49defd754c7853f0658799d"
},
So storing it as Python-format data would help with maintenance: a new test context could automatically pick up the same value.
We can use importlib.import_module() and getattr() to fetch the target Python class/data.
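The importlib idea above could be sketched like this (the module name, file layout, and variable names are hypothetical, for illustration only):

```python
# Sketch: shared baselines stored once as Python data and loaded via
# importlib, instead of duplicating the md5 per test context.
import importlib
import os
import sys
import tempfile
import textwrap

# Stand-in for a checked-in baselines.py module
baseline_src = textwrap.dedent("""
    # baselines.py -- one shared value, reused by every test context
    avc_1080p = {"md5": "2873526ec49defd754c7853f0658799d"}
""")

tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "baselines.py"), "w") as f:
    f.write(baseline_src)
sys.path.insert(0, tmpdir)
importlib.invalidate_caches()

mod = importlib.import_module("baselines")
baseline = getattr(mod, "avc_1080p")
print(baseline["md5"])
```

Each decode test context (ffmpeg-qsv, ffmpeg-vaapi, gst-vaapi, gst-msdk) would then reference the same object instead of carrying its own copy.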
Caps: 8K HEVC encode test cases fail on Gen10 and earlier platforms. Take one 8K HEVC encode case on gst-vaapi as an example:
FAIL
cbr.test(bframes=0,bitrate=5000,case=Peru_8K_HDR_7680x4320_100frames,fps=30,gop=30,profile=main,slices=1) ( full.test.gst-vaapi.encode . hevc )
464 mils
[2019-11-21 08:05:17.443477] INFO: slash: CALL: gst-launch-1.0 -vf filesrc location=/opt/media/src/assets/otc-media/yuv/Peru_8K_HDR_7680x4320_100frames_I420.yuv num-buffers=50 ! rawvideoparse format=i420 width=7680 height=4320 framerate=30 ! videoconvert ! video/x-raw,format=NV12 ! vaapih265enc rate-control=cbr keyframe-period=30 num-slices=1 max-bframes=0 bitrate=5000 tune=none ! video/x-h265,profile=main ! h265parse ! filesink location=/opt/media/test-results/vaapi-fits/encode/hevc/38dc9244-0c35-11ea-9928-0242ac120002_0/_0.test.gst-vaapi.encode.hevc/cbr/Peru_8K_HDR_7680x4320_100frames-cbr-main-30-30-1-0-5000k-5000k.h265 (pid: 8023)
[2019-11-21 08:05:17.457603] INFO: slash: Setting pipeline to PAUSED ...
[2019-11-21 08:05:17.460851] INFO: slash: Pipeline is PREROLLING ...
[2019-11-21 08:05:17.461029] INFO: slash: Got context from element 'vaapiencodeh265-0': gst.vaapi.Display=context, gst.vaapi.Display=(GstVaapiDisplay)"(GstVaapiDisplayDRM)\ vaapidisplaydrm1";
[2019-11-21 08:05:17.600949] INFO: slash: /GstPipeline:pipeline0/GstRawVideoParse:rawvideoparse0.GstPad:src: caps = video/x-raw, format=(string)I420, width=(int)7680, height=(int)4320, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1, chroma-site=(string)mpeg2, colorimetry=(string)bt2020, framerate=(fraction)30/1
[2019-11-21 08:05:17.602860] INFO: slash: ERROR: from element /GstPipeline:pipeline0/GstRawVideoParse:rawvideoparse0: Internal data stream error.
[2019-11-21 08:05:17.602947] INFO: slash: Additional debug info:
[2019-11-21 08:05:17.603067] INFO: slash: ../libs/gst/base/gstbaseparse.c(3678): gst_base_parse_loop (): /GstPipeline:pipeline0/GstRawVideoParse:rawvideoparse0:
[2019-11-21 08:05:17.603142] INFO: slash: streaming stopped, reason not-negotiated (-4)
[2019-11-21 08:05:17.603212] INFO: slash: ERROR: pipeline doesn't want to preroll.
[2019-11-21 08:05:17.603275] INFO: slash: Setting pipeline to PAUSED ...
[2019-11-21 08:05:17.603333] INFO: slash: Setting pipeline to READY ...
[2019-11-21 08:05:17.604717] INFO: slash: Setting pipeline to NULL ...
[2019-11-21 08:05:17.604934] INFO: slash: Freeing pipeline ...
[2019-11-21 08:05:17.610077] NOTICE: slash: DETAIL: time(gst:1) = 0.1754s
[2019-11-21 08:05:17.614742] TRACE: slash.core.error: Error added: AssertionError: CALL ERROR: failed with exitcode 1 (pid: 8023)
Since https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/merge_requests/640, the rotation/transform baselines (90 and 270 degree cases) are invalid and need to be rebased. Previously, msdkvpp rotation only rotated the output image but not the output surface dimensions, which was deemed acceptable at the time. Now the output surface dimensions are rotated too, and the new behavior should bit-match the videoflip results.
When I try to add new parameters to a test, I need to add gen__variants and gen__parameters in lib/parameters.py. I think this may be too much redundant code; maybe we can find a new way to pass parameters.
I am trying to enable an ICQ test.
Step 1:
Use the "preset" command line:
ffmpeg -init_hw_device qsv=qsv:hw -hwaccel qsv -filter_hw_device qsv -v verbose -f rawvideo -pix_fmt yuv420p -s:v 720x480 -i /root/media/testing/assets/yuv/720x480p.yuv -vf 'format=nv12,hwupload=extra_hw_frames=120' -an -c:v h264_qsv -profile:v high -preset 4 -vframes 150 -y aa.264
The log shows it's VBR mode.
Step 2:
Use "global_quality":
ffmpeg -init_hw_device qsv=qsv:hw -hwaccel qsv -filter_hw_device qsv -v verbose -f rawvideo -pix_fmt yuv420p -s:v 720x480 -i /root/media/testing/assets/yuv/720x480p.yuv -vf 'format=nv12,hwupload=extra_hw_frames=120' -an -c:v h264_qsv -profile:v high -global_quality 4 -vframes 150 -y aa.264
The log shows it's ICQ mode, which is what I need.
The code is:
So, can I set all the code to use global_quality and delete preset?
I will do more testing.
./full run test/ffmpeg-vaapi/encode/avc.py -k bframes=0,case=4k2013,gop=30,profile=main,qp=51,quality=1,slices=3
on ICL:
psnr = 25.6864, 39.3725, 39.5874, 28.0653, 41.1952, 40.7418
on KBL:
psnr = 26.8123, 38.7912, 38.7007, 28.5433, 40.9245, 40.3934
If one is used as the reference, the other will fail.
After https://patchwork.ffmpeg.org/patch/13883/ and intel-media-ci/ffmpeg#58 are applied, ffmpeg-vaapi gets the same md5 as gst-vaapi, and the pass rate hits 98.5%. Do the ffmpeg test cases need to change their command line? For example:
call(
"ffmpeg -hwaccel vaapi -init_hw_device vaapi=hw:/dev/dri/renderD128 "
" -filter_hw_device hw -v verbose -i {source} "
" -vf rawdump=file={decoded} "
" -pix_fmt p010le -vsync passthrough -f null -".format(**vars(self)))