
vaapi-fits's People

Contributors

bin-ci, bin-qa, fanli-hub, feiwan1, focusluo, fulinjie, hangjie22coder, haribommi, intel-media-ci, jian2x, mengker33, rdower, tong1wu, uartie, wangzj0601, wenqingx, xhaihao, xinyudox, yefengxx, zchrzhou, zhangyuankun-star, zhoujd, zhuqingliang

vaapi-fits's Issues

transcode: verify encoded width/height

In transcode tests we can transcode to various resolutions. However, when verifying the transcoded files we unconditionally decode them back to the source resolution for comparison with the source file, which does not prove that the file was actually transcoded at the requested resolution. Thus, add an extra check in transcode tests to ensure the transcoded file has the requested resolution. For gstreamer tests, we can use gst-discoverer-1.0 -v to check the resolution of the transcoded file; for ffmpeg tests, we can use ffprobe (see the sketch below).
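
A minimal sketch of such a resolution check for the ffmpeg tests, assuming the transcoded file path and the requested width/height are available in the test (the helper name is hypothetical):

    # Hypothetical helper: query the transcoded file's resolution with ffprobe
    # and compare it against the requested width/height.
    import subprocess

    def check_resolution(filename, width, height):
        out = subprocess.check_output([
            "ffprobe", "-v", "error", "-select_streams", "v:0",
            "-show_entries", "stream=width,height",
            "-of", "csv=p=0", filename]).decode()
        w, h = (int(v) for v in out.strip().split(","))
        assert (w, h) == (width, height), "unexpected resolution {}x{}".format(w, h)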

md5 should only consider num-frames

Currently, the md5 checksum metric is calculated over the whole file. gstreamer decoding does not have the capability to process only N frames of an input file; it will always decode the entire input, which could have N+M frames. This makes it hard to compare or reuse checksum results between ffmpeg and gstreamer.

Rewrite the md5 checksum metric to generate the checksum from only the first N frames. This would also give the added benefit of detecting results that erroneously produce fewer than N frames.
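
A minimal sketch of a frame-bounded checksum, assuming raw I420 output and a known frame count (function and parameter names are hypothetical):

    # Hypothetical sketch: hash only the first nframes of a raw I420 file,
    # failing early if the file holds fewer frames than expected.
    import hashlib

    def md5_frames(filename, width, height, nframes):
        framesize = width * height * 3 // 2  # I420: Y + U/4 + V/4
        m = hashlib.md5()
        with open(filename, "rb") as f:
            for n in range(nframes):
                data = f.read(framesize)
                assert len(data) == framesize, "short read at frame {}".format(n)
                m.update(data)
        return m.hexdigest()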

py2to3

Port vaapi-fits to Python 3.
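
One typical change is replacing the Python-2-only execfile() builtin (used by .slashrc, as seen in the list-config traceback below) with an explicit read and exec; a minimal sketch:

    # 'config' is the path that .slashrc loads.
    # Python 2 only:
    #   execfile(config)
    # Python 3 equivalent:
    with open(config) as f:
        exec(compile(f.read(), config, "exec"))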

deinterlace tests should set interlace parameters in pipelines

The deinterlace tests should assume all input files are interlaced content. For gstreamer, we should set the interlaced and top-field-first parameters on rawvideoparse (a sketch is below).

We will need to investigate how to get ffmpeg to treat raw YUV as interlaced; I am not sure how this is done.
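
A sketch of the gstreamer side, assuming the interlaced and top-field-first properties of rawvideoparse; the deinterlace-method property of vaapipostproc, the file name, and the geometry are placeholders:

    call(
        "gst-launch-1.0 -vf filesrc location=input_tff.yuv num-buffers=50"
        " ! rawvideoparse format=i420 width=1920 height=1080 framerate=30"
        " interlaced=true top-field-first=true"
        " ! videoconvert ! video/x-raw,format=NV12"
        " ! vaapipostproc deinterlace-method=bob"
        " ! video/x-raw,format=NV12 ! fakesink")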

please help to define where to add a new test feature

@uartie, when I add a new test feature, where should I add it: at the class level or at the function level? For example:

    class cbr_new_feature():
        ...
        def test():
            ...

-------------- or --------------

    class cbr():
        ...
        def test_new_feature():
            ...

Thanks.

Transcoding feature support

The following transcoding feature validations can be added as enhancements to vaapi-fits (a sketch of one such pipeline follows the list):

  1. Transcoding AVC test streams to AVC, HEVC, MPEG2 and MJPEG using HW decode -> HW encode, HW decode -> SW encode, SW decode -> HW encode.
  2. Transcoding HEVC test streams to AVC, HEVC, MPEG2 and MJPEG using HW decode -> HW encode, HW decode -> SW encode, SW decode -> HW encode.
  3. Transcoding MPEG2 test streams to AVC, HEVC, MPEG2 and MJPEG using HW decode -> HW encode.
  4. Transcoding VC1 test streams to AVC, HEVC, MPEG2 and MJPEG using HW decode -> HW encode.
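
A sketch of one such pipeline (AVC to HEVC, HW decode -> HW encode with ffmpeg-vaapi); the input/output paths and frame count are placeholders:

    call(
        "ffmpeg -init_hw_device vaapi=hw:/dev/dri/renderD128"
        " -hwaccel vaapi -hwaccel_device hw -hwaccel_output_format vaapi"
        " -v verbose -i input.h264 -c:v hevc_vaapi -vframes 50 -y output.h265")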

list-config fails

$ ./vaapi-fits list-config
Extracting assets...
Traceback (most recent call last):
  File "./vaapi-fits", line 20, in <module>
    sys.exit(main())
  File "./vaapi-fits", line 17, in main
    return main_entry_point()
  File "/home/djdeath/.local/lib/python2.7/site-packages/slash/frontend/main.py", line 71, in main_entry_point
    sys.exit(main())
  File "/home/djdeath/.local/lib/python2.7/site-packages/slash/frontend/main.py", line 47, in main
    returned = func(args.argv)
  File "/home/djdeath/.local/lib/python2.7/site-packages/slash/frontend/list_config.py", line 29, in list_config
    slash.site.load()
  File "/home/djdeath/.local/lib/python2.7/site-packages/slash/site.py", line 17, in load
    return _load_defaults(working_directory=working_directory)
  File "/home/djdeath/.local/lib/python2.7/site-packages/slash/site.py", line 22, in _load_defaults
    _load_local_slashrc(working_directory=working_directory)
  File "/home/djdeath/.local/lib/python2.7/site-packages/slash/site.py", line 37, in _load_local_slashrc
    _load_file_if_exists(os.path.abspath(path))
  File "/home/djdeath/.local/lib/python2.7/site-packages/slash/site.py", line 44, in _load_file_if_exists
    _load_filename(path)
  File "/home/djdeath/.local/lib/python2.7/site-packages/slash/site.py", line 67, in _load_filename
    _load_source(f.read(), filename)
  File "/home/djdeath/.local/lib/python2.7/site-packages/slash/site.py", line 76, in _load_source
    exec(code, {"__file__" : filename}) # pylint: disable=W0122
  File "/home/djdeath/src/mesa-src/vaapi-fits/.slashrc", line 276, in <module>
    execfile(config)
  File "/home/djdeath/src/mesa-src/vaapi-fits/config/default", line 14, in <module>
    with tarfile.open("{}.tbz2".format(assets), "r:*") as tf:
  File "/usr/lib/python2.7/tarfile.py", line 1680, in open
    raise ReadError("file could not be opened successfully")
tarfile.ReadError: file could not be opened successfully
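
The ReadError typically indicates the assets tarball is missing or corrupt. A minimal sketch of a friendlier guard around the open in config/default (hypothetical, not the project's actual fix):

    import os
    import tarfile

    assets = "assets"  # placeholder; the real path comes from the config
    archive = "{}.tbz2".format(assets)
    # Report a clear message when the archive has not been fetched, instead of
    # a bare tarfile.ReadError from the open below.
    assert os.path.isfile(archive), "assets archive not found: {}".format(archive)
    with tarfile.open(archive, "r:*") as tf:
        tf.extractall()  # illustration only; the real config may extract differently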

Remove P210 and P410 formats

According to https://docs.microsoft.com/en-us/windows/win32/medfound/10-bit-and-16-bit-yuv-video-formats#p216-and-p210, P210 is 2 planes (a Y plane plus an interleaved UV plane), and each sample is stored as a WORD. There is no definition for P410, but one would assume it follows the same pattern.

Unfortunately, P210 and P410 in vaapi-fits are implemented incorrectly as 3 planes (Y plane + U plane + V plane). And, AFAICT, ffmpeg and gstreamer do not have appropriate formats to handle the official P210 and unofficial P410 formats.
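
For reference, a minimal sketch of reading one frame in the official two-plane P210 layout, contrasted with the three-plane layout currently assumed (illustrative only, not the project's reader):

    import numpy

    def read_p210_official(fd, width, height):
        # plane 0: Y, width x height 16-bit samples
        y = numpy.fromfile(fd, dtype=numpy.uint16, count=width * height).reshape(height, width)
        # plane 1: interleaved UV, width x height 16-bit samples (U and V each at width/2)
        uv = numpy.fromfile(fd, dtype=numpy.uint16, count=width * height).reshape(height, width)
        return y, uv[:, 0::2], uv[:, 1::2]

    # The current vaapi-fits implementation instead reads three separate planes:
    # Y (width x height), U (width/2 x height), V (width/2 x height).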

Fortunately, #164 and #163 will add support for Y210 and Y410 which have better support across the media SW stacks (drivers & middleware). Once these patches are approved and merged, we should remove P210 and P410 code and tests in vaapi-fits.

Please mention supported architectures in README

It would have been nice to know that this software does not support my system (granted, it is old) before wasting my time. The reference to the caps directory is far too hidden to suffice in that circumstance.

vaapi-fits doesn't work

Command:
LIBVA_DRIVER_NAME=iHD vaapi-fits run test/gst-msdk/vpp/rotation.py --platform KBL -k degrees=90 -vv

Error:

Traceback (most recent call last):
  File "./vaapi-fits", line 11, in <module>
    from slash.frontend.main import main_entry_point
  File "/usr/local/lib/python2.7/dist-packages/slash/__init__.py", line 6, in <module>
    from .core.session import Session
  File "/usr/local/lib/python2.7/dist-packages/slash/core/session.py", line 16, in <module>
    from .fixtures.fixture_store import FixtureStore
  File "/usr/local/lib/python2.7/dist-packages/slash/core/fixtures/fixture_store.py", line 6, in <module>
    from orderedset import OrderedSet
  File "/usr/local/lib/python2.7/dist-packages/orderedset/__init__.py", line 5, in <module>
    from ._orderedset import OrderedSet
ImportError: /usr/local/lib/python2.7/dist-packages/orderedset/_orderedset.so: undefined symbol: PyFPE_jbuf

overlay: implement overlay tests

gst-vaapi and ffmpeg-qsv have vpp overlay plugins now (vaapioverlay and overlay_qsv, respectively). Implement tests for these features.

Command-line examples
gst-vaapi:

gst-launch-1.0 -vf vaapioverlay name=overlay sink_1::xpos=320 \
  sink_2::ypos=240 sink_3::xpos=320 sink_3::ypos=240 ! vaapisink \
 filesrc location=input.h264 ! h264parse ! vaapih264dec \
  ! vaapipostproc ! "video/x-raw(memory:VASurface),width=320,height=240" ! overlay. \
 filesrc location=input.h264 ! h264parse ! vaapih264dec \
  ! vaapipostproc ! "video/x-raw(memory:VASurface),width=320,height=240" ! overlay. \
 filesrc location=input.h264 ! h264parse ! vaapih264dec \
  ! vaapipostproc ! "video/x-raw(memory:VASurface),width=320,height=240" ! overlay. \
 filesrc location=input.h264 ! h264parse ! vaapih264dec \
  ! vaapipostproc ! "video/x-raw(memory:VASurface),width=320,height=240" ! overlay.

ffmpeg-qsv:

ffmpeg -hwaccel qsv -c:v h264_qsv -i input.h264 \
 -hwaccel qsv -c:v h264_qsv -i input.h264 \
 -hwaccel qsv -c:v h264_qsv -i input.h264 \
 -hwaccel qsv -c:v h264_qsv -i input.h264 \
 -filter_complex 'nullsrc=size=640x480,format=nv12,hwupload=extra_hw_frames=120[base];[base][0:v]overlay_qsv=x=0:y=0:w=320:h=240[tmp0];[tmp0][1:v]overlay_qsv=x=320:y=0:w=320:h=240[tmp1];[tmp1][2:v]overlay_qsv=x=0:y=240:w=320:h=240[tmp2];[tmp2][3:v]overlay_qsv=x=320:y=240:w=320:h=240' \
-c:v h264_qsv -vframes 500 -y output.h264 

ffmpeg-vaapi overlay has been implemented, too... but, as of writing, the patches are still under review.

[vaapi-fits][ffmpeg-qsv][HEVC 8k encode] cannot reshape array of size 0 into shape (4320,7680)

Steps:
1, ffmpeg -init_hw_device qsv=qsv:hw -hwaccel qsv -filter_hw_device qsv -v verbose -f rawvideo -pix_fmt yuv420p -s:v 7680x4320 -i /opt/media/src/assets/otc-media/yuv/Peru_8K_HDR_7680x4320_100frames_I420.yuv -vf 'format=nv12,hwupload=extra_hw_frames=120' -an -c:v hevc_qsv -profile:v main -g 30 -q 28 -preset 1 -slices 1 -bf 2 -low_power 0 -vframes 50 -y test_8k.h265
2, ffmpeg -init_hw_device qsv=qsv:hw -hwaccel qsv -filter_hw_device qsv -v verbose -c:v hevc_qsv -i test_8k.h265 -vf 'hwdownload,format=nv12' -pix_fmt yuv420p -f rawvideo -vsync passthrough -vframes 50 -y test_8k.yuv
3, check_psnr: get_media().baseline.check_psnr

Error Log info:
[2019-11-19 06:56:26.810025] INFO: slash: Total: 50 packets (5311738 bytes) demuxed
[2019-11-19 06:56:26.810155] INFO: slash: Output file #0 (/opt/media/test-results/vaapi-fits/encode/hevc/2c9a5ff8-0a95-11ea-961b-0242ac120002_0/_0.test.ffmpeg-qsv.encode.hevc/cqp/Peru_8K_HDR_7680x4320_100frames-cqp-main-30-28-1-1-2-0-7680x4320-I420.yuv):
[2019-11-19 06:56:26.810249] INFO: slash: Output stream #0:0 (video): 34 frames encoded; 34 packets muxed (1692057600 bytes);
[2019-11-19 06:56:26.810320] INFO: slash: Total: 34 packets (1692057600 bytes) muxed
[2019-11-19 06:56:26.847195] INFO: slash: [AVIOContext @ 0x55d30063dc00] Statistics: 0 seeks, 6455 writeouts
[2019-11-19 06:56:26.847378] INFO: slash: [AVIOContext @ 0x55d30063f080] Statistics: 5311738 bytes read, 0 seeks
[2019-11-19 06:56:26.885826] NOTICE: slash: DETAIL: time(ffmpeg:2) = 3.3977s
[2019-11-19 06:56:28.373194] NOTICE: slash: DETAIL: time(psnr:1) = 1.4866s
[2019-11-19 06:56:28.480388] TRACE: slash.core.error: Error added: ValueError: ('cannot reshape array of size 0 into shape (4320,7680)', 'frame 34/50')
/opt/media/src/vaapi-fits-full/full, line 41:
sys.exit(main())
/opt/media/src/vaapi-fits-full/full, line 38:
return main_entry_point()
/usr/local/lib/python2.7/dist-packages/slash/core/fixtures/parameters.py, line 34:
return func(*args, **kwargs)
/opt/media/src/vaapi-fits/test/ffmpeg-qsv/encode/hevc.py, line 48:
self.encode()
/opt/media/src/vaapi-fits/test/ffmpeg-qsv/encode/encoder.py, line 163:
self.check_metrics()
/opt/media/src/vaapi-fits/test/ffmpeg-qsv/encode/encoder.py, line 187:
self.frames, self.format),
/opt/media/src/vaapi-fits/lib/common.py, line 33:
ret = function(*args, **kwargs)
/opt/media/src/vaapi-fits/lib/metrics.py, line 149:
nframes, __compare_psnr)
/opt/media/src/vaapi-fits/lib/metrics.py, line 119:
y2, u2, v2 = file2.next_frame()
/opt/media/src/vaapi-fits/lib/metrics.py, line 64:
return self.reader(self.fd, self.width, self.height)
/opt/media/src/vaapi-fits/lib/framereader.py, line 46:
y = numpy.fromfile(fd, dtype=numpy.uint8, count=size).reshape((height, width))
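
The reshape fails because fewer frames were produced than expected (only 34 of 50 frames were encoded, per the log above). A sketch of a more explicit guard in the frame reader (hypothetical, not the project's actual code):

    import numpy

    def read_plane(fd, width, height):
        # Report a short read explicitly instead of letting reshape fail with
        # "cannot reshape array of size 0 into shape (height, width)".
        size = width * height
        data = numpy.fromfile(fd, dtype=numpy.uint8, count=size)
        assert data.size == size, "short read: expected {} samples, got {}".format(size, data.size)
        return data.reshape((height, width))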

vpp: implement skintone enhancement tests

In the gst-vaapi vaapipostproc element, skin-tone-enhancement-level is exposed as one of its properties. We should implement tests for this.

It does not appear that other middleware plugins implement this feature, so only gst-vaapi needs to be considered at the moment. A pipeline sketch follows.
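
A sketch of the kind of pipeline such a test could drive, assuming the skin-tone-enhancement-level property of vaapipostproc; the level value, file names, and geometry are placeholders:

    call(
        "gst-launch-1.0 -vf filesrc location=input.yuv num-buffers=30"
        " ! rawvideoparse format=i420 width=1280 height=720 framerate=30"
        " ! vaapipostproc skin-tone-enhancement-level=3"
        " ! video/x-raw,format=NV12"
        " ! filesink location=output.yuv")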

use python format data instead of json format data for baseline file

Most of the baseline file entries have the same value across test contexts, e.g.:

"test/ffmpeg-qsv/decode/avc.py:default.test(case=1080p)": {
    "md5": "2873526ec49defd754c7853f0658799d"                     
  }, 
 "test/ffmpeg-vaapi/decode/avc.py:default.test(case=1080p)": {
  "md5": "2873526ec49defd754c7853f0658799d"
 },
"test/gst-vaapi/decode/avc.py:default.test(case=1080p)": {
    "md5": "2873526ec49defd754c7853f0658799d"                     
  }, 
 "test/gst-msdk/decode/avc.py:default.test(case=1080p)": {
  "md5": "2873526ec49defd754c7853f0658799d"
 },

So storing this as Python data would help with maintenance: a new test context can automatically pick up the same value.
We can use importlib.import_module() and getattr() to look up the target Python class/data (a sketch follows).
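
A minimal sketch of that idea, assuming baselines are stored as a Python module with one shared entry (module and attribute names are hypothetical):

    # baseline/decode_avc.py (hypothetical):
    #   default_1080p = {"md5": "2873526ec49defd754c7853f0658799d"}
    import importlib

    def get_baseline(module, name):
        # e.g. get_baseline("baseline.decode_avc", "default_1080p") returns the
        # same reference value for ffmpeg-qsv, ffmpeg-vaapi, gst-vaapi and
        # gst-msdk instead of duplicating it per test context.
        return getattr(importlib.import_module(module), name)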

Caps: 8k HEVC enc test cases failed on Gen10- platforms

8k HEVC encode test cases fail on Gen10 and earlier platforms. Take one 8k HEVC encode case on gst-vaapi, for example:

FAIL
cbr.test(bframes=0,bitrate=5000,case=Peru_8K_HDR_7680x4320_100frames,fps=30,gop=30,profile=main,slices=1) ( full.test.gst-vaapi.encode . hevc )
464 mils

[2019-11-21 08:05:17.443477] INFO: slash: CALL: gst-launch-1.0 -vf filesrc location=/opt/media/src/assets/otc-media/yuv/Peru_8K_HDR_7680x4320_100frames_I420.yuv num-buffers=50 ! rawvideoparse format=i420 width=7680 height=4320 framerate=30 ! videoconvert ! video/x-raw,format=NV12 ! vaapih265enc rate-control=cbr keyframe-period=30 num-slices=1 max-bframes=0 bitrate=5000 tune=none ! video/x-h265,profile=main ! h265parse ! filesink location=/opt/media/test-results/vaapi-fits/encode/hevc/38dc9244-0c35-11ea-9928-0242ac120002_0/_0.test.gst-vaapi.encode.hevc/cbr/Peru_8K_HDR_7680x4320_100frames-cbr-main-30-30-1-0-5000k-5000k.h265 (pid: 8023)
[2019-11-21 08:05:17.457603] INFO: slash: Setting pipeline to PAUSED ...
[2019-11-21 08:05:17.460851] INFO: slash: Pipeline is PREROLLING ...
[2019-11-21 08:05:17.461029] INFO: slash: Got context from element 'vaapiencodeh265-0': gst.vaapi.Display=context, gst.vaapi.Display=(GstVaapiDisplay)"(GstVaapiDisplayDRM)\ vaapidisplaydrm1";
[2019-11-21 08:05:17.600949] INFO: slash: /GstPipeline:pipeline0/GstRawVideoParse:rawvideoparse0.GstPad:src: caps = video/x-raw, format=(string)I420, width=(int)7680, height=(int)4320, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1, chroma-site=(string)mpeg2, colorimetry=(string)bt2020, framerate=(fraction)30/1
[2019-11-21 08:05:17.602860] INFO: slash: ERROR: from element /GstPipeline:pipeline0/GstRawVideoParse:rawvideoparse0: Internal data stream error.
[2019-11-21 08:05:17.602947] INFO: slash: Additional debug info:
[2019-11-21 08:05:17.603067] INFO: slash: ../libs/gst/base/gstbaseparse.c(3678): gst_base_parse_loop (): /GstPipeline:pipeline0/GstRawVideoParse:rawvideoparse0:
[2019-11-21 08:05:17.603142] INFO: slash: streaming stopped, reason not-negotiated (-4)
[2019-11-21 08:05:17.603212] INFO: slash: ERROR: pipeline doesn't want to preroll.
[2019-11-21 08:05:17.603275] INFO: slash: Setting pipeline to PAUSED ...
[2019-11-21 08:05:17.603333] INFO: slash: Setting pipeline to READY ...
[2019-11-21 08:05:17.604717] INFO: slash: Setting pipeline to NULL ...
[2019-11-21 08:05:17.604934] INFO: slash: Freeing pipeline ...
[2019-11-21 08:05:17.610077] NOTICE: slash: DETAIL: time(gst:1) = 0.1754s
[2019-11-21 08:05:17.614742] TRACE: slash.core.error: Error added: AssertionError: CALL ERROR: failed with exitcode 1 (pid: 8023)

Can I replace "preset" with "global_quality" in ffmpeg-qsv?

I am trying to enable an ICQ test.
Step 1:
Use the "preset" command line:
ffmpeg -init_hw_device qsv=qsv:hw -hwaccel qsv -filter_hw_device qsv -v verbose -f rawvideo -pix_fmt yuv420p -s:v 720x480 -i /root/media/testing/assets/yuv/720x480p.yuv -vf 'format=nv12,hwupload=extra_hw_frames=120' -an -c:v h264_qsv -profile:v high -preset 4 -vframes 150 -y aa.264
The log shows it is in VBR mode.
Step 2:
Use "global_quality":
ffmpeg -init_hw_device qsv=qsv:hw -hwaccel qsv -filter_hw_device qsv -v verbose -f rawvideo -pix_fmt yuv420p -s:v 720x480 -i /root/media/testing/assets/yuv/720x480p.yuv -vf 'format=nv12,hwupload=extra_hw_frames=120' -an -c:v h264_qsv -profile:v high -global_quality 4 -vframes 150 -y aa.264
The log shows it is in ICQ mode, which is what I need.

the code is:

if self.codec in ["jpeg",]:

So can I change the code to use global_quality everywhere and delete preset?
I will do more testing.

do we need more than one suite of reference psnr values

./full run test/ffmpeg-vaapi/encode/avc.py -k bframes=0,case=4k2013,gop=30,profile=main,qp=51,quality=1,slices=3
on ICL:
psnr = 25.6864, 39.3725, 39.5874, 28.0653, 41.1952, 40.7418
on KBL:
psnr = 26.8123, 38.7912, 38.7007, 28.5433, 40.9245, 40.3934

If we use one platform's results as the reference, then the other platform will fail.

Does the ffmpeg test need to change to calculate the md5 before the scaler in vp9 10-bit decode?

After https://patchwork.ffmpeg.org/patch/13883/ and intel-media-ci/ffmpeg#58 are applied, ffmpeg-vaapi gets the same md5 as gst-vaapi and the pass rate hits 98.5%. Does the ffmpeg test case need to change its command line? For example:

call(
  "ffmpeg -hwaccel vaapi -init_hw_device vaapi=hw:/dev/dri/renderD128 "
  " -filter_hw_device hw -v verbose -i {source} "
  " -vf rawdump=file={decoded} "
  " -pix_fmt p010le -vsync passthrough -f null -".format(**vars(self)))
