pristineio / webrtc-mirror
Unofficial WebRTC Mirror
Home Page: https://chromium.googlesource.com/external/webrtc
License: Other
Hi,
Can anyone guide me on recording both the local and remote screens on Android?
I want to use native WebRTC's desktop_capture module rather than implement a new desktop capturer, so I want to set the source for the peer connection to a webrtc::DesktopCapturer instead of a cricket::VideoCapturer. But CreateVideoSource requires a cricket::VideoCapturer:
cricket::VideoCapturer* vc = nullptr;
if (capturer_internal) {
  vc = capturer_internal.get();
}
auto v = pc_factory_->CreateVideoSource(vc, nullptr);
auto video_track = pc_factory_->CreateVideoTrack(kVideoLabel, v);
Even if I create a class that inherits from cricket::VideoCapturer, how do I convert a webrtc::DesktopFrame to a webrtc::VideoFrame in this code?
I need to send a specific cropped (face) video to the VideoTrack. I was able to manipulate WebRTC's Camera1Session class to get the face crop; right now I am setting it to an ImageView.
My listenForBytebufferFrames() in Camera1Session.java:
private void listenForBytebufferFrames() {
    this.camera.setPreviewCallbackWithBuffer(new PreviewCallback() {
        public void onPreviewFrame(byte[] data, Camera callbackCamera) {
            Camera1Session.this.checkIsOnCameraThread();
            if (callbackCamera != Camera1Session.this.camera) {
                Logging.e("Camera1Session", "Callback from a different camera. This should never happen.");
            } else if (Camera1Session.this.state != Camera1Session.SessionState.RUNNING) {
                Logging.d("Camera1Session", "Bytebuffer frame captured but camera is no longer running.");
            } else {
                mFrameProcessor.setNextFrame(data, callbackCamera);
                long captureTimeNs = TimeUnit.MILLISECONDS.toNanos(SystemClock.elapsedRealtime());
                if (!Camera1Session.this.firstFrameReported) {
                    int startTimeMs = (int) TimeUnit.NANOSECONDS.toMillis(
                            System.nanoTime() - Camera1Session.this.constructionTimeNs);
                    Camera1Session.camera1StartTimeMsHistogram.addSample(startTimeMs);
                    Camera1Session.this.firstFrameReported = true;
                }
                ByteBuffer byteBuffer1 = ByteBuffer.wrap(data);
                Frame outputFrame = new Frame.Builder()
                        .setImageData(byteBuffer1,
                                Camera1Session.this.captureFormat.width,
                                Camera1Session.this.captureFormat.height,
                                ImageFormat.NV21)
                        .setTimestampMillis(mFrameProcessor.mPendingTimeMillis)
                        .setId(mFrameProcessor.mPendingFrameId)
                        .setRotation(3)
                        .build();
                int w = outputFrame.getMetadata().getWidth();
                int h = outputFrame.getMetadata().getHeight();
                SparseArray<Face> detectedFaces = mDetector.detect(outputFrame);
                if (detectedFaces.size() > 0) {
                    Face face = detectedFaces.valueAt(0);
                    ByteBuffer byteBufferRaw = outputFrame.getGrayscaleImageData();
                    byte[] byteBuffer = byteBufferRaw.array();
                    YuvImage yuvimage = new YuvImage(byteBuffer, ImageFormat.NV21, w, h, null);
                    ByteArrayOutputStream baos = new ByteArrayOutputStream();
                    // My crop logic to get face co-ordinates
                    yuvimage.compressToJpeg(new Rect(left, top, right, bottom), 80, baos);
                    final byte[] jpegArray = baos.toByteArray();
                    Bitmap bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);
                    Activity currentActivity = getActivity();
                    if (currentActivity instanceof CallActivity) {
                        // Face on the ImageView is set just fine.
                        ((CallActivity) currentActivity).setBitmapToImageView(bitmap);
                    }
                }
                Camera1Session.this.events.onByteBufferFrameCaptured(Camera1Session.this, data,
                        Camera1Session.this.captureFormat.width, Camera1Session.this.captureFormat.height,
                        Camera1Session.this.getFrameOrientation(), captureTimeNs);
                Camera1Session.this.camera.addCallbackBuffer(data);
            }
        }
    });
}
jpegArray is the final byte array that I need to stream via WebRTC, which I tried with something like this:
Camera1Session.this.events.onByteBufferFrameCaptured(Camera1Session.this, jpegArray, (int) face.getWidth(), (int) face.getHeight(), Camera1Session.this.getFrameOrientation(), captureTimeNs);
Camera1Session.this.camera.addCallbackBuffer(jpegArray);
Setting it up like this gives me the following error:
../../webrtc/sdk/android/src/jni/androidvideotracksource.cc line 82
Check failed: length >= width * height + 2 * uv_width * ((height + 1) / 2) (2630 vs. 460800)
I assume this is because androidvideotracksource does not get the byte-array length it expects, since the frame is now cropped. Could someone point me in the right direction? Is this the correct way and place to manipulate the data and feed it into the videoTrack?
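For context on that failed check: the native sink expects a raw NV21 buffer whose length matches the reported width and height, while jpegArray is compressed JPEG data, so the lengths can never match. One way around the JPEG round-trip is to crop directly in NV21 space and report the cropped dimensions. Below is a minimal plain-Java sketch under that assumption; the class and method names (Nv21Util, cropNv21) are hypothetical helpers, not WebRTC API, and the crop rectangle is assumed to be even-aligned so chroma pairs stay intact:

```java
public final class Nv21Util {
    // Expected NV21 buffer length, mirroring the native check:
    // width * height luma bytes plus 2 * uv_width bytes per subsampled chroma row.
    static int expectedLength(int width, int height) {
        int uvWidth = (width + 1) / 2;
        return width * height + 2 * uvWidth * ((height + 1) / 2);
    }

    // Crop an NV21 frame without a JPEG round-trip. left, top, cropWidth and
    // cropHeight must all be even so the interleaved VU pairs stay aligned.
    static byte[] cropNv21(byte[] src, int srcWidth, int srcHeight,
                           int left, int top, int cropWidth, int cropHeight) {
        byte[] dst = new byte[expectedLength(cropWidth, cropHeight)];
        // Copy the Y plane row by row.
        for (int y = 0; y < cropHeight; y++) {
            System.arraycopy(src, (top + y) * srcWidth + left,
                             dst, y * cropWidth, cropWidth);
        }
        // Copy the interleaved VU plane; it is subsampled 2x vertically,
        // and each VU row has the same byte stride as a Y row (even width).
        int srcUvStart = srcWidth * srcHeight;
        int dstUvStart = cropWidth * cropHeight;
        for (int y = 0; y < cropHeight / 2; y++) {
            System.arraycopy(src, srcUvStart + (top / 2 + y) * srcWidth + left,
                             dst, dstUvStart + y * cropWidth, cropWidth);
        }
        return dst;
    }
}
```

For a 640x480 frame, expectedLength gives 460800, the right-hand number in the check above; the 2630 on the left is just the size of the JPEG. Note that feeding a cropped buffer would also require passing the cropped width and height to onByteBufferFrameCaptured, and the source must be willing to accept dimensions that differ from the negotiated capture format, which Camera1Session does not support out of the box.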
Dear sir/madam,
My Android application has a dependency on your libjingle build, version 11139, found here.
It used to receive frequent updates; however, since December 2015 no new updates have been posted. Has this wonderful service been discontinued, or are the updates available elsewhere?
Sincerely,
Arjan Boschman
PS: This issue was also referenced here.
WebRTC moved to a new source-of-truth Git repo on September 13.
Will we fix this?
Hi, Team Pristineio!
I'm using the old way of building the iOS version, from tech.pristine.io/build-ios-apprtc/. Is it possible to provide a dynamic library that integrates easily into any iOS project? Or could every release of webrtc-mirror include an iOS library build?