
mind-ar-js's Introduction

MindAR

MindAR is a web augmented reality library. Highlighted features include:

⭐ Supports image tracking and face tracking. For location-based or fiducial-marker tracking, check out AR.js

⭐ Written in pure JavaScript, end-to-end from the underlying computer vision engine to the frontend

⭐ Utilizes the GPU (through WebGL) and web workers for performance

⭐ Developer friendly. Easy to set up. With the A-Frame extension, you can create an app with only 10 lines of code

Fundraising

MindAR is the only actively maintained web AR SDK that offers features comparable to commercial alternatives. The library is currently maintained by me as an individual developer. To raise funds for continued development and to provide timely support and responses to issues, here is a list of related projects and services that you can support.

Unity WebAR Foundation

WebAR Foundation is a Unity package that allows Unity developers to build WebGL-platform AR applications. It acts as a Unity plugin that wraps around popular web AR SDKs.

If you are a Unity developer, check it out! https://github.com/hiukim/unity-webar-foundation

Web AR Development Course

I'm offering a WebAR development course on Udemy. It's a very comprehensive guide to web AR development, not limited to MindAR.

Check it out if you are interested: https://www.udemy.com/course/introduction-to-web-ar-development/?referralCode=D2565F4CA6D767F30D61

MindAR Studio

MindAR Studio allows you to build face tracking AR without coding. You can build AR effects through a drag-and-drop editor and export static webpages for self-hosting. Free to use!

Check it out if you are interested! https://studio.mindar.org

Pictarize

Pictarize is a hosted platform for creating and publishing image tracking AR applications. Free to use!

Check it out if you are interested! https://pictarize.com

Documentation

Official Documentation: https://hiukim.github.io/mind-ar-js-doc

Demo - Try it yourself

Image Tracking - Basic Example

Demo video: https://youtu.be/hgVB9HpQpqY

Try it yourself: https://hiukim.github.io/mind-ar-js-doc/examples/basic/

Image Tracking - Multiple Targets Example

Try it yourself: https://hiukim.github.io/mind-ar-js-doc/examples/multi-tracks

Image Tracking - Interactive Example

Demo video: https://youtu.be/gm57gL1NGoQ

Try it yourself: https://hiukim.github.io/mind-ar-js-doc/examples/interative

Face Tracking - Virtual Try-On Example

Try it yourself: https://hiukim.github.io/mind-ar-js-doc/face-tracking-examples/tryon

Face Tracking - Face Mesh Effect

Try it yourself: https://hiukim.github.io/mind-ar-js-doc/more-examples/threejs-face-facemesh

More examples

More examples can be found here: https://hiukim.github.io/mind-ar-js-doc/examples/summary

Quick Start

Learn how to build the Basic example above in 5 minutes with a plain text editor!

Quick Start Guide: https://hiukim.github.io/mind-ar-js-doc/quick-start/overview

To give you a quick idea, this is the complete source code for the Basic example. It's a static HTML page; you can host it anywhere.

<html>
  <head>
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <script src="https://cdn.jsdelivr.net/gh/hiukim/[email protected]/dist/mindar-image.prod.js"></script>
    <script src="https://aframe.io/releases/1.2.0/aframe.min.js"></script>
    <script src="https://cdn.jsdelivr.net/gh/hiukim/[email protected]/dist/mindar-image-aframe.prod.js"></script>
  </head>
  <body>
    <a-scene mindar-image="imageTargetSrc: https://cdn.jsdelivr.net/gh/hiukim/[email protected]/examples/image-tracking/assets/card-example/card.mind;" color-space="sRGB" renderer="colorManagement: true, physicallyCorrectLights" vr-mode-ui="enabled: false" device-orientation-permission-ui="enabled: false">
      <a-assets>
        <img id="card" src="https://cdn.jsdelivr.net/gh/hiukim/[email protected]/examples/image-tracking/assets/card-example/card.png" />
        <a-asset-item id="avatarModel" src="https://cdn.jsdelivr.net/gh/hiukim/[email protected]/examples/image-tracking/assets/card-example/softmind/scene.gltf"></a-asset-item>
      </a-assets>

      <a-camera position="0 0 0" look-controls="enabled: false"></a-camera>
      <a-entity mindar-image-target="targetIndex: 0">
        <a-plane src="#card" position="0 0 0" height="0.552" width="1" rotation="0 0 0"></a-plane>
        <a-gltf-model rotation="0 0 0" position="0 0 0.1" scale="0.005 0.005 0.005" src="#avatarModel" animation="property: position; to: 0 0.1 0.1; dur: 1000; easing: easeInOutQuad; loop: true; dir: alternate"></a-gltf-model>
      </a-entity>
    </a-scene>
  </body>
</html>

Target Images Compiler

You can compile your own target images right in the browser using this friendly compiler tool. If you don't know what this means, go through the Quick Start guide first.

https://hiukim.github.io/mind-ar-js-doc/tools/compile

Roadmaps

  1. Support more augmented reality features, such as hand tracking, body tracking and plane tracking

  2. Research different state-of-the-art algorithms to improve tracking accuracy and performance

  3. More educational references.

Contributions

I personally don't come from a strong computer vision background, and I'm having a hard time improving the tracking accuracy. I could really use some help from computer vision experts. Please reach out and discuss.

JavaScript experts are also welcome to help with the non-engine parts, like improving the APIs.

If you are a graphic designer or a 3D artist, you can contribute to the visuals. Even if you just use MindAR to develop some cool applications, please show us!

Whatever you can think of. It's an open-source web AR framework for everyone!

Development Guide

Directories explained

  1. /src folder contains the majority of the source code
  2. /examples folder contains examples to test out during development

To create a production build

Run > npm run build. The build will be generated in the dist folder.

For development

To develop the threeJS version, run > npm run watch. This will observe file changes in the src folder and continuously build the artifacts in dist-dev.

To develop the AFRAME version, you will need to run > npm run build-dev every time you make changes. The --watch parameter currently fails to automatically regenerate mindar-XXX-aframe.js.

All the examples in the examples folder are configured to use this development build, so you can open those examples in a browser to start debugging or development.

The examples run in desktop browsers and are just HTML files, so it's easy to start development. However, because the library requires camera access, you need a webcam. You also need to serve the HTML files from a localhost web server; simply opening the files won't work.

For example, you can install this chrome plugin to start a local server: https://chrome.google.com/webstore/detail/web-server-for-chrome/ofhbbkphhbklhfoeikjpcbhemlocgigb?hl=en

You will most likely want to test on a mobile device as well. In that case, it's best to set up your development environment so that your localhost web server can be reached from your mobile devices. If you have difficulty doing that, perhaps because you are behind a firewall, you can use something like ngrok (https://ngrok.com/) to tunnel the requests. This is not an ideal solution, though, because the development build of MindAR is not small (>10MB), and tunneling with the free version of ngrok can be slow.

webgl backend

This library utilizes tensorflowjs (https://github.com/tensorflow/tfjs) for its WebGL backend. Yes, tensorflow is a machine learning library, but we aren't using it for machine learning! :) Tensorflowjs has a very solid WebGL engine which allows us to write general-purpose GPU applications (in this case, our AR application).

The core detection and tracking algorithms are written as custom operations in tensorflowjs. They are like shader programs. They might look intimidating at first, but they're actually not that difficult to understand.
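To make that concrete, here is a hedged toy sketch of the pattern. The program shape follows tfjs-backend-webgl's GPGPUProgram convention, but the helper used to run it has changed names across tfjs versions, and this toy operation is not taken from the MindAR source:

import * as tf from '@tensorflow/tfjs'; // assumes the webgl backend is active

// A GPGPUProgram is essentially a fragment shader over tensors: each entry in
// variableNames becomes a shader getter (getXAtOutCoords for 'X'), and
// userCode computes one output element per invocation.
const doubleProgram = (height, width) => ({
  variableNames: ['X'],
  outputShape: [height, width],
  userCode: `
    void main() {
      setOutput(getXAtOutCoords() * 2.0);
    }
  `,
});

const input = tf.ones([4, 4]);
// Assumption: compileAndRun is the webgl backend helper in the tfjs version
// this library pins; newer versions expose runWebGLProgram instead.
const output = tf.backend().compileAndRun(doubleProgram(4, 4), [input]);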

Credits

The computer vision idea is borrowed from artoolkit (i.e. https://github.com/artoolkitx/artoolkit5). Unfortunately, the library doesn't seems to be maintained anymore.

Face Tracking is based on mediapipe face mesh model (i.e. https://google.github.io/mediapipe/solutions/face_mesh.html)

mind-ar-js's People

Contributors

dependabot[bot], hiukim, jmschrack, jmswrnr, jotatoledo, makio64, parikshit-hooda, yannklein


mind-ar-js's Issues

Tutorial on how to make the object stay visible even when the image target is lost

Considering that mind-ar has huge potential and that this use case is very common, I'd like to help you create such a tutorial/documentation page.

Do you already have in mind what changes are required to do so?

You mentioned that "If you want to do something like this, you can approach the problem by using the non-aframe library build, then include and modify the above aframe.js script", but it wasn't clear how to jump into the code and do it.

Any guidelines?
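In the meantime, here is one hedged sketch of the re-parenting idea (not the author's confirmed method): when the target is lost, move the content entity under the camera so it stays in view. The #content id and the fixed position are hypothetical:

<script>
AFRAME.registerComponent('keep-visible', {
  init: function () {
    const camera = document.querySelector('a-camera');
    const content = document.querySelector('#content'); // hypothetical entity id
    this.el.addEventListener('targetLost', () => {
      // Re-parenting an A-Frame entity re-attaches its object3D as well.
      camera.appendChild(content);
      content.setAttribute('position', '0 0 -1'); // ~1m in front of the camera
    });
    this.el.addEventListener('targetFound', () => {
      this.el.appendChild(content);                // snap back onto the target
      content.setAttribute('position', '0 0 0');
    });
  }
});
</script>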

Your compiled .mind might be outdated. Please recompile

I created a new tracker image using the Image Targets compiler, and when using this image I get an error in the console saying "Your compiled .mind might be outdated. Please recompile". I also noticed that the file size for this image is much smaller than that of my old tracker image created last week. Even creating a new tracker image from my old file results in the same error message and a much smaller file size.

keep on screen

Is it possible to keep the object on screen after losing the marker, so that the object positions itself in the middle of the screen in front of the camera?

multi-targets crash mobile browser + many targets are problematic to compile targets.mind

Hey

(First off: wonderful project. I came here from AR.js and I am amazed by the ease of setup and the stability of tracking.)

I was thinking of using mind-ar-js to AR-enrich a book. This means ~50 targets have to be recognized by the app. I set up a test with only 6 targets, and Safari on an iPhone X and Chrome on a Galaxy A5 immediately crash. The multi-targets example app works on both devices, but it has only two targets. I managed to get it to load on my iPad Pro, but that is not what end users would use.

Is there any way of getting 10+ targets to work?

Moreover, is it necessary to put all the tracking targets into the same targets.mind file? This seems to be the bottleneck here, as even with only 6 targets it grew to 32MB. Such files are also difficult to compile, and it would be hard to add or remove individual targets at a later point. One .mind file per target would seem much easier to use and address in the app.

Any feedback or help is highly appreciated.

Best, Matthias

Image targets need to be square

It tracks the marker at index 0 perfectly, but any other index doesn't work.
Strangely, I can reorder the markers when compiling, and it is always the one at index 0 that functions as expected, so it's not the image that is the issue here.

I just downloaded your latest version to double check (as I had made changes), but the behaviour is still the same.

Error compiling fragment (GPU)

Hi @hiukim, while I was testing your example https://hiukim.github.io/mind-ar-js/samples/example1.html I received this error:

gpu-browser.js:14991 Uncaught (in promise) Error: Error compiling fragment shader: Fragment shader compilation failed.
ERROR: 0:2: '' :  integer constant overflow 
ERROR: 0:67: '' :  integer constant overflow 
ERROR: 2 compilation errors.  No code generated.


    at WebGLKernel.build (gpu-browser.js:14991)
    at WebGLKernel.run (gpu-browser.js:18496)
    at shortcut (gpu-browser.js:18516)
    at Tracker._combineImageList (tracker.js:334)
    at new Tracker (tracker.js:38)
    at eval (controller.js:84)

Of course, nothing can be tracked or displayed...
Maybe my Android device is not supported? I tested with a Wiko View running Android 7.1.2, but I will test with another device.
EDIT: tested on Chrome browser 86.0.4240.110

[Violation] Added non-passive

In Chrome I got this warning:

mindar.prod.js:2 [Violation] Added non-passive event listener to a scroll-blocking 'touchstart' event. Consider marking event handler as 'passive' to make the page more responsive. See https://www.chromestatus.com/feature/5745543795965952

Also, the click event is not working.

triggering audio play/pause when target is found/lost respectively

I'm trying to use the targetFound and targetLost events to play/pause an audio clip. But even with autoplay set to false, I get the following message in the console as soon as the application is loaded. Any workaround to fix this?

The AudioContext was not allowed to start. It must be resumed (or created) after a user gesture on the page.
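A common hedged workaround (standard browser autoplay policy, not MindAR-specific) is to unlock the audio inside the first user gesture and only then honor the target events; the #my-audio id below is hypothetical:

const audio = document.querySelector('#my-audio');              // hypothetical <audio>
const target = document.querySelector('[mindar-image-target]');
let unlocked = false;
document.body.addEventListener('click', () => {
  // Calling play() inside a gesture handler satisfies the autoplay policy.
  audio.play().then(() => audio.pause());
  unlocked = true;
}, { once: true });
target.addEventListener('targetFound', () => { if (unlocked) audio.play(); });
target.addEventListener('targetLost', () => audio.pause());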

cell phone sideways (rotate)

Hi guys,
I'm pretty new to programming and this platform, so sorry for my basic question.
I was trying the examples, and when you turn your cell phone sideways (rotate it), the view is no longer correct: you can only see the camera image in the lower-left corner of the browser. The problem appears when you change from one orientation to the other (vertical to horizontal, for example).
I have attached an image.
Does this happen to you? How can I solve it?
Thank you very much for your help.


unexpected identifier

Trying the face tracking examples, I get an "unexpected identifier" error message.

In the source code there are lines like

<script src="../../dist-dev/mindar-face.js"></script>

But the dist-dev folder doesn't exist, nor does the file mindar-face.js. The folder that exists is just dist, and the file is mindar-face.prod.js. Even after changing these things I still get the error. I would love to try out the face tracking, but I can't get it to work.

Mesh distortion on a curved marker

Hi @hiukim, we've been trying to replicate the curved image tracking feature offered by 8th Wall with the MindAR library.

We tried using the curved plane and curved image primitives offered by A-Frame, but in both cases the edges of the mesh appear to distort when the device is moved with respect to the marker.


Here is a video recording of the same issue.

I'm curious to hear your thoughts on the issue and any workarounds you can suggest to overcome it.

problem playing a video

Hello, I am new to programming and augmented reality. I am studying your code a bit, but the problem I am finding is that the video starts when the page loads, not when the augmented reality image is recognized. Any example or advice on where to modify this? Thank you very much.

<script src="https://cdn.jsdelivr.net/gh/hiukim/[email protected]/dist/mindar.prod.js"></script>
<script>
  const showInfo = () => {
    let y = 0;
    const profileButton = document.querySelector("#profile-button");
    const webButton = document.querySelector("#web-button");
    const emailButton = document.querySelector("#email-button");
    const locationButton = document.querySelector("#location-button");
    const text = document.querySelector("#text");

    profileButton.setAttribute("visible", true);
    setTimeout(() => {
      webButton.setAttribute("visible", true);
    }, 300);
    setTimeout(() => {
      emailButton.setAttribute("visible", true);
    }, 600);
    setTimeout(() => {
      locationButton.setAttribute("visible", true);
    }, 900);

    let currentTab = "";
    webButton.addEventListener("click", function(evt) {
      text.setAttribute("value", "https://softmind.tech");
      currentTab = "web";
    });
    emailButton.addEventListener("click", function(evt) {
      text.setAttribute("value", "[email protected]");
      currentTab = "email";
    });
    profileButton.addEventListener("click", function(evt) {
      text.setAttribute("value", "AR, VR solutions and consultation");
      currentTab = "profile";
    });
    locationButton.addEventListener("click", function(evt) {
      console.log("loc");
      text.setAttribute("value", "Vancouver, Canada | Hong Kong");
      currentTab = "location";
    });

    text.addEventListener("click", function(evt) {
      if (currentTab === "web") {
        window.location.href = "https://softmind.tech";
      }
    });
  };

  const showPortfolio = done => {
    const portfolio = document.querySelector("#portfolio-panel");
    const portfolioLeftButton = document.querySelector(
      "#portfolio-left-button"
    );
    const portfolioRightButton = document.querySelector(
      "#portfolio-right-button"
    );
    const paintandquestPreviewButton = document.querySelector(
      "#paintandquest-preview-button"
    );

    let y = 0;
    let currentItem = 0;

    portfolio.setAttribute("visible", true);

    const showPortfolioItem = item => {
      for (let i = 0; i <= 2; i++) {
        document
          .querySelector("#portfolio-item" + i)
          .setAttribute("visible", i === item);
      }
    };
    const id = setInterval(() => {
      y += 0.008;
      if (y >= 0.6) {
        clearInterval(id);
        portfolioLeftButton.setAttribute("visible", true);
        portfolioRightButton.setAttribute("visible", true);
        portfolioLeftButton.addEventListener("click", () => {
          currentItem = (currentItem + 1) % 3;
          showPortfolioItem(currentItem);
        });
        portfolioRightButton.addEventListener("click", () => {
          currentItem = (currentItem - 1 + 3) % 3;
          showPortfolioItem(currentItem);
        });

        paintandquestPreviewButton.addEventListener("click", () => {
          paintandquestPreviewButton.setAttribute("visible", false);
          const testVideo = document.createElement("video");
          const canplayWebm = testVideo.canPlayType(
            'video/webm; codecs="vp8, vorbis"'
          );
          if (canplayWebm == "") {
            document
              .querySelector("#paintandquest-video-link")
              .setAttribute("src", "#paintandquest-video-mp4");
            document.querySelector("#paintandquest-video-mp4").play();
          } else {
            document
              .querySelector("#paintandquest-video-link")
              .setAttribute("src", "#paintandquest-video-webm");
            document.querySelector("#paintandquest-video-webm").play();
          }
        });

        setTimeout(() => {
          done();
        }, 500);
      }
      portfolio.setAttribute("position", "0 " + y + " -0.01");
    }, 10);
  };

  const showAvatar = onDone => {
    const avatar = document.querySelector("#avatar");
    let z = -0.3;
    const id = setInterval(() => {
      z += 0.008;
      if (z >= 0.3) {
        clearInterval(id);
        onDone();
      }
      avatar.setAttribute("position", "0 -0.25 " + z);
    }, 10);
  };

  AFRAME.registerComponent("mytarget", {
    init: function() {
      this.el.addEventListener("targetFound", event => {
        console.log("target found");
        showAvatar(() => {
          setTimeout(() => {
            showPortfolio(() => {
              setTimeout(() => {
                showInfo();
              }, 300);
            });
          }, 300);
        });
      });
      this.el.addEventListener("targetLost", event => {
        console.log("target found");
      });
      //this.el.emit('targetFound');
    }
  });
</script>

<style>
  body {
    margin: 0;
  }
  .example-container {
    overflow: hidden;
    position: absolute;
    width: 100%;
    height: 100%;
  }

  #example-scanning-overlay {
    display: flex;
    align-items: center;
    justify-content: center;
    position: absolute;
    left: 0;
    right: 0;
    top: 0;
    bottom: 0;
    background: transparent;
    z-index: 2;
  }
  @media (min-aspect-ratio: 1/1) {
    #example-scanning-overlay .inner {
      width: 50vh;
      height: 50vh;
    }
  }
  @media (max-aspect-ratio: 1/1) {
    #example-scanning-overlay .inner {
      width: 80vw;
      height: 80vw;
    }
  }

  #example-scanning-overlay .inner {
    display: flex;
    align-items: center;
    justify-content: center;
    position: relative;

    background: linear-gradient(to right, white 10px, transparent 10px) 0 0,
      linear-gradient(to right, white 10px, transparent 10px) 0 100%,
        linear-gradient(to left, white 10px, transparent 10px) 100% 0,
          linear-gradient(to left, white 10px, transparent 10px) 100% 100%,
            linear-gradient(to bottom, white 10px, transparent 10px) 0 0,
              linear-gradient(to bottom, white 10px, transparent 10px) 100%
                  0,
                linear-gradient(to top, white 10px, transparent 10px) 0 100%,
                  linear-gradient(to top, white 10px, transparent 10px) 100%
                    100%;
    background-repeat: no-repeat;
    background-size: 40px 40px;
  }

  #example-scanning-overlay.hidden {
    display: none;
  }

  #example-scanning-overlay img {
    opacity: 0.6;
    width: 90%;
    align-self: center;
  }

  #example-scanning-overlay .inner .scanline {
    position: absolute;
    width: 100%;
    height: 10px;
    background: white;
    animation: move 2s linear infinite;
  }
  @keyframes move {
    0%,
    100% {
      top: 0%;
    }
    50% {
      top: calc(100% - 10px);
    }
  }
</style>
  <a-scene
    mindar="imageTargetSrc: https://cdn.jsdelivr.net/gh/hiukim/[email protected]/examples/assets/card-example/card.mind; showStats: false; uiScanning: #example-scanning-overlay;"
    embedded
    color-space="sRGB"
    renderer="colorManagement: true, physicallyCorrectLights"
    vr-mode-ui="enabled: false"
    device-orientation-permission-ui="enabled: false"
  >
    <a-assets>
      <img id="card" src="./assets/card-example/card.png" />
      <img id="icon-web" src="./assets/card-example/icons/web.png" />
      <img
        id="icon-location"
        src="./assets/card-example/icons/location.png"
      />
      <img
        id="icon-profile"
        src="./assets/card-example/icons/profile.png"
      />
      <img id="icon-phone" src="./assets/card-example/icons/phone.png" />
      <img id="icon-email" src="./assets/card-example/icons/email.png" />
      <img
        id="icon-play"
        src="https://cdn.glitch.com/b38eb9a2-8d3c-4e64-998d-6d0738b4c845%2Fplay.png?v=1616294545285"
      />
      <img id="icon-left" src="./assets/card-example/icons/left.png" />
      <img id="icon-right" src="./assets/card-example/icons/right.png" />
      <img
        id="paintandquest-preview"
        src="./assets/card-example/portfolio/paintandquest-preview.png"
      />
      <video
        id="paintandquest-video-mp4"
        autoplay="false"
        loop="true"
        src="https://cdn.glitch.com/d854003b-b32d-455a-98db-95fe418cab4c%2Fpaintandquest.mp4?v=1616389137177"
      ></video>
      <video
        id="paintandquest-video-webm"
        autoplay="false"
        loop="true"
        src="https://cdn.glitch.com/d854003b-b32d-455a-98db-95fe418cab4c%2Fpaintandquest.webm?v=1616389074156"
      ></video>
      <img
        id="coffeemachine-preview"
        src="./assets/card-example/portfolio/coffeemachine-preview.png"
      />
      <img
        id="peak-preview"
        src="./assets/card-example/portfolio/peak-preview.png"
      />

      <a-asset-item
        id="avatarModel"
        src="https://cdn.jsdelivr.net/gh/hiukim/[email protected]/examples/assets/card-example/softmind/scene.gltf"
      ></a-asset-item>
    </a-assets>

    <a-camera
      position="0 0 0"
      look-controls="enabled: false"
      cursor="fuse: false; rayOrigin: mouse;"
      raycaster="far: 10000; objects: .clickable"
    >
    </a-camera>

    <a-entity id="mytarget" mytarget mindar-image-target="targetIndex: 0">
      <a-plane
        src="#card"
        position="0 0 0"
        height="0.552"
        width="1"
        rotation="0 0 0"
      ></a-plane>

      <a-entity visible="false" id="portfolio-panel" position="0 0 -0.01">
        <a-text
          value="Portfolio"
          color="black"
          align="center"
          width="2"
          position="0 0.4 0"
        ></a-text>
        <a-entity id="portfolio-item0">
          <a-video
            id="paintandquest-video-link"
            webkit-playsinline
            playsinline
            width="1"
            height="0.552"
            position="0 0 0"
          ></a-video>
          <a-image
            id="paintandquest-preview-button"
            class="clickable"
            src="#paintandquest-preview"
            alpha-test="0.5"
            position="0 0 0"
            height="0.552"
            width="1"
          >
          </a-image>
        </a-entity>
        <a-entity id="portfolio-item1" visible="false">
          <a-image
            class="clickable"
            src="#coffeemachine-preview"
            alpha-test="0.5"
            position="0 0 0"
            height="0.552"
            width="1"
          >
          </a-image>
        </a-entity>
        <a-entity id="portfolio-item2" visible="false">
          <a-image
            class="clickable"
            src="#peak-preview"
            alpha-test="0.5"
            position="0 0 0"
            height="0.552"
            width="1"
          >
          </a-image>
        </a-entity>

        <a-image
          visible="false"
          id="portfolio-left-button"
          class="clickable"
          src="#icon-left"
          position="-0.7 0 0"
          height="0.15"
          width="0.15"
        ></a-image>
        <a-image
          visible="false"
          id="portfolio-right-button"
          class="clickable"
          src="#icon-right"
          position="0.7 0 0"
          height="0.15"
          width="0.15"
        ></a-image>
      </a-entity>

      <a-image
        visible="false"
        id="profile-button"
        class="clickable"
        src="#icon-profile"
        position="-0.42 -0.5 0"
        height="0.15"
        width="0.15"
        animation="property: scale; to: 1.2 1.2 1.2; dur: 1000; easing: easeInOutQuad; loop: true; dir: alternate"
      ></a-image>

      <a-image
        visible="false"
        id="web-button"
        class="clickable"
        src="#icon-web"
        alpha-test="0.5"
        position="-0.14 -0.5 0"
        height="0.15"
        width="0.15"
        animation="property: scale; to: 1.2 1.2 1.2; dur: 1000; easing: easeInOutQuad; loop: true; dir: alternate"
      ></a-image>

      <a-image
        visible="false"
        id="email-button"
        class="clickable"
        src="#icon-email"
        position="0.14 -0.5 0"
        height="0.15"
        width="0.15"
        animation="property: scale; to: 1.2 1.2 1.2; dur: 1000; easing: easeInOutQuad; loop: true; dir: alternate"
      ></a-image>

      <a-image
        visible="false"
        id="location-button"
        class="clickable"
        src="#icon-location"
        position="0.42 -0.5 0"
        height="0.15"
        width="0.15"
        animation="property: scale; to: 1.2 1.2 1.2; dur: 1000; easing: easeInOutQuad; loop: true; dir: alternate"
      ></a-image>

      <a-gltf-model
        id="avatar"
        rotation="0 0 0"
        position="0 -0.25 0"
        scale="0.004 0.004 0.004"
        src="#avatarModel"
      ></a-gltf-model>

      <a-text
        id="text"
        class="clickable"
        value=""
        color="black"
        align="center"
        width="2"
        position="0 -1 0"
        geometry="primitive:plane; height: 0.1; width: 2;"
        material="opacity: 0.5"
      ></a-text>
    </a-entity>
  </a-scene>
</div>
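A hedged sketch of the usual fix, reusing the ids from the snippet above. Note that in HTML, autoplay="false" still autoplays (autoplay is a boolean attribute, so its mere presence enables it); remove the attribute entirely and start playback only in the targetFound handler:

const target = document.querySelector('#mytarget');
const video = document.querySelector('#paintandquest-video-mp4');
target.addEventListener('targetFound', () => video.play());
target.addEventListener('targetLost', () => video.pause());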

Scanning screen does not come back after targetLost-event

The scanning screen provides clear GUI feedback to the user that scanning is going on; as such, it is very helpful. However, once a target match has been found, it won't come back.
Is there any way of making it show up again after a targetLost-event?
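One hedged workaround: when a custom uiScanning overlay is used (like the #example-scanning-overlay from the earlier issue, which already defines a hidden class), you can toggle the overlay yourself on the target events.

const target = document.querySelector('[mindar-image-target]');
const overlay = document.querySelector('#example-scanning-overlay');
target.addEventListener('targetFound', () => overlay.classList.add('hidden'));
target.addEventListener('targetLost', () => overlay.classList.remove('hidden'));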

setting a react starter project with mind-ar-js

Hi @hiukim, I challenged myself to use your library in a React project. I used jsDelivr to host the library on a CDN, and then created a component with the source code of example 1 from this repo. Unfortunately, I didn't succeed and got the following error:


Any tips to tackle this? 🙂
You can find my repo here.

Basic example doesn't display on iPhone

Hi,

When I try to see the live demo of the basic example on my iPhone (6S Plus, iOS 14.4.1), I stay on the waiting animation, and when I take a look at the debug console I see an error.

Any ideas?

Compile Target Images

Is there any way to compile images with Node.js, just by running a command in the console?
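There is no documented CLI, but as a heavily hedged sketch: the compiler behind the web tool lives in /src/image-target/compiler.js, so it can in principle be driven from Node with an image loader such as the canvas package. The import path, the Compiler method names, and the canvas interop below are all assumptions:

const fs = require('fs');
const { loadImage } = require('canvas'); // npm 'canvas' package (assumption)
const { Compiler } = require('mind-ar/src/image-target/compiler.js'); // path is an assumption

const compile = async (files, outPath) => {
  const images = await Promise.all(files.map((f) => loadImage(f)));
  const compiler = new Compiler();
  // Assumed API, mirroring what the web compiler tool appears to do.
  await compiler.compileImageTargets(images, (progress) => {
    console.log('progress:', progress);
  });
  fs.writeFileSync(outPath, Buffer.from(compiler.exportData()));
};

compile(['card.png'], 'targets.mind');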

Keeping the object in front of camera

Hi, I'm new to mind-ar-js and JavaScript. I have to create an AR experience that starts after the target has been detected and continues even after the target is lost. Specifically, I have a video playing, and when I lose the target the audio keeps playing but the video is no longer visible, so I want the whole a-entity to be retained on screen. This is a snippet of the code I'm using. I'll be glad for any help!

AFRAME.registerComponent('mytarget-one', {
        init: function () {
          this.el.addEventListener('targetFound', event => {
            console.log("target found");
              setTimeout(() => {
                
                  setTimeout(() => {
                    showInfo();
                  }, 100);
              }, 100);

          });
          this.el.addEventListener('targetLost', event => {
            console.log("target lost");
            //stopInfoOne();
          });
          this.el.emit('targetFound');
        }
});
<a-scene mindar="imageTargetSrc: multitargets.mind; showStats: false;" embedded color-space="sRGB" renderer="colorManagement: true, physicallyCorrectLights" vr-mode-ui="enabled: false" device-orientation-permission-ui="enabled: false">
       
        <a-assets>...</a-assets>

 <a-camera position="0 0 0" look-controls="enabled: false" cursor="fuse: false; rayOrigin: mouse;" raycaster="far: 10000; objects: .clickable"></a-camera>
<a-entity id="mytarget-one" mytarget-one mindar-image-target="targetIndex: 1">
          <a-video id="paintandquest-video-link_wear" webkit-playsinline playsinline muted autoplay width="1" height="0.552" position="0 0 0.2"  ></a-video>
          <a-image id="paintandquest-preview-button" class="clickable" src="#paintandquest-preview" alpha-test="0.5" position="0 0 0.25" height="0.552" width="1">
          </a-image>
          <a-image id="linkedin" class="clickable" src="#icon-profile" position="-0.42 -0.5 0" height="0.15" width="0.15"
        ></a-image>

        <a-image  id="facebook" class="clickable" src="#icon-web" alpha-test="0.5" position="-0.14 -0.5 0" height="0.15" width="0.15"
        ></a-image>

        <a-image  id="web" class="clickable" src="#icon-email"  position="0.14 -0.5 0" height="0.15" width="0.15"
        ></a-image>

        <a-image  id="contact" class="clickable" src="#icon-location"  position="0.42 -0.5 0" height="0.15" width="0.15"
        ></a-image>
      </a-entity>

Errors in code-example "Events Handling"

The source code example Events Handling in the documentation has a couple of mistakes:

  • The example on the page (not the one linked under demo) links to version [email protected] of the script. However, the unpause() function was only added in version 0.3.2
  • The code example arSystem.pause(true); // pause AR and video is incorrect - the parameter needs to be false to stop AR and freeze the video.

Keep up the good work. This is such a cool library!
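For reference, a hedged sketch of the corrected usage described in this report (the mindar-image-system name is an assumption about the 1.x A-Frame build):

const sceneEl = document.querySelector('a-scene');
const arSystem = sceneEl.systems['mindar-image-system'];
arSystem.pause(false); // per this report: false stops AR and freezes the video
arSystem.unpause();    // resumes; available from version 0.3.2 onward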

Recommended image size

Hi @hiukim, do you have any recommendations about the ideal minimum and maximum size (resolution) of the image that should be compiled as a target for efficient tracking?

Image target tool fault

Your image target tool is not working. After uploading the photo, nothing happens when I click the start button. I waited for around 30 minutes and nothing happened.

Tracking a smaller or farther image

Hi all,
Is it possible to track the image when it is farther away? I mean, to detect a target image it currently needs to be quite near, with the image taking up almost 50% of the camera screen. Is it possible to track the image from farther away, or at least to keep the 3D model visible when the camera moves farther away?
Thanks!!!

Any way to improve fps and tracker follow speed?

Hi,

I'm using mind-ar with an a-video as asset with no additional overlay or interactivity.

Image target specs:
656x656
892KB size

Video asset specs:
9 seconds video length
mp4 format
320x320
30 fps
643KB size

The image tracker is spot-on but somewhat slow to follow the image target, and the frame rate is low except on the iPhone, which gets a good fps.
I've tested on devices:
Asus Zenfone Max Pro M2 (< 15 Fps) Chrome Browser
Xiaomi Redmi 8 (< 15 Fps) Chrome Browser
Samsung Galaxy A71 (< 20 Fps) Samsung Internet Browser
Samsung Galaxy A7 (< 10 Fps) Chrome Browser
Samsung Galaxy Tab 8 (< 10 Fps) Chrome Browser
Apple Iphone 7 plus (< 45 Fps) Safari Browser

Is there any way to improve the fps and the tracker follow speed?

Thank you.

error on more than 2 image target

Hello,

When I add more than 2 images to the compiler and add the resulting targets.mind to my project, the project does not run and just shows "loading...".

Please help me solve the problem.

Transparent images

Hey @hiukim, how do images that are mostly transparent background work in MindAR? In JsArtoolkit5 the background becomes black, and then there are issues tracking the image on white or bright backgrounds.

1.0.0 doesn't work with aframe-animation-timeline-component?

I developed an AR experience using version 0.4.2 together with the aframe-animation-timeline-component. After updating to 1.0.0, everything stopped working: I don't get the loading spinner animation, and there are no errors in the console. Did something change?

Trouble accessing camera texture

Hi,

I am trying to capture a screenshot of the scene and camera.

When I tried to access the camera texture using:

document.querySelector('video');

This is returning a black screen.

Any ideas how to correct this issue?

Thanks
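A hedged sketch of one workaround: copy a frame of the camera <video> through a 2D canvas rather than reading the WebGL buffer (capturing the 3D scene on top of it would additionally require a renderer created with preserveDrawingBuffer):

const video = document.querySelector('video');   // MindAR's camera feed element
const canvas = document.createElement('canvas');
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
canvas.getContext('2d').drawImage(video, 0, 0);
const frame = canvas.toDataURL('image/png');     // base64 PNG of the camera frame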

Enhancement: Gyro take over

If you give me a few pointers to start, I will look at making the gyroscope take over for a limited time when the marker is lost.
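As a hedged starting point (plain browser API, nothing MindAR-specific), the deviceorientation event provides the rotation data such a fallback would need; the remaining work is blending it into the camera pose while the marker is lost:

window.addEventListener('deviceorientation', (e) => {
  // alpha/beta/gamma are the device rotation in degrees. Note that iOS
  // requires DeviceOrientationEvent.requestPermission() after a user gesture.
  console.log(e.alpha, e.beta, e.gamma);
});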

Camera is not really switched off after triggering arSystem.pause()

Calling arSystem.pause(false) switches off the camera feed in the A-Frame scene, but the camera remains active; at least on iOS, the status light stays on. For user-privacy reasons it would be good if the camera could be switched off completely during pause, or at least optionally, especially as the camera is also a drain on the battery.

image compiler tool not working

Hi @hiukim, I tried a couple of images before trying the default card.png that is provided with the repo. In all cases, the generated .mind file isn't being recognized. I also noticed a 2 KB size difference between the card.mind included in the example and the one I generate from the same image.

Here is one of my images:


Option for selecting external camera device

I am running the file in a mobile browser, but my phone is connected to an HMD (head-mounted display) which has a built-in camera.
I want the application to use the HMD's camera rather than the phone's normal camera.
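MindAR itself may not expose a camera-selection option, but the underlying choice is standard WebRTC. A hedged sketch (matching the HMD camera by label is a hypothetical heuristic):

async function pickHmdCamera() {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const cams = devices.filter((d) => d.kind === 'videoinput');
  const hmdCam = cams.find((d) => d.label.includes('HMD')) || cams[0];
  // Request that specific camera instead of the default one.
  return navigator.mediaDevices.getUserMedia({
    video: { deviceId: { exact: hmdCam.deviceId } },
  });
}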

Gratitude post

Hey @hiukim, I'm so glad I found this repo. You've done really impressive work. I'll explore some of the examples and will let you know how it goes. Cheers mate! ✌️

Issue trying new tensorflow version

I tested the new version with tensorflow, and I get this error:

A-Frame Version: 1.0.4 (Date 2020-02-05, Commit #2b359246)
mindar.prod.js:12858 three Version (https://github.com/supermedium/three.js): ^0.111.6
mindar.prod.js:12858 WebVR Polyfill Version: ^0.10.10
mindar.prod.js:12680 THREE.WebGLRenderer: WEBGL_depth_texture extension not supported.
get @ mindar.prod.js:12680
mindar.prod.js:12680 THREE.WebGLRenderer: OES_texture_float_linear extension not supported.
get @ mindar.prod.js:12680
mindar.prod.js:12658 video ready... <video autoplay muted playsinline style="position:​ absolute;​ top:​ 0px;​ left:​ -32px;​ z-index:​ -2;​ width:​ 384px;​ height:​ 512px;​" width="480" height="640"></video>​
mindar.prod.js:10332 Could not get context for WebGL version 2
mindar.prod.js:10348 1    
2        precision highp float;
3        precision highp int;
4        precision highp sampler2D;
5        varying vec2 resultUV;
6        
7        const vec2 halfCR = vec2(0.5, 0.5);
8    
9        struct ivec5
10       {
11         int x;
12         int y;
13         int z;
14         int w;
15         int u;
16       };
mindar.prod.js:10348 Fragment shader compilation failed.
mindar.prod.js:10348  17                                                                                                      
mindar.prod.js:10348 18       struct ivec6
19       {
20         int x;
21         int y;
22         int z;
23         int w;
24         int u;
25         int v;
26       };
27   
28       uniform float NAN;
29       
30         #define isnan(value) isnan_custom(value)
31         bool isnan_custom(float val) {
32           return (val > 0. || val < 1. || val == 0.) ? false : true;
33         }
34         bvec4 isnan_custom(vec4 val) {
35           return bvec4(isnan(val.x), isnan(val.y), isnan(val.z), isnan(val.w));
36         }
37       
38       
39         uniform float INFINITY;
40   
41         bool isinf(float val) {
42           return abs(val) == INFINITY;
43         }
44         bvec4 isinf(vec4 val) {
45           return equal(abs(val), vec4(INFINITY));
46         }
47       
48       
49         int round(float value) {
50           return int(floor(value + 0.5));
51         }
52   
53         ivec4 round(vec4 value) {
54           return ivec4(floor(value + vec4(0.5)));
55         }
56       
57   
58       int imod(int x, int y) {
59         return x - y * (x / y);
60       }
61   
62       int idiv(int a, int b, float sign) {
63         int res = a / b;
64         int mod = imod(a, b);
65         if (sign < 0. && mod != 0) {
66           res -= 1;
67         }
68         return res;
69       }
70   
71       //Based on the work of Dave Hoskins
72       //https://www.shadertoy.com/view/4djSRW
73       #define HASHSCALE1 443.8975
74       float random(float seed){
75         vec2 p = resultUV * seed;
76         vec3 p3  = fract(vec3(p.xyx) * HASHSCALE1);
77         p3 += dot(p3, p3.yzx + 19.19);
78         return fract((p3.x + p3.y) * p3.z);
79       }
80   
81       
82   vec2 uvFromFlat(int texNumR, int texNumC, int index) {
83     int texR = index / texNumC;
84     int texC = index - texR * texNumC;
85     return (vec2(texC, texR) + halfCR) / vec2(texNumC, texNumR);
86   }
87   vec2 packedUVfrom1D(int texNumR, int texNumC, int index) {
88     int texelIndex = index / 2;
89     int texR = texelIndex / texNumC;
90     int texC = texelIndex - texR * texNumC;
91     return (vec2(texC, texR) + halfCR) / vec2(texNumC, texNumR);
92   }
93   
94       
95   vec2 packedUVfrom2D(int texelsInLogicalRow, int texNumR,
96     int texNumC, int row, int col) {
97     int texelIndex = (row / 2) * texelsInLogicalRow + (col / 2);
98     int texR = texelIndex / texNumC;
99     int texC = texelIndex - texR * texNumC;
100    return (vec2(texC, texR) + halfCR) / vec2(texNumC, texNumR);
101  }
102  
103      
104  vec2 packedUVfrom3D(int texNumR, int texNumC,
105      int texelsInBatch, int texelsInLogicalRow, int b,
106      int row, int col) {
107    int index = b * texelsInBatch + (row / 2) * texelsInLogicalRow + (col / 2);
108    int texR = index / texNumC;
109    int texC = index - texR * texNumC;
110    return (vec2(texC, texR) + halfCR) / vec2(texNumC, texNumR);
111  }
112  
113    
114  
115      float sampleTexture(sampler2D textureSampler, vec2 uv) {
116        return texture2D(textureSampler, uv).r;
117      }
118    
119  
120      void setOutput(vec4 val) {
121        gl_FragColor = val;
122      }
123    
124  uniform sampler2D A;
125  uniform int offsetA;
126  
127      ivec3 getOutputCoords() {
128        ivec2 resTexRC = ivec2(resultUV.yx *
129                               vec2(160, 160));
130        int index = resTexRC.x * 160 + resTexRC.y;
131  
132        int b = index / 25500;
133        index -= b * 25500;
134  
135        int r = 2 * (index / 510);
136        int c = imod(index, 510) * 2;
137  
138        return ivec3(b, r, c);
139      }
140    
141  
142          
143      float getA(int row, int col) {
144        vec2 uv = (vec2(col, row) + halfCR) / vec2(1020.0, 100.0);
145        return sampleTexture(A, uv);
146      }
147    
148          float getA(int row, int col, int depth) {
149            return getA(col, depth);
150          }
151        
152      float getAAtOutCoords() {
153        ivec3 coords = getOutputCoords();
154        
155        return getA(coords.x, coords.y, coords.z);
156      }
157    
158  
159        ivec3 outCoordsFromFlatIndex(int index) {
160          int r = index / 102000; index -= r * 102000;int c = index / 1020; int d = index - c * 1020;
161          return ivec3(r, c, d);
162        }
163  
164        void main() {
165          ivec2 resTexRC = ivec2(resultUV.yx *
166            vec2(160, 160));
167          int index = 4 * (resTexRC.x * 160 + resTexRC.y);
168  
169          vec4 result = vec4(0.);
170  
171          for (int i=0; i<4; i++) {
172            int flatIndex = index + i;
173            ivec3 rc = outCoordsFromFlatIndex(flatIndex);
174            result[i] = getA(rc.x, rc.y, rc.z);
175          }
176  
177          gl_FragColor = result;
178        }
179      
mindar.prod.js:10348 Uncaught (in promise) Error: Failed to compile fragment shader.
    at mf (mindar.prod.js:10348)
    at dg.createProgram (mindar.prod.js:10911)
    at mindar.prod.js:11440
    at mindar.prod.js:11440
    at vb.getAndSaveBinary (mindar.prod.js:11440)
    at vb.runWebGLProgram (mindar.prod.js:11440)
    at vb.decode (mindar.prod.js:11440)
    at vb.getValuesFromTexture (mindar.prod.js:11440)
    at vb.readSync (mindar.prod.js:11440)
    at g.readSync (mindar.prod.js:4141)

Tested with a Wiko View, with Android 7.1.2. Chrome 87.0.4280.141.

Replace gpu.js with tensorflowjs

In view of the many unknown bugs coming from gpu.js, I'm trying to replace it with another library for the WebGL part.

I came up with this interesting idea of using tensorflow while discussing with a colleague. Tensorflow has a very solid foundation of APIs that utilize WebGL, so I decided to give it a try. Nope, we are not using it for machine learning. :D

If it works out, it will solve most of the issues.
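To illustrate the idea with a hedged toy (unrelated to the actual tracking code): tensorflowjs' webgl backend runs ordinary tensor math on the GPU, which is all this library needs from it.

import * as tf from '@tensorflow/tfjs';

tf.setBackend('webgl').then(() => {
  // Plain image processing, no machine learning involved:
  const img = tf.randomUniform([1, 256, 256, 1]); // fake grayscale frame
  const blurred = tf.avgPool(img, 3, 1, 'same');  // 3x3 box blur on the GPU
  console.log(blurred.shape);                     // [1, 256, 256, 1]
});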

Feature Request: hints for supported browser

Hello. This tool is great.

But if someone opens the page in an unsupported browser, such as an in-app browser, could there be a hint reminding the user to open a supported browser to view the effect?
