azure-samples / cognitive-services-quickstart-code

Code Examples used by the Quickstarts in the Cognitive Services Documentation

License: MIT License

Go 4.78% Java 6.79% JavaScript 7.98% Python 8.63% C# 11.98% Ruby 0.43% PHP 0.37% PowerShell 0.17% Shell 1.44% Dockerfile 0.16% HTML 0.08% C++ 0.28% Jupyter Notebook 56.78% CSS 0.11%
Topics: cognitive-services, sdk-samples, cognitiveservices

cognitive-services-quickstart-code's Introduction

page_type: sample
languages: go, python, javascript, csharp, php, ruby
products: azure
description: This repository contains the code examples used by the quickstarts in the Cognitive Services documentation
urlFragment: cognitive-services-quickstart-code

Code examples for Cognitive Services Quickstarts

This repository contains the code examples used by the quickstarts in the Cognitive Services documentation.

What are code examples?

Code examples are a collection of snippets whose primary purpose is to be demonstrated in the quickstart documentation. This code can compile and run, but it is not intended as a template for how a solution should be developed. The snippets are intentionally shortened and written in a script style to make their basic concepts easier to understand.

Most of these examples can be run independently of the steps in the quickstarts, while still demonstrating what each service offers.

Getting Started

These code snippets back the quickstarts found in the Cognitive Services documentation. They are not intended as standalone solutions or for production use. To learn how to use a Cognitive Service, start with its quickstart.

Running the code snippets

Choose the language and service you wish to run, navigate to the corresponding folder, and open the README. Each quickstart lists its prerequisites and run instructions at the top of its main execution file.

How this repo is structured

   .
    ├── python                   # Language of the code snippet
    │   ├── ComputerVision       # Cognitive Service
    │   └── ... 
    ├──  ...
    └── README.md

Data files (images, audio, video) should not be checked into the repo. Please add data files to the following central location: cognitive-services-sample-data-files

Samples

Samples (unlike these examples) are more complete, best-practices solutions for each of the snippets. They are better suited for integration into production code.

We offer samples for each language here:

Resources

cognitive-services-quickstart-code's People

Contributors

aahill, anatolip, chrishmsft, dargilco, dependabot[bot], der3318, diberry, erhopf, glharper, homezcx, laujan, linndaqun, longli0, luzhang06, magrefaat, microsoftopensource, mrbullwinkle, msyache, orenmichaely, ormichae, patrickfarley, roy-har, ryogok, sahithikkss, sahithikkss-zz, sanjeev3, v-jaswel, wiazur, yungshinlintw, yutongtie-msft


cognitive-services-quickstart-code's Issues

Use .Net SDK with V2

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [X] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

The SDK analyze functions (AnalyzeWithCustomModelAsync, AnalyzeWithCustomModelWithHttpMessagesAsync, AnalyzeWithCustomModelWithHttpMessagesAsync2) point to the Form Recognizer v1 endpoint. Is there a way to use the SDK with v2?

Any log messages given by the failure

When I use any of the parse functions, it returns a message saying that my model ID does not exist.
{"error":{"code":"2020","innerError":{"requestId":"052e3703-8beb-4373-acc1-80f62227fde7"},"message":"Model with 'id=7f5a3798-33f7-4479-b240-*********' not found."}}

Expected/desired behavior

OS and Version?

Windows 10, .net core 2.2.0

Versions

FormsRecognizer 0.8.0-preview
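As context for future readers: the v1 preview and v2 Form Recognizer REST endpoints differ only in the URL path, so one workaround while the SDK still targets v1 is to call the v2 REST route directly. A hypothetical helper sketching the two routes (the path segments are assumptions based on the public REST API versions, not on the .NET SDK):

```python
def analyze_url(endpoint: str, model_id: str, api_version: str = "v2.0") -> str:
    """Build the custom-model analyze URL for a given Form Recognizer API version."""
    base = endpoint.rstrip("/")
    if api_version.startswith("v1"):
        # Older preview route, the one the 0.8.0-preview SDK targets
        return f"{base}/formrecognizer/v1.0-preview/custom/models/{model_id}/analyze"
    # v2.x route
    return f"{base}/formrecognizer/{api_version}/custom/models/{model_id}/analyze"
```

The later azure-ai-formrecognizer client libraries target the v2 API directly, which removes the need for this kind of manual URL construction.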

OCR gives RestError

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Running readTextFromUrl() throws a new RestError, and printRecText just prints "Recognized Text:".

Any log messages given by the failure

at new RestError (/Users/yashjajoo/Documents/js_projects/music_extract/node_modules/@azure/ms-rest-js/dist/msRest.node.js:2460:28)
at /Users/yashjajoo/Documents/js_projects/music_extract/node_modules/@azure/ms-rest-js/dist/msRest.node.js:3662:37
at process._tickCallback (internal/process/next_tick.js:68:7)
code: undefined,

Expected/desired behavior

OS and Version?

macOS Big Sur

Versions

The analyze request is invalid. Ensure that each filter key is not null, empty or longer than 128 and that the length of all keys is less than 1024 characters

When running the analyze function using a list of keys to filter the data, I am getting the error below.

"{"error":{"code":"1010","message":"The analyze request is invalid. Ensure that each filter key is not null, empty or longer than 128 and that the length of all keys is less than 1024 characters."}}"

AnalyzeResult result = await formClient.AnalyzeWithCustomModelAsync(modelId, fileStream, contentType: "application/pdf", GetKeyListByProvince(province));

private IList<string> GetKeyListByProvince(Province province)
{
    var keys = new List<string>();
    switch (province)
    {
        default:
        case Province.BC:
            keys = new List<string>{
                "Name of Company:", "Incorporation Number:", "Business Number:", "Last Name, First Name, Middle Name:", "Jurisdiction:", "Delivery Address:", "Entity Status:"
            };
            break;
    }
    return keys;
}

It works fine when analyzing the data without the list of keys, but we need to narrow it down to only the ones required.
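A client-side pre-check can catch this before the request is sent. The limits below are taken from the error message quoted above (each key non-empty and at most 128 characters, and the combined length of all keys under 1024 characters); this is a sketch of that interpretation, not an official API:

```python
def validate_filter_keys(keys):
    """Return True if the filter keys satisfy the limits quoted in error 1010."""
    if any(k is None or k == "" or len(k) > 128 for k in keys):
        return False
    # Whether separators count toward the 1024 total is not documented here;
    # this assumes only the raw key lengths are summed.
    return sum(len(k) for k in keys) < 1024
```

Running the output of GetKeyListByProvince through a check like this before calling AnalyzeWithCustomModelAsync would at least localize which constraint is being violated.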

Emotions and Facial Hair Broken in the DetectFaceAttributes.py file

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [x] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

https://stackoverflow.com/questions/65246616/python-formatting-dictionaries-within-dictionary?noredirect=1#comment115349649_65246616

Any log messages given by the failure

Expected/desired behavior

Facial attributes detected:
Gender: male
Emotion: 0% anger, 0% contempt, 0% disgust, 0% fear, 0% happiness, 98% neutral, 0% sadness, 0% surprise

OS and Version?

Windows 10 pro

Versions

Mention any other details that might be useful

When face.face_attributes.emotion is printed, I get:
{'additional_properties': {}, 'anger': 0.001, 'contempt': 0.002, 'disgust': 0.0, 'fear': 0.0, 'happiness': 0.542, 'neutral': 0.455, 'sadness': 0.0, 'surprise': 0.0}

It would be more convenient to have it formatted like so:
Emotion: 0% anger, 0% contempt, 0% disgust, 0% fear, 0% happiness, 98% neutral, 0% sadness, 0% surprise

For more details please look at the StackOverflow link I put above
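The requested formatting can be produced with a few lines over the dictionary shown above (a sketch; the key names are taken from the printed dictionary, not from any official formatter):

```python
def format_emotions(emotions):
    """Render emotion scores (0.0-1.0) as a one-line percentage summary."""
    ordered = ["anger", "contempt", "disgust", "fear",
               "happiness", "neutral", "sadness", "surprise"]
    parts = ["{:.0f}% {}".format(emotions.get(name, 0.0) * 100, name)
             for name in ordered]
    return "Emotion: " + ", ".join(parts)

# Scores copied from the dictionary printed above
scores = {"anger": 0.001, "contempt": 0.002, "disgust": 0.0, "fear": 0.0,
          "happiness": 0.542, "neutral": 0.455, "sadness": 0.0, "surprise": 0.0}
print(format_emotions(scores))
```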

Face quickstart fails to compile

Please provide us with the following information:

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Run the Face quickstart sample; it fails to compile with the following error:

Code:
https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/Face/FaceQuickstart.cs

Error:
Severity Code Description Project Line File Suppression State
Error CS1503 Argument 2: cannot convert from 'System.Collections.Generic.List<System.Guid?>' to 'System.Collections.Generic.IList<System.Guid>' analyze-faces 411 C:\Users\gsutt\source\repos\AI-102-AIEngineer\19-face\C-Sharp\face-api\Program.cs N/A

Any log messages given by the failure

Severity Code Description Project Line File Suppression State
Error CS1503 Argument 2: cannot convert from 'System.Collections.Generic.List<System.Guid?>' to 'System.Collections.Generic.IList<System.Guid>' analyze-faces 411 C:\Users\gsutt\source\repos\AI-102-AIEngineer\19-face\C-Sharp\face-api\Program.cs N/A

Expected/desired behavior

Code should compile and the quickstart should work for all scenarios

OS and Version?

Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)
Windows 10, VS 2019, .net core 3.1

Versions

Windows 10, VS 2019, .net core 3.1

Mention any other details that might be useful

Replace the following lines with the lines below (the type arguments were stripped in the original post; they are reconstructed here from the compiler error, which asks for Guid rather than Guid?):

429:
List<Guid> sourceFaceIds = new List<Guid>();

571:
List<Guid> faceIds = new List<Guid>();

Issue raised on docs repo.
MicrosoftDocs/azure-docs#77515


Thanks! We'll be in touch soon.

TypeError: 'DetectedFace' object is not subscriptable

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

From https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/Face/FaceQuickstart.py
running this snippet:

import asyncio, io, glob, os, sys, time, uuid, requests
from urllib.parse import urlparse
from io import BytesIO
from PIL import Image, ImageDraw
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials
from azure.cognitiveservices.vision.face.models import TrainingStatusType, Person, SnapshotObjectType, OperationStatusType

face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))

# <snippet_frame>
# Detect a face in an image that contains a single face
single_face_image_url = 'https://raw.githubusercontent.com/Microsoft/Cognitive-Face-Windows/master/Data/detection1.jpg'
single_image_name = os.path.basename(single_face_image_url)
detected_faces = face_client.face.detect_with_url(url=single_face_image_url)
if not detected_faces:
        raise Exception('No face detected from image {}'.format(single_image_name))

# Convert width height to a point in a rectangle
def getRectangle(faceDictionary):
    rect = faceDictionary['face_rectangle']
    left = rect['left']
    top = rect['top']
    bottom = left + rect['height']
    right = top + rect['width']
    return ((left, top), (bottom, right))

# Download the image from the url
response = requests.get(single_face_image_url)
img = Image.open(BytesIO(response.content))

# For each face returned use the face rectangle and draw a red box.
draw = ImageDraw.Draw(img)
for face in detected_faces:
    draw.rectangle(getRectangle(face), outline='red')

# Display the image in the users default image browser.
img.show()

throws an error

Any log messages given by the failure

TypeError: 'DetectedFace' object is not subscriptable

Expected/desired behavior

The rectangle is drawn.

OS and Version?

Ubuntu 18.04

Versions

Python 3.6.8

$ pip3 freeze|grep azu
azure-cognitiveservices-vision-face==0.4.0
azure-common==1.1.23
azure-core==1.0.0b4
azure-storage-common==2.1.0
azure-storage-file==12.0.0b4
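In azure-cognitiveservices-vision-face 0.4.x, detect_with_url returns DetectedFace model objects rather than dictionaries, so the rectangle has to be read through attributes. A corrected sketch of the helper (it also straightens out the width/height arithmetic, which the quoted snippet had swapped):

```python
def get_rectangle(face):
    """Convert a DetectedFace's face_rectangle into two corner points."""
    rect = face.face_rectangle
    left = rect.left
    top = rect.top
    right = left + rect.width
    bottom = top + rect.height
    return ((left, top), (right, bottom))
```

With this version, `draw.rectangle(get_rectangle(face), outline='red')` draws the box as intended.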

Update Management client library code samples

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [x] feature request
- [x] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

New Libraries are published here:

C#
https://www.nuget.org/packages/Microsoft.Azure.Management.CognitiveServices/8.0.0-preview
Python
https://pypi.org/project/azure-mgmt-cognitiveservices/12.0.0/
Go
https://github.com/Azure/azure-sdk-for-go/tree/master/services/cognitiveservices/mgmt/2021-04-30/cognitiveservices
JavaScript
https://www.npmjs.com/package/@azure/arm-cognitiveservices/v/6.0.0
Java
https://search.maven.org/artifact/com.azure.resourcemanager/azure-resourcemanager-cognitiveservices/1.0.0-beta.2/jar

Any log messages given by the failure

Expected/desired behavior

OS and Version?

Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.

Custom Model Input Requirements are Incorrect

Please provide us with the following information:

Custom Model Input Requirements are Incorrect

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [x] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Navigate to https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-train-extract.md#L50
Review https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-train-extract.md#train-a-form-recognizer-model

Any log messages given by the failure

NA

Expected/desired behavior

Should delineate custom training data input requirements here:
https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/build-training-data-set

OS and Version?

NA

Versions

NA

Mention any other details that might be useful


Thanks! We'll be in touch soon.

Analyzing Brand Logo Error

Facing an error in the file ComputerVisionQuickstart.cs at line 232:
foreach (var brand in result.Brands){}
Not able to analyze an image with a logo on my local machine.

There are no changes to the rating of items in the reward system

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ ] bug report 

Minimal steps to reproduce

Any log messages given by the failure

Expected/desired behavior

OS and Version?

mac os

Versions

python3.6

Mention any other details that might be useful

When I used the Personalizer service, I found that giving a reward of 0 had no effect on the recommended result.


Thanks! We'll be in touch soon.

Unable to get the response

Unable to get a response. Execution never returns from this line:
response = await client.PostAsync(uri, content);

Please help.

Errors in the image classification Program.cs

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

cognitive-services-quickstart-code/dotnet/CustomVision/ImageClassification/Program.cs
Line 28: TrainingApi is written with a capital T, while the rest of the references use a lowercase t...

Any log messages given by the failure

Expected/desired behavior

This shows up before compilation in Visual Studio.

OS and Version?

Visual Studio 2019 on windows 10

Versions

Mention any other details that might be useful

There might be more issues in the code; for instance, the Main method is static while the other methods are not. Just try to compile it and please fix the errors.

Grouping with local images

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [x] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Changing IMAGE_BASE_URL to const string IMAGE_BASE_URL = "C:/Users/Net Çözümleri/source/repos/AlbumApp/Album/images/";

Any log messages given by the failure

Unhandled exception. System.AggregateException: One or more errors occurred. (Operation returned an invalid status code 'BadRequest')
 ---> Microsoft.Azure.CognitiveServices.Vision.Face.Models.APIErrorException: Operation returned an invalid status code 'BadRequest'
   at Microsoft.Azure.CognitiveServices.Vision.Face.FaceOperations.DetectWithUrlWithHttpMessagesAsync(String url, Nullable`1 returnFaceId, Nullable`1 returnFaceLandmarks, IList`1 returnFaceAttributes, String recognitionModel, Nullable`1 returnRecognitionModel, String detectionModel, Nullable`1 faceIdTimeToLive, Dictionary`2 customHeaders, CancellationToken cancellationToken)
   at Microsoft.Azure.CognitiveServices.Vision.Face.FaceOperationsExtensions.DetectWithUrlAsync(IFaceOperations operations, String url, Nullable`1 returnFaceId, Nullable`1 returnFaceLandmarks, IList`1 returnFaceAttributes, String recognitionModel, Nullable`1 returnRecognitionModel, String detectionModel, Nullable`1 faceIdTimeToLive, CancellationToken cancellationToken)
   at face.Program.DetectFaceRecognize(IFaceClient faceClient, String url, String recognition_model) in C:\Users\Net Çözümleri\source\repos\AlbumApp\face\Program.cs:line 80
   at face.Program.Group(IFaceClient client, String url, String recognition_model) in C:\Users\Net Çözümleri\source\repos\AlbumApp\face\Program.cs:line 118
   --- End of inner exception stack trace ---
   at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken)
   at System.Threading.Tasks.Task.Wait()
   at face.Program.Main(String[] args) in C:\Users\Net Çözümleri\source\repos\AlbumApp\face\Program.cs:line 46

C:\Users\Net Çözümleri\source\repos\AlbumApp\face\bin\Debug\net5.0\face.exe (process 11360) exited with code 0.
To automatically close the console when debugging stops, enable Tools->Options->Debugging->Automatically close the console when debugging stops.
Press any key to close this window . . .

Expected/desired behavior

Grouping local images

OS and Version?

Windows 10

Incorrect duplication of code lines in the python file CustomVisionQuickstart.py

Are code lines 80 - 84 in the Python file CustomVisionQuickstart.py incorrect duplicates of the code lines directly before them?


Please provide us with the following information:

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Visually inspect code lines 74 - 84 in the Python code file CustomVisionQuickstart.py.

Any log messages given by the failure

Expected/desired behavior

OS and Version?

Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.

'ComputerVisionClient' object has no attribute 'batch_read_file_in_stream'

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [x] documentation issue or request
- [x] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

recognize_handwriting_results = computervision_client.batch_read_file_in_stream(local_image_handwritten, raw=True)

Any log messages given by the failure

AttributeError: 'ComputerVisionClient' object has no attribute 'batch_read_file_in_stream'

Expected/desired behavior

OS and Version?

Operating System: Ubuntu 18.04.4 LTS
Kernel: Linux 5.3.0-1028-azure

Versions

azure-cognitiveservices-vision-computervision 0.7.0

Mention any other details that might be useful

I'm trying to perform handwriting recognition on a local image that I have. Using the example provided in ComputerVisionQuickstart.py, I was able to get the remote example to work, but when trying with the local image, I keep getting the error 'ComputerVisionClient' object has no attribute 'batch_read_file_in_stream'. Is there something I might be doing wrong, or does the ComputerVisionClient no longer support local images?
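In azure-cognitiveservices-vision-computervision 0.7.0 the batch_read_file* operations were removed in favor of the Read API; for a local file the replacement is read_in_stream followed by polling get_read_result. A sketch of the flow (client construction and result printing are omitted; verify the method names against the 0.7.0 reference before relying on this):

```python
import time

def read_local_image(client, path, poll_interval=1):
    """Submit a local image to the Read API and poll until the operation finishes."""
    with open(path, "rb") as image:
        response = client.read_in_stream(image, raw=True)
    # The operation ID is the last path segment of the Operation-Location header
    operation_id = response.headers["Operation-Location"].split("/")[-1]
    while True:
        result = client.get_read_result(operation_id)
        if result.status not in ("notStarted", "running"):
            return result
        time.sleep(poll_interval)
```

The returned result's analyze_result carries the recognized lines, mirroring what the remote-URL example in ComputerVisionQuickstart.py prints.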

Using face API with an already opened cv2 image

Please provide us with the following information:

How do I use the Azure Face API with an already opened cv2 image, without saving it again locally or on the web?
I tried looking for it in the documentation, but couldn't find a solution.
As far as I know and could find, there are only two ways of reading an image:

  1. detect_with_url: needs a URL to read the image.
  2. detect_with_stream: needs an image path to read the image.

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [x] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Expected/desired behavior

The face API should have a function or work-around to use an already opened image without having to save it.

OS and Version?

Windows 10 and Raspberry Pi


Thanks! We'll be in touch soon.

predictor.classify_image(project.id, publish_iteration_name, image_contents.read()) causes invalid iteration

'''
Code taken from https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/CustomVision/ImageClassification/CustomVisionQuickstart.py

Using instructions posted at: https://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/quickstarts/image-classification?tabs=visual-studio&pivots=programming-language-python

Note: both of the above contain errors!
'''

<snippet_imports>
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from azure.cognitiveservices.vision.customvision.training.models import ImageFileCreateBatch, ImageFileCreateEntry, Region
from msrest.authentication import ApiKeyCredentials
import time
</snippet_imports>

<snippet_creds>
# Replace with valid values
ENDPOINT = "https://resource-group-name.cognitiveservices.azure.com/"

training_key = "3589b< deleted from this >f5a8b95"
prediction_key = "c229a2c0e82b4f"

# prediction_resource_id = "0261b07a074339"
# This value was copied from the subscription id, on the overview blade of the prediction resource.
# This does NOT work.

prediction_resource_id = "/subscriptions/0261b17d74339/resourceGroups/vision_group/providers/Microsoft.CognitiveServices/accounts/vicegrou-Prediction"
# Taken from MicrosoftDocs/azure-docs#28445 -- THIS SEEMS TO WORK
</snippet_creds>

<snippet_auth>
credentials = ApiKeyCredentials(in_headers={"Training-key": training_key})
trainer = CustomVisionTrainingClient(ENDPOINT, credentials)
prediction_credentials = ApiKeyCredentials(in_headers={"Prediction-key": prediction_key})
predictor = CustomVisionPredictionClient(ENDPOINT, prediction_credentials)
</snippet_auth>

<snippet_create>
publish_iteration_name = "classifyModel"

credentials = ApiKeyCredentials(in_headers={"Training-key": training_key})

trainer = CustomVisionTrainingClient(ENDPOINT, credentials)

# Create a new project
print("Creating project...")
project = trainer.create_project("MyProject")
</snippet_create>

<snippet_tags>
# Make two tags in the new project
hemlock_tag = trainer.create_tag(project.id, "Hemlock")
cherry_tag = trainer.create_tag(project.id, "Japanese Cherry")
</snippet_tags>

<snippet_upload>
base_image_location = "/cognitive-services-python-sdk-samples/samples/vision/"
base_image_location = "/"
print("Adding images...")

image_list = []

for image_num in range(1, 11):
    file_name = "hemlock_{}.jpg".format(image_num)
    with open(base_image_location + "images/Hemlock/" + file_name, "rb") as image_contents:
        image_list.append(ImageFileCreateEntry(name=file_name, contents=image_contents.read(), tag_ids=[hemlock_tag.id]))

for image_num in range(1, 11):
    file_name = "japanese_cherry_{}.jpg".format(image_num)
    with open(base_image_location + "images/Japanese Cherry/" + file_name, "rb") as image_contents:
        image_list.append(ImageFileCreateEntry(name=file_name, contents=image_contents.read(), tag_ids=[cherry_tag.id]))

upload_result = trainer.create_images_from_files(project.id, ImageFileCreateBatch(images=image_list))
if not upload_result.is_batch_successful:
    print("Image batch upload failed.")
    for image in upload_result.images:
        print("Image status: ", image.status)
    exit(-1)
</snippet_upload>

<snippet_train>
print("Training...")
iteration = trainer.train_project(project.id)
while (iteration.status != "Completed"):
    iteration = trainer.get_iteration(project.id, iteration.id)
    print("Training status: " + iteration.status)
    time.sleep(1)

# The iteration is now trained. Publish it to the project endpoint
trainer.publish_iteration(project.id, iteration.id, publish_iteration_name, prediction_resource_id)
print("Done!")

'''
THIS IS A DUPLICATE AND CAUSES AN ERROR

# The iteration is now trained. Publish it to the project endpoint
trainer.publish_iteration(project.id, iteration.id, publish_iteration_name, prediction_resource_id)
print("Done!")
'''

# Now there is a trained endpoint that can be used to make a prediction
prediction_credentials = ApiKeyCredentials(in_headers={"Prediction-key": prediction_key})
predictor = CustomVisionPredictionClient(ENDPOINT, prediction_credentials)

with open(base_image_location + "images/Test/test_image_2.jpg", "rb") as image_contents:
    results = predictor.classify_image(
        project.id, publish_iteration_name, image_contents.read())
# The above line does NOT work. It causes an "Invalid iteration" error!

# Display the results.
for prediction in results.predictions:
    print("\t" + prediction.tag_name +
          ": {0:.2f}%".format(prediction.probability * 100))
</snippet_test>

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Using Visual Studio 2019 and Python 3.7 on a Windows 10 laptop, interfacing to Azure Custom Vision.
Using the set of example image files provided.
Ran the code in Visual Studio debug mode.

Any log messages given by the failure

The line results = predictor.classify_image(project.id, publish_iteration_name, image_contents.read())

returns the error: invalid iteration.

Expected/desired behavior

It should run and send back the prediction results

OS and Version?

Windows 10

Versions

Mention any other details that might be useful

There are also duplicate lines of code (a different problem); see the comments in the code above.


Thanks! We'll be in touch soon.

Cognitive Services C# sample code broken

Please provide us with the following information:

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

I created a Windows Forms application (.NET Framework 4.7) and used the sample code given at the link below: https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/ComputerVision/ComputerVisionQuickstart.cs

My purpose was to read text out of a local image. However, despite having a valid endpoint and subscription key, the call await client.ReadInStreamAsync(File.OpenRead(localFile), language: "en") never completes, and the program hangs at this point indefinitely. I tried the Cognitive Services REST API, which seems to work just fine.

Any log messages given by the failure

No Error given, the called function never completes.

Expected/desired behavior

The function should complete and provide read text results from the image.

OS and Version?

Windows 10

Versions

NA

Mention any other details that might be useful

No others.


Thanks! We'll be in touch soon.

Update SDK for Text Analytics service in the multi-language sample

Please provide us with the following information:

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/Multi-Service/text_analytics_cs.py

This still points to the older Text Analytics library.

Any log messages given by the failure

Expected/desired behavior

OS and Version?

Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.

ComputerVision Read API Call Produces Failure in Python

Please provide us with the following information:

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Run python ComputerVisionQuickstart.py

Any log messages given by the failure

Traceback (most recent call last):
File "ComputerVisionQuickstart.py", line 514, in
client_response = computervision_client.batch_read_file(remote_image_url, text_recognition_mode, raw=True)
File "/Users/philipsj16/anaconda3/envs/msazure/lib/python3.7/site-packages/azure/cognitiveservices/vision/computervision/operations/_computer_vision_client_operations.py", line 871, in batch_read_file
header_parameters.update(custom_headers)
ValueError: dictionary update sequence element #0 has length 1; 2 is required

Expected/desired behavior

Completing the tutorial at: https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts-sdk/python-sdk

OS and Version?

macOS (Mojave)

Versions

10.14.6; Python 3.7.4

Mention any other details that might be useful

I believe the issue is that the call to batch_read_file() is not correctly passing the 'text_recognition' mode; it should be passed as a keyword argument instead.

Thanks! We'll be in touch soon.

Getting Error when trying to Upload as a stream. Documentation missing?

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [X] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Create a FaceClient with an incoming IFormFile, copy the file into a MemoryStream, and try to upload it through DetectWithStreamAsync().

Any log messages given by the failure

Image size is too small (Error: InvalidImageSize)

Expected/desired behavior

Uploading as a stream should work, to make it easier to upload. Or at least give me some information about what went wrong.

OS and Version?

Windows 10 Pro

Versions

1903

Mention any other details that might be useful

I'm not sure if this is a bug (I think I'm doing it wrong). The main problem is that nowhere on the internet (not in the documentation either) could I find an example where someone uploads the image as a stream. Ideally I'd prefer not to save it to cloud Blob Storage and use the URL, but rather get the image straight from the frontend, turn it into a stream, and call Detect. Could an example using a Stream, without a filePath, be added?

Thank you
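One common cause of InvalidImageSize with an in-memory stream (this is a guess at the reporter's situation, not a confirmed diagnosis) is that after copying the file into the stream its position sits at the end, so the service receives zero bytes. The same idea in a Python sketch using io.BytesIO:

```python
import io

def rewind_for_upload(stream):
    # After writing an image into an in-memory stream, its read position is
    # at the end; seek back to 0 so the whole image is actually sent.
    stream.seek(0)
    return stream
```

In the C# case, the equivalent would be resetting the MemoryStream's Position to 0 before calling DetectWithStreamAsync.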


AttributeError: 'ComputerVisionClient' object has no attribute 'batch_read_file_in_stream'

Hi Azure teams. I'm trying to read an OCR result from a local image. I can read an OCR result from a remote URL normally. I then followed the code in "https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/ComputerVision/ComputerVisionQuickstart.py" -- "Batch Read File - local" -- but "AttributeError: 'ComputerVisionClient' object has no attribute 'batch_read_file_in_stream'" occurs.
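For what it's worth, current releases of azure-cognitiveservices-vision-computervision renamed the local-file operation to read_in_stream. A sketch under that assumption; the `read_local_text` wrapper is ours:

```python
def read_local_text(computervision_client, image_path):
    # Open the local image in binary mode and pass the stream; newer SDK
    # versions renamed batch_read_file_in_stream to read_in_stream.
    with open(image_path, "rb") as image_stream:
        return computervision_client.read_in_stream(image_stream, raw=True)
```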

Always returns error code "400 InvalidImageSize" when using the Computer Vision API

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Any log messages given by the failure

Captions: 
'Bern Collaco et al. posing for a photo' with confidence 0.989732
Status code 400, {"code":"InvalidImageSize","requestId":"b90ca72b-bcd2-4af2-9437-d184fb7e59ce","message":"Image must be at least 50 pixels in width and height"}
null
com.microsoft.azure.cognitiveservices.vision.computervision.models.ComputerVisionErrorException: Status code 400, {"code":"InvalidImageSize","requestId":"b90ca72b-bcd2-4af2-9437-d184fb7e59ce","message":"Image must be at least 50 pixels in width and height"}
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:500)
	at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:481)
	at com.microsoft.rest.ServiceResponseBuilder.build(ServiceResponseBuilder.java:122)
	at com.microsoft.azure.AzureResponseBuilder.build(AzureResponseBuilder.java:56)
	at com.microsoft.azure.cognitiveservices.vision.computervision.implementation.ComputerVisionImpl.recognizePrintedTextInStreamDelegate(ComputerVisionImpl.java:833)
	at com.microsoft.azure.cognitiveservices.vision.computervision.implementation.ComputerVisionImpl.access$400(ComputerVisionImpl.java:71)
	at com.microsoft.azure.cognitiveservices.vision.computervision.implementation.ComputerVisionImpl$14.call(ComputerVisionImpl.java:820)
	at com.microsoft.azure.cognitiveservices.vision.computervision.implementation.ComputerVisionImpl$14.call(ComputerVisionImpl.java:816)
	at rx.internal.operators.OnSubscribeMap$MapSubscriber.onNext(OnSubscribeMap.java:69)
	at retrofit2.adapter.rxjava.RxJavaCallAdapterFactory$RequestArbiter.request(RxJavaCallAdapterFactory.java:173)
	at rx.Subscriber.setProducer(Subscriber.java:211)
	at rx.internal.operators.OnSubscribeMap$MapSubscriber.setProducer(OnSubscribeMap.java:102)
	at retrofit2.adapter.rxjava.RxJavaCallAdapterFactory$CallOnSubscribe.call(RxJavaCallAdapterFactory.java:152)
	at retrofit2.adapter.rxjava.RxJavaCallAdapterFactory$CallOnSubscribe.call(RxJavaCallAdapterFactory.java:138)
	at rx.Observable.unsafeSubscribe(Observable.java:10142)
	at rx.internal.operators.OnSubscribeMap.call(OnSubscribeMap.java:48)
	at rx.internal.operators.OnSubscribeMap.call(OnSubscribeMap.java:33)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48)
	at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30)
	at rx.Observable.subscribe(Observable.java:10238)
	at rx.Observable.subscribe(Observable.java:10205)
	at rx.observables.BlockingObservable.blockForSingle(BlockingObservable.java:444)
	at rx.observables.BlockingObservable.single(BlockingObservable.java:341)
	at com.microsoft.azure.cognitiveservices.vision.computervision.implementation.ComputerVisionImpl$ComputerVisionRecognizePrintedTextInStreamParameters.execute(ComputerVisionImpl.java:878)
	at russiavk.module.ocr.api.azure.OCRByLocal(azure.java:79)
	at russiavk.module.ocr.api.azure.main(azure.java:164)

Minimal steps to reproduce

package russiavk.module.ocr.api;

import com.microsoft.azure.cognitiveservices.vision.computervision.*;
import com.microsoft.azure.cognitiveservices.vision.computervision.models.*;

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.file.Files;

import java.util.ArrayList;
import java.util.List;
import russiavk.module.ocr.base.OCRBase;

public class azure {//extends OCRBase 
	String  SECRET_KEY="**********************";
	String END_POINT="https://regking.cognitiveservices.azure.com/";


//	@Override
//	public OcrClient initAccountData(String... args) { 
//	}

//	@Override
	protected String resultProcessors(Object result) {
		// Print results of local image
		System.out.println();
		System.out.println("Recognizing printed text from a local image with OCR ...");
		System.out.println("\nLanguage: " + ((OcrResult) result).language());
		System.out.printf("Text angle: %1.3f\n", ((OcrResult) result).textAngle());
		System.out.println("Orientation: " + ((OcrResult) result).orientation());

		boolean firstWord = true;
		// Gets entire region of text block
		for (OcrRegion reg : ((OcrResult) result).regions()) {
			// Get one line in the text block
			for (OcrLine line : reg.lines()) {
				for (OcrWord word : line.words()) {
					// get bounding box of first word recognized (just to demo)
					if (firstWord) {
						System.out.println("\nFirst word in first line is \"" + word.text() + "\" with  bounding box: "
								+ word.boundingBox());
						firstWord = false;
						System.out.println();
					}
					System.out.print(word.text() + " ");
				}
				System.out.println();
			}
		}
		return null;

	}

//	@Override
	public String OCRByLocal(String filePath, Object options) {
		ComputerVisionClient compVisClient = ComputerVisionManager.authenticate(SECRET_KEY)
				.withEndpoint(END_POINT);

		// END - Create an authenticated Computer Vision client.

		System.out.println("\nAzure Cognitive Services Computer Vision - Java Quickstart Sample");

		// Analyze local and remote images
		AnalyzeLocalImage(compVisClient);

		// Recognize printed text with OCR for a local and remote (URL) image
		// RecognizeTextOCRLocal(compVisClient);
		try {
			byte[] localImageBytes = Files.readAllBytes(new File(filePath).toPath());

			// Recognize printed text in local image
			OcrResult ocrResultLocal = compVisClient.computerVision().recognizePrintedTextInStream()
					.withDetectOrientation(true).withImage(localImageBytes).withLanguage(OcrLanguages.EN).execute();
			// Print results of local image
			System.out.println();
			System.out.println("Recognizing printed text from a local image with OCR ...");
			System.out.println("\nLanguage: " + ocrResultLocal.language());
			System.out.printf("Text angle: %1.3f\n", ocrResultLocal.textAngle());
			System.out.println("Orientation: " + ocrResultLocal.orientation());

			boolean firstWord = true;
			// Gets entire region of text block
			for (OcrRegion reg : ocrResultLocal.regions()) {
				// Get one line in the text block
				for (OcrLine line : reg.lines()) {
					for (OcrWord word : line.words()) {
						// get bounding box of first word recognized (just to demo)
						if (firstWord) {
							System.out.println("\nFirst word in first line is \"" + word.text()
									+ "\" with  bounding box: " + word.boundingBox());
							firstWord = false;
							System.out.println();
						}
						System.out.print(word.text() + " ");
					}
					System.out.println();
				}
			}
			return resultProcessors(ocrResultLocal);
		} catch (Exception e) {
			System.out.println(e.getMessage());
			e.printStackTrace();
			return null;

		}

	}

	public static void AnalyzeLocalImage(ComputerVisionClient compVisClient) {
		/*
		 * Analyze a local image:
		 *
		 * Set a string variable equal to the path of a local image. The image path
		 * below is a relative path.
		 */
		String pathToLocalImage = "D:/asd.jpg";
		String pathToRemoteImage = "https://github.com/Azure-Samples/cognitive-services-sample-data-files/raw/master/ComputerVision/Images/faces.jpg";

		// This list defines the features to be extracted from the image.
		List<VisualFeatureTypes> featuresToExtractFromLocalImage = new ArrayList<>();
		featuresToExtractFromLocalImage.add(VisualFeatureTypes.DESCRIPTION);
		featuresToExtractFromLocalImage.add(VisualFeatureTypes.CATEGORIES);
		featuresToExtractFromLocalImage.add(VisualFeatureTypes.TAGS);
		featuresToExtractFromLocalImage.add(VisualFeatureTypes.FACES);
		featuresToExtractFromLocalImage.add(VisualFeatureTypes.ADULT);
		featuresToExtractFromLocalImage.add(VisualFeatureTypes.COLOR);
		featuresToExtractFromLocalImage.add(VisualFeatureTypes.IMAGE_TYPE);
		// Need a byte array for analyzing a local image.
		File rawImage = new File(pathToLocalImage);
		byte[] imageByteArray = null;
		try {
			imageByteArray = Files.readAllBytes(rawImage.toPath());
		} catch (IOException e) {
			// TODO Auto-generated catch block
			e.printStackTrace();
		}

		// Call the Computer Vision service and tell it to analyze the loaded image.
//		ImageAnalysis analysis = compVisClient.computerVision().analyzeImageInStream().withImage(imageByteArray)
//				.withVisualFeatures(featuresToExtractFromLocalImage).execute();

		ImageAnalysis analysis = compVisClient.computerVision().analyzeImage().withUrl(pathToRemoteImage)
				.withVisualFeatures(featuresToExtractFromLocalImage).execute();

		// Display image captions and confidence values.
		System.out.println("\nCaptions: ");
		for (ImageCaption caption : analysis.description().captions()) {
			System.out.printf("\'%s\' with confidence %f\n", caption.text(), caption.confidence());
		}
	}

	//
	public static void main(String[] args) {
		try {
			System.out.println(new azure().OCRByLocal(
					"",
					null));

		} catch (Exception e) {
			e.printStackTrace();
		}
	}
}

OS and Version?

Windows 10.

Versions

azure-cognitiveservices-computervision1.0.2-beta


Thanks! We'll be in touch soon.
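The quoted error message states the constraint directly: the service rejects images under 50 pixels in width or height (an empty or wrong file path producing a tiny/empty byte array would also trigger it). A client-side guard along these lines avoids the round trip; the 50-pixel minimum is taken from the error message above:

```python
def image_size_ok(width, height, min_side=50):
    """Computer Vision OCR requires at least 50 px in both dimensions."""
    return width >= min_side and height >= min_side
```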

(Invalid URI: The URI scheme is not valid)

Please provide us with the following information:

This issue is for a: (mark with an x)

(Invalid URI: The URI scheme is not valid)
I just want to know what this URL refers to, because it keeps giving me an "invalid URL" error, and I couldn't find any comment about what this URL refers to or should lead to.
The URL is this one:
(https://csdx.blob.core.windows.net/resources/Face/Images/
![prob 1](https://user-images.githubusercontent.com/73299367/104908599-65ea9580-598f-11eb-9d10-bdc8d3ee5a35.png)
)
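That blob URL is a base address the Face quickstart combines with individual sample image file names; on its own it does not point at an image, which is why the service reports an invalid URI. A sketch of the intended usage (the file name detection1.jpg is an illustrative example, not necessarily the one the reporter needs):

```python
IMAGE_BASE_URL = "https://csdx.blob.core.windows.net/resources/Face/Images/"

def image_url(file_name):
    # The base URL alone is not a valid image; append a file name first.
    return IMAGE_BASE_URL + file_name
```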

Minimal steps to reproduce

Any log messages given by the failure

Expected/desired behavior

OS and Version?

Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.

Variable not defined in Python Face API sample

Please provide us with the following information:

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

# <snippet_frame>
# Detect a face in an image that contains a single face
single_face_image_url = 'https://raw.githubusercontent.com/Microsoft/Cognitive-Face-Windows/master/Data/detection1.jpg'
single_image_name = os.path.basename(single_face_image_url)
detected_faces = face_client.face.detect_with_url(url=single_face_image_url)
if not detected_faces:
	raise Exception('No face detected from image {}'.format(single_image_name))

# Convert width height to a point in a rectangle
def getRectangle(faceDictionary):
    rect = faceDictionary.face_rectangle
    left = rect.left
    top = rect.top
    bottom = left + rect.height
    right = top + rect.width
    return ((left, top), (bottom, right))

# Download the image from the url
response = requests.get(img_url)
img = Image.open(BytesIO(response.content))

# For each face returned use the face rectangle and draw a red box.
draw = ImageDraw.Draw(img)
for face in detected_faces:
    draw.rectangle(getRectangle(face), outline='red')

# Display the image in the users default image browser.
img.show()
# </snippet_frame>

Any log messages given by the failure

NameError: name 'img_url' is not defined

Expected/desired behavior

This line

response = requests.get(img_url)

should be changed to

response = requests.get(single_face_image_url)

Thanks! We'll be in touch soon.
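Applying the reporter's fix, and noting that the quoted helper also mixes up width and height when computing the opposite corner, a self-contained corrected sketch (the stub-free helper below is our rewrite, not the published sample):

```python
def get_rectangle(face):
    """Return ((left, top), (right, bottom)) for PIL's ImageDraw.rectangle."""
    rect = face.face_rectangle
    left = rect.left
    top = rect.top
    right = left + rect.width    # right edge = left + width
    bottom = top + rect.height   # bottom edge = top + height
    return ((left, top), (right, bottom))

# ...and download with the variable that is actually defined:
# response = requests.get(single_face_image_url)
```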

QnaMaker QuickStart Code: generate_answer: ErrorResponseException: (BadArgument) Invalid input. See details.


This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

After creating a knowledge base in QnA Maker, using the sample code under the preview SDK:

def download_kb(client, kb_id):
    print("Downloading knowledge base...")
    kb_data = client.knowledgebase.download(kb_id=kb_id, environment="Prod")
    print("Downloaded knowledge base. It has {} QnAs.".format(len(kb_data.qna_documents)))
download_kb (client=client, kb_id=kb_id)

Downloading knowledge base...
Downloaded knowledge base. It has 9 QnAs.

But when I try to generate an answer, it gives an error message:

def generate_answer(client, kb_id):
    print ("Querying knowledge base...")

    listSearchResults = client.knowledgebase.generate_answer(kb_id, QueryDTO(question = "How do I manage my knowledgebase?"))

    for i in listSearchResults.answers:
        print(f"Answer ID: {i.id}.")
        print(f"Answer: {i.answer}.")
        print(f"Answer score: {i.score}.")

generate_answer(client=client, kb_id=kb_id)

Any log messages given by the failure

ErrorResponseException: (BadArgument) Invalid input. See details.
---------------------------------------------------------------------------
ErrorResponseException                    Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_15788/4283219011.py in <module>
      9         print(f"Answer score: {i.score}.")
     10 
---> 11 generate_answer(client=client, kb_id=kb_id)

~\AppData\Local\Temp/ipykernel_15788/4283219011.py in generate_answer(client, kb_id)
      2     print ("Querying knowledge base...")
      3 
----> 4     listSearchResults = client.knowledgebase.generate_answer(kb_id, QueryDTO(question = "How do I manage my knowledgebase?"))
      5 
      6     for i in listSearchResults.answers:

~\AppData\Local\Programs\Python\Python39\lib\site-packages\azure\cognitiveservices\knowledge\qnamaker\operations\knowledgebase_operations.py in generate_answer(self, kb_id, generate_answer_payload, custom_headers, raw, **operation_config)
    520 
    521         if response.status_code not in [200]:
--> 522             raise models.ErrorResponseException(self._deserialize, response)
    523 
    524         deserialized = None

ErrorResponseException: (BadArgument) Invalid input. See details.

Expected/desired behavior

OS and Version?

Windows 11 Version 21H2, Build 22000.120, python 3.9.6

Versions

client.api_version

'v5.0-preview.1'

Mention any other details that might be useful


OCR-Print Text-Display Text Overlay with Image

Hi there,

Goal

I am trying to detect text in the image and locate where the text is.

Issue

Current code to display the image and overlay it with the extracted text cannot handle the image with angles.
plt.figure(figsize=(5, 5))
image = Image.open(BytesIO(requests.get(image_url).content))
ax = plt.imshow(image, alpha=0.5)
for word in word_infos:
    bbox = [int(num) for num in word["boundingBox"].split(",")]
    text = word["text"]
    origin = (bbox[0], bbox[1])
    patch = Rectangle(origin, bbox[2], bbox[3], fill=False, linewidth=2, color='y')
    ax.axes.add_patch(patch)
    plt.text(origin[0], origin[1], text, fontsize=20, weight="bold", va="top")
plt.show()
plt.axis("off")

Example

The image example I use is:
"https://ivc.ischool.utexas.edu/VizWiz_visualization_img/VizWiz_train_00013129.jpg"


CRITICAL lines of code are DUPLICATED !!!

Observed:
Lines 18 & 19 are identical to lines 27 & 28. Why does Microsoft publish code it has never actually tested? How can so many years go by with no one fixing such blunders?

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Any log messages given by the failure

Expected/desired behavior

OS and Version?

Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.

Computer Vision SDK for iOS

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [x] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

I want to use OCR on iOS via the Computer Vision SDK, but the documentation doesn't contain Objective-C/Swift samples for iOS.

Expected/desired behavior

I hope there is a computer vision sdk for iOS.
Thanks.

"'this.Endpoint' cannot be null." -> DOTNET C#

I downloaded the example and tried to build the app, but when I run it, I get this error:

  Name Value Type
  Message "'this.Endpoint' cannot be null." string


What can I do? I've reviewed all my keys but can't find a solution.
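A "cannot be null" endpoint error usually means the endpoint setting (not the key) was never supplied to the client. A hedged sketch of a fail-fast check; the environment-variable name below is illustrative, not necessarily the one this sample reads:

```python
import os

def require_env(name):
    # Fail fast with an actionable message instead of a late
    # "'this.Endpoint' cannot be null." error inside the client.
    value = os.environ.get(name)
    if not value:
        raise RuntimeError("Set the %s environment variable before running the sample." % name)
    return value

# endpoint = require_env("COMPUTER_VISION_ENDPOINT")  # illustrative name
```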

Train model has been running for the past 30 minutes with only 5 training images per tag (2 tags)

Please provide us with the following information:

This issue is for a: (mark with an x)

- [X] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

I tried to train a model with 2 tags and 5 images per tag; training has been running for the past 30 minutes without any sign of completing.

Any log messages given by the failure

Expected/desired behavior

Training on 10 images should not take this long.

OS and Version?

Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.

URGENT: the dreaded "ModuleNotFound: Azure" error is being thrown when executing the from-import lines of code !!!

  • bug report -> please search issues before submitting
  • feature request
  • documentation issue or request
  • regression (a behavior that used to work and stopped in a new release)

### Minimal steps to reproduce
> 
START VSCode;
INSTALL Python extensions (which will INCLUDE everything to run Jupyter Notebooks in a project) in VSCode;
DEFINE a python virtual environment for your project;
INSTALL Azure CognitiveServices  "custom vision" package into that virtual environment; 
(pip install azure-cognitiveservices-vision-customvision)
COPY the following line into a notebook cell: 
       from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
EXECUTE the cell 

=> RESULT: VSCode will throw the following exception: 
ModuleNotFoundError: No module named 'azure'

### Any log messages given by the failure
>ModuleNotFoundError: No module named 'azure'

### Expected/desired behavior
> the desired class is SUCCESSFULLY imported into the notebook

### OS and Version?
> Win10pro (all auto-patching CURRENT)

### Mention any other details that might be useful
> COPIED & PASTED code VERBATIM directly from the following "QuickStart" web page at docs.microsoft.com: 
https://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/quickstarts/image-classification?tabs=visual-studio&pivots=programming-language-python

> ---------------------------------------------------------------
> Thanks! We'll be in touch soon.
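This symptom almost always means the notebook kernel is a different interpreter than the one pip installed the package into (a likely cause here, though not confirmed from the report). A quick check to run in a notebook cell:

```python
import sys

# If this path is not inside your project's virtual environment, the
# kernel is not the interpreter you ran "pip install
# azure-cognitiveservices-vision-customvision" in; reselect the kernel.
print(sys.executable)
```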

OCR not working, run time error

getting error: https://myaccount/cognitiveservices.azure.com/vision/v3.1/read/analyze?language=en 400 (Bad Request)

I am also getting a subsequent error:
Uncaught (in promise) Error: The file submitted couldn't be parsed. This can be due to one of the following reasons: the file format is not supported ( Supported formats include JPEG, PNG, BMP, PDF and TIFF), the file is corrupted or password protected.
at new RestError (restError.js:7)
at deserializationPolicy.js:92

Kindly help.
Also, it would be great if runnable TypeScript code were available for this integration as well.
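Note the endpoint in the quoted 400 URL has a slash where a dot belongs (https://myaccount/cognitiveservices.azure.com/...), which would send the request to the wrong host. A small sanity check along these lines (a sketch; the expected host suffix is an assumption based on typical Cognitive Services resource endpoints):

```python
from urllib.parse import urlparse

def looks_like_cognitive_endpoint(endpoint):
    # A resource endpoint is a host such as
    # https://<resource-name>.cognitiveservices.azure.com, so the service
    # domain must be part of the host, not the URL path.
    host = urlparse(endpoint).netloc
    return host.endswith(".cognitiveservices.azure.com")
```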

operations\_apps_operations.py - Error

When I try to run the Python code, it fails with the error below. I have the correct key and endpoint URL set.

https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/LUIS/python-sdk-authoring-prediction/application_quickstart.py

Creating application...
Traceback (most recent call last):
File "c:/zz Temp 01/py luis/pyluis.py", line 181, in <module>
app_id, app_version = create_app()
File "c:/zz Temp 01/py luis/pyluis.py", line 49, in create_app
culture=app_locale))
File "C:\Users\zzzz\AppData\Local\Programs\Python\Python37-32\lib\site-packages\azure\cognitiveservices\language\luis\authoring\operations_apps_operations.py", line 85, in add
raise models.ErrorResponseException(self._deserialize, response)
azure.cognitiveservices.language.luis.authoring.models._models_py3.ErrorResponseException: Operation returned an invalid status code 'Access Denied'

Cognitive Services - Form Recognizer sample code in Azure documentation

Please provide us with the following information:

The code samples provided for Form Recognizer lack prerequisite information.
The 'Setting up' instructions are inadequate, as there is no information on Python setup for Azure.
The Python interpreter fails to recognize the Azure libraries in Visual Studio Code.
Additional setup instructions are needed to guide users through the process.
Detailed setup documentation is needed for partners.

I get the following error:

Traceback (most recent call last):
File "/Users/visiothoughts/Documents/Visiothoughts/Work-Project/AI-Moto/com/ai/Bill-Recogniser/RecogniseFormContent.py", line 5, in <module>
from azure.core.exceptions import ResourceNotFoundError
ModuleNotFoundError: No module named 'azure'

https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/client-library?tabs=ga%2Cv2-0&pivots=programming-language-python

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [x] feature request
- [X] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Install the Form Recognizer code in Visual Studio Code.

Run pip3 install azure-ai-formrecognizer
----pip 21.0 from /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/pip (python 3.9)
python --version
----Python 3.9.1

Any log messages given by the failure

Traceback (most recent call last):
File "/Users/visiothoughts/Documents/Visiothoughts/Work-Project/AI-Moto/com/ai/Bill-Recogniser/RecogniseFormContent.py", line 5, in <module>
from azure.core.exceptions import ResourceNotFoundError
ModuleNotFoundError: No module named 'azure'

Expected/desired behavior

Expected the Form Recognizer code to run as per the code snippets in the document
https://docs.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/client-library?tabs=ga%2Cv2-0&pivots=programming-language-python

Screen Shot 2021-01-27 at 23 00 46


OS and Version?

Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)
macOS Sierra

Versions

Sierra

Mention any other details that might be useful


Thanks! We'll be in touch soon.

UnsupportedMediaType issue

Attempting to send a .PDF file through Postman, which is resulting in the following error:

"{"value":{"error":{"code":"UnsupportedMediaType","message":"In case of HTML form data, the multipart request must contain a document with a media type of - 'application/pdf', 'image/jpeg' or 'image/png'."}},"formatters":[],"contentTypes":[],"statusCode":415}"

When I output the ContentType of the file, it's "application/pdf".

private async Task AnalyzePdfForm(
            IFormRecognizerClient formClient, Guid modelId, IFormFile pdfFormFile)
        {
            if (pdfFormFile == null)
            {
                throw new Exception("Invalid PDF file");
            }
            Console.WriteLine(pdfFormFile.ContentType);
            try
            {
                using (Stream stream = pdfFormFile.OpenReadStream())
                {
                    AnalyzeResult result = await formClient.AnalyzeWithCustomModelAsync(modelId, stream, contentType: "application/pdf");
                    Console.WriteLine("\nExtracted data from:" + pdfFormFile);
                    DisplayAnalyzeResult(result);
                }
            }
            catch (ErrorResponseException e)
            {
                Console.WriteLine("Analyze PDF form : " + e.Message);
            }
            catch (Exception ex)
            {
                Console.WriteLine("Analyze PDF form : " + ex.Message);
            }
        }

Bad Class File ms-azure-cs-faceapi // Class file has wrong version 56 should be 52


This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [x] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Concerns the java/Face API Vision Quickstart.

I tried to follow the steps in your README:
I created a folder with FaceQuickstart.java plus a /lib folder with the dependencies:
+ ms-azure-cs-faceapi.jar
+ all sub-dependencies extracted from faceapi-dependencies.jar

I edited the authenticate part:

    final String key = "e7caeff3-7d1c-41fd-a178-xxxxxxxxxxxx";
    final AzureRegions myRegion = AzureRegions.NORTHEUROPE;
    FaceAPI client = FaceAPIManager.authenticate(myRegion, key);

I changed "REGION" to "myRegion" and "KEY" to "key" in the "FaceAPI client" creation line.

I try to run the command in the console and/or auto-compile with IntelliJ, and get the wrong-version message for the dependency.

(I tried for 2 days and cannot find any other guide or doc on Azure for the Java implementation. I'm not sure if I did the authentication correctly; I created my account as a student and got the key + region name.)

Sorry in advance if this has an embarrassingly simple solution, but I cannot find it on my own (I'm a first-year student in a career change).

Any log messages given by the failure

Error Message in console :

c:\DEV\PROJECT\MSPR4\QUICKSTART>javac FaceQuickstart.java -cp .;lib\*
FaceQuickstart.java:115: error: cannot access FaceAPI
    public static List<UUID> detectFaces(FaceAPI client, String imageURL, String imageName) {
                                                                ^
  bad class file: lib\ms-azure-cs-faceapi.jar(com/microsoft/azure/cognitiveservices/vision/faceapi/FaceAPI.class)
    class file has wrong version 56.0, should be 52.0
    Please remove or make sure it appears in the correct subdirectory of the classpath.

Expected/desired behavior

A working sample, I guess.

OS and Version?

Windows 10 64 bits

Versions

Java SDK 1.8.0_261
IntelliJ & console
Downloaded the dependencies from the jar in your repo


Thanks in advance for your help
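Class-file major version 56 corresponds to JDK 12 and 52 to JDK 8 (for Java 5 and later, major version = JDK release + 44), so the jar was built for a newer Java than the JDK 1.8 compiling against it; the fix is compiling with a JDK 12+ (or obtaining a Java 8 build of the jar). A tiny sketch of the mapping:

```python
def jdk_release_for_classfile_major(major):
    # Valid for class-file majors 49 and up (Java 5+): 52 -> 8, 56 -> 12.
    if major < 49:
        raise ValueError("mapping below Java 5 is irregular")
    return major - 44
```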

How Can I use local image to request "face detect" function?

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [ x] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

I followed the doc "https://docs.microsoft.com/zh-cn/azure/cognitive-services/face/quickstarts/client-libraries?pivots=programming-language-python&tabs=windows#detect-faces-in-an-image", but changed the "single_face_image_url" to a local image file path.

Any log messages given by the failure

APIErrorException: (InvalidURL) Invalid image URL.
I didn't find any information about how to make the request with a local image.

Expected/desired behavior

A description of how to make the request with a local file.

OS and Version?

Windows 10.

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.
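The Face SDK exposes a stream variant for local files, detect_with_stream, which takes an open binary file instead of a URL. A sketch; the wrapper function is ours, not part of the quickstart:

```python
def detect_local_faces(face_client, image_path):
    # For a local file, open it in binary mode and use detect_with_stream
    # instead of detect_with_url.
    with open(image_path, "rb") as image_stream:
        return face_client.face.detect_with_stream(image_stream)
```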

OCR API doesn't return correct bounding box values.

Please provide us with the following information:

This issue is for a: (mark with an x)

- [X] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

When using the Read API with an image that is not in the right orientation, the Read API correctly returns the bounding box coordinates in the same orientation as the source image. But when using the OCR API, the image is rotated to the correct orientation before OCR, resulting in bounding box coordinates that do not match the source image. Our AI algorithm needs to match its bounding boxes to the OCR bounding boxes; with the Read API this works, but not with the OCR API. Applying a rotation matrix to the image coordinates also doesn't reproduce the OCR API's extracted coordinates.

Any log messages given by the failure

No

Expected/desired behavior

Same orientation coordinate as the input image.

OS and Version?

Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)
Any OS

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.
