microsoft / azure-pipelines-yaml
Azure Pipelines YAML examples, templates, and community interaction
License: MIT License
Please add schedule triggers to YAML (currently not supported).
Copied from: microsoft/azure-pipelines-agent#1809
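For reference, a sketch of what such a trigger could look like (this mirrors the schedules syntax that later shipped; the cron expression and branch name here are assumptions):

```yaml
# Hypothetical scheduled trigger: nightly build of master at 03:00 UTC
schedules:
- cron: "0 3 * * *"
  displayName: Nightly build
  branches:
    include:
    - master
  always: true  # run even when there are no code changes
```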
When I try to build my source using the YAML below:
pool:
  vmImage: 'Ubuntu 16.04'
variables:
  imageName: 'xxxredactedxxx'
  projectfolder: 'xxxredactedxxx'
steps:
- script: docker build -f
  displayName: 'docker build'
- task: AmazonWebServices.aws-vsts-tools.ECRPushImage.ECRPushImage@1
  displayName: 'Push Image: '
  inputs:
    awsCredentials: 'AWS'
    regionName: 'eu-west-1'
    sourceImageName: '$(imageName)'
    repositoryName: '$(imageName)'
    autoCreateRepository: true
I get the following error:
Job phase1: Step input awsCredentials references service connection AWS which could not be found. The service connection does not exist or has not been authorized for use. For authorization details, refer to https://aka.ms/yamlauthz
I confirm I have a service connection named AWS.
I tried reading the link in the error message, but the article doesn't give any directions on how to authorize the build to use the service connection.
First of all, I worked with the visual designer last year, and it was cool.
Now, with YAML, it's even more flexible, and I love that.
But I have a big question that I wasn't able to answer myself by looking at the good docs here.
So, I want to have this situation:
And now my question is: how do I define different pipelines for different branches? I wasn't able to have multiple YAML files, which would be ideal, or to put in some 'conditions'.
What am I getting wrong here? Is this a missing feature?
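One approach that works with a single YAML file (a sketch, not an official pattern) is to gate branch-specific steps with runtime conditions; deploy.sh here is a hypothetical script:

```yaml
steps:
- script: echo "runs on every branch"
- script: ./deploy.sh  # hypothetical deployment script
  displayName: 'Deploy (master only)'
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
```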
I've noticed that if I create a new project in Azure DevOps and initialize it from my existing local git repo, which includes a '.vsts-ci.yml' file, a build definition doesn't get created automatically. To trigger one, I need to make some changes to the file and push them up to origin.
The thing is, I have the '.vsts-ci.yml' file inside my project template, and I'd really like it to generate the build definition straight away on the initial push (so that I could then go and configure branch policies for master to depend on this build).
Is this expected behavior, or is it actually a bug?
Thanks!
Apologies if issues are reserved for template issues.
I am struggling to make any progress toward getting R CMD check run on a package on a Windows VM. Any tips would be appreciated.
I have a need to use the minimatch file pattern functionality in PowerShell. In the VSTS Task SDK there is the Find-Match function that implements this. How can I use this (or equivalent) in a YAML PowerShell step so I can use pattern matching outside of the built in tasks?
Note that I have need for this in a step template so simply copying the .ps1 implementation file from GitHub wouldn't work because templates don't work with external .ps1 files at this time.
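One possible workaround, assuming the VSTS Task SDK is still published to the PowerShell Gallery as VstsTaskSdk and exports its minimatch helper as Find-VstsMatch (both are assumptions worth verifying), is to install the module inline:

```yaml
- powershell: |
    # Assumption: the 'VstsTaskSdk' Gallery module exports Find-VstsMatch
    Install-Module VstsTaskSdk -Scope CurrentUser -Force
    Import-Module VstsTaskSdk
    $found = Find-VstsMatch -DefaultRoot "$(Build.SourcesDirectory)" -Pattern '**/*.dll'
    $found | ForEach-Object { Write-Host $_ }
  displayName: 'minimatch via VstsTaskSdk'
```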
Copy from microsoft/azure-pipelines-agent#1727
Have you tried troubleshooting?
Couldn't queue the build
Agent Version and Platform
Version of your agent?
2.136.1
OS of the machine running the agent?
CentOS 7 x64 7.5
VSTS Type and Version
VSTS
What's not working?
According to the template expressions docs, it is possible to use a task condition in templates. I tried to use the value ${{ and(not(parameters.force), not(parameters.override)) }} for one of the task inputs, and it wasn't evaluated properly. Limiting the value to just one function, ${{ not(parameters.force) }}, throws the same error. I want to have a task enabled/disabled depending on the value of two parameters. Is it possible?
Error in VSTS while trying to queue a build:
build.package.yml (Line: 13, Col: 14): Unexpected type: 'System.Boolean',build.package.yml (Line: 13, Col: 14): Expected a Boolean value. Actual value: ''
name:
queue: PrivatePool
steps:
parameters:
  force: false
  override: false
steps:
- checkout: self
  clean: true
- task: DownloadBuildArtifacts@0
  displayName: Download Build Artifacts
  condition: and(succeeded(), in(variables['build.reason'], 'Manual', 'Schedule'))
  continueOnError: true
  enabled: ${{ and(not(parameters.force), not(parameters.override)) }}
  inputs:
    buildType: specific
    downloadType: specific
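One workaround worth trying (a sketch; whether it helps depends on how the parameter values are typed at expansion time) is to compare the parameters explicitly instead of relying on boolean coercion:

```yaml
- task: DownloadBuildArtifacts@0
  # explicit comparisons instead of not() on a possibly stringified value
  enabled: ${{ and(eq(parameters.force, false), eq(parameters.override, false)) }}
```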
Looking at this document (which I found with some help) it seems that this repository contains all the necessary functionality to validate YAML pipeline definitions locally, without actually sending any build commands to build servers.
I tried to use run.sh --yaml --what-if as shown in that document, but that complained about not finding bin/Agent.Listener and I couldn't find any instructions on how to build that file correctly. However, based on the descriptions in the document, it seems to do exactly what I would need to avoid having to push code to the repository just to see if my build config is valid.
It would be awesome if that could be exposed as something I can run from within my repository (e.g. a dotnet global tool).
In my PowerShell script I'm trying to generate a copyright message. I put © into a string and it gets converted to a lowercase c when the script is run (in the log and in the actual script).
# This will be a c
$copyright = "Copyright ©"
I also tried using the raw Unicode equivalent.
# This will also be a c
$copyright = "Copyright " + [char]0x00A9
This works in PowerShell directly but not when put into a PowerShell task in YAML.
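One workaround to try (a sketch, not a confirmed fix): move the script into a file in the repo, saved as UTF-8 with BOM, and run it via the PowerShell task's filePath input so the inline-script re-encoding is bypassed; the path is hypothetical:

```yaml
- task: PowerShell@2
  inputs:
    targetType: 'filePath'
    filePath: 'scripts/copyright.ps1'  # hypothetical script saved as UTF-8 with BOM
```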
When using the full syntax for jobs the timeoutInMinutes can be set at the same level as the job. From the docs:
jobs:
- job: string
  timeoutInMinutes: number
  cancelTimeoutInMinutes: number
  strategy:
    maxParallel: number
    # note: `parallel` and `matrix` are mutually exclusive
    # you may specify one or the other; including both is an error
    parallel: number
    matrix: { string: { string: string } }
  pool:
    name: string
    demands: string | [ string ]
  container: string
  steps:
  - script: echo Hello world
However if I set it when using the single-job syntax the build pipeline fails with error "Unexpected value 'timeoutInMinutes'"
If I move the timeoutInMinutes into the pool element it works ok.
If this is by design then it might be worth updating the docs to indicate this.
Thanks :)
I can't find the docs for YAML authorization. These docs used to live at:
Now they are gone, as you can see here:
https://github.com/Microsoft/azure-pipelines-agent/blob/master/docs/preview/yamlgettingstarted.md
I couldn't find the information that was available in those md files anywhere else. Did I miss it somewhere on azure-pipelines-agent or on this repo, or is it really missing?
In the visual editor, there is a checkbox:
This checkbox adds pwsh: true to the yml when you hit the 'View YAML' button:
steps:
- powershell: 'Install-Module platyPS -Force -Confirm:$false -Scope CurrentUser'
pwsh: true
displayName: 'Install-Module platyPS'
However, if you put this into a yml file, when attempting to run the build on a Windows worker (VS2017), this error occurs in the pipeline:
Is there some other way to identify the use of pwsh (PowerShell Core 6) in this task in the yml?
The YAML schema described on this repo here differs from the YAML schema on the Microsoft site here. Which is correct?
From Microsoft site
name: string # build numbering format
resources:
  containers: [ containerResource ]
  repositories: [ repositoryResource ]
variables: { string: string } | [ variable ]
trigger: trigger
pr: pr
jobs: [ job | templateReference ]
From this repo
resources:
  pipelines: [ pipeline ]
  builds: [ build ]
  repositories: [ repository ]
  containers: [ container ]
  packages: [ package ]
I've found that "pipelines" isn't recognized on Azure Pipelines
Is it possible to regex a build variable?
ex: $(System.TeamProject) = TEAM1-TestProject
I would like to get anything to the left of the hyphen, i.e. 'TEAM1'. Is this possible?
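Macro expansion itself has no regex support, but a script step can do the split and publish the result for later steps; TeamName is a made-up variable name:

```yaml
- powershell: |
    # 'TEAM1-TestProject' -> 'TEAM1'
    $team = "$(System.TeamProject)".Split('-')[0]
    # publish for later steps via the task.setvariable logging command
    Write-Host "##vso[task.setvariable variable=TeamName]$team"
- script: echo "Team is $(TeamName)"
```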
Provide a way to select agent queues with matrix the same way as containers described here:
Container jobs - Multiple jobs
Use case: I have a .NET Core library that needs to be built and tested on all platforms (Windows, Linux & Mac). Right now I can solve it with job reuse with parameters, but I think a matrix would be a more natural and less verbose way to do that.
Moved from here: #1815
Copied from: https://github.com/MicrosoftDocs/vsts-docs/issues/2327
It would be nice if you could dynamically set the display name of a task. This would allow the build logs to show information more relevant to the build.
For example you might have a build task that generates a version number for the build. It would be nice to be able to see the version number in the log summary by updating the display name. This would require the display name to be able to reference parameters, variables, env vars or something equivalent that preceding steps could set.
Based upon the link given above it is possible by using separate jobs. But I'd like to see this support added directly so we don't have to create separate jobs just to set the display name.
Copied from: microsoft/azure-pipelines-agent#1887
Have you tried troubleshooting?
N/A
Agent Version and Platform
Agent version: 2.140.2
OS of the machine running the agent? OSX/Windows/Linux/...
Windows
VSTS Type and Version
VisualStudio.com
If VisualStudio.com, what is your account name?
http://uipath.visualstudio.com
What's not working?
As per this issue, we know that some time ago the agent supported running jobs locally (it was some kind of a hidden feature, but this prospect excited us). What's the status on this? Is there any way of running builds locally? This would really help for speeding up testing!
Thank you!
YAML supports PowerShell and a couple of other scripting languages. It would be nice if it could support TypeScript as well. There appears to be a shift in preference for all the DevOps tasks to use TypeScript instead of PowerShell. So it would be nice if we could move our TypeScript code into YAML and have it called directly in YAML instead of PowerShell.
For this to work we'd need better integration than we currently have in PS. At a minimum external scripts would need to be supported. Additionally we would need the Task SDK library available so we can call the various DevOps functions that are available in the SDK.
Exactly how this would work I'm not sure but I could envision a call to the TS compiler followed by it being run sort of like how VS Code can run TS now.
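In the meantime, a script step can approximate this by compiling and running TypeScript ad hoc (a sketch; the script path is hypothetical):

```yaml
- script: |
    npm install --no-save typescript ts-node
    npx ts-node build/ci-script.ts  # hypothetical TypeScript build script
  displayName: 'Run TypeScript build script'
```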
Hello,
I'd like to start an android emulator in my Pipelines yml. I followed the instructions on MSFT to create a bash script, but the build fails:
Invalid file path '/Users/vsts/agent/2.144.0/work/1/s/buildmachine'.
What did I do wrong?
Thanks,
Igor
I've put together a template for R that I think others would be interested in. I'll create a pull request for you to review.
I have a job in a template that depends on a preceding job as well as a template parameter:
parameters:
  runTests: true
# omitted for brevity
- job: "Test"
  dependsOn: "Build"
  condition: ${{ and(parameters.runTests, succeeded()) }}
I'm trying to use the succeeded() job status function within a template expression to ensure that the job only runs if "Build" succeeded and "runTests" is true. Unfortunately, it seems this isn't allowed: the job "Test" fails with an error message stating that "succeeded" is invalid. I've tried and(${{parameters.runTests}}, succeeded()) as well.
Is there a way to evaluate parameters and job status functions in a single boolean expression?
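One workaround (a sketch): keep succeeded() in the runtime condition and embed the compile-time parameter inside it, since template expressions are expanded before the runtime condition is evaluated:

```yaml
- job: "Test"
  dependsOn: "Build"
  # ${{ }} expands at template time; succeeded() evaluates at runtime
  condition: and(succeeded(), eq('${{ parameters.runTests }}', 'true'))
```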
The PowerShell step (backed by the PowerShell task) is limited to 2000 characters. This prevents a step from doing a reasonable amount of work, since any non-trivial script quickly becomes too big. There needs to be a way to run scripts larger than 2000 characters. A few possibilities come to mind.
I understand that the limitation is probably with the PS task. Nevertheless using large PS tasks in YAML seems more common than using it in a GUI build definition so I think something YAML-specific would need to be done to support this scenario.
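The most reliable workaround today is to check the script into the repo and invoke it by path instead of inline (a sketch; the path and argument are hypothetical):

```yaml
- task: PowerShell@2
  inputs:
    targetType: 'filePath'
    filePath: 'build/long-script.ps1'  # hypothetical script in the repo
    arguments: '-Configuration Release'
```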
Hello,
We have a lot of ${{ if .... }}: in our yaml, and it would be nice to be able to use else as well instead of having to manually write out the inverse of the condition every time.
Any plans for doing this?
Thanks,
-Adam
It seems I cannot run a UniversalPackages@0 task in a container job unless I add a bunch of dependencies to my container image.
I would assume the task to be self sufficient, no?
As an example, when trying to download a package I get:
Error: An unexpected error occurred while trying to download the package. Exit code(136) and error(Failed to load ؖ!�, error: libunwind.so.8: cannot open shared object file: No such file or directory
Failed to bind to CoreCLR at '/__t/ArtifactTool/0.2.88/x64/libcoreclr.so')
Packages failed to download
Hello,
I'm trying to commit a file that is created during the build back into the git repo. I've followed the instructions on this page to enable rights for Project Collection Build Service, and I also made sure that persistCredentials is set to true. When I do git config --list in the PowerShell script, I can see the line with http.https:(someurl)=AUTHORIZATION: bearer ***, which was set as the last step in checkout. However, I'm getting the error below. Do I need to set up the user email (what would that be with respect to the Azure DevOps project?) and the password as the access token, or is there something else that I'm missing? Thank you!
pool:
  vmImage: 'VS2017-Win2016'
variables:
  solution: '**/*.sln'
  buildPlatform: 'Any CPU'
  buildConfiguration: 'Release'
steps:
- checkout: self
  persistCredentials: true
  clean: true
- powershell: |
    git --version
    New-Item -Path "$(System.DefaultWorkingDirectory)" -Name "testfile1.txt" -ItemType "file" -Value "This is a text string."
    Write-Host "new file created, now adding file to git"
    git -c http.extraheader="AUTHORIZATION: bearer $(System.AccessToken)" add -A
    Write-Host "git commit with message"
    git -c http.extraheader="AUTHORIZATION: bearer $(System.AccessToken)" commit -a -m "Test Commit from Azure DevOps"
    Write-Host "git push"
    git -c http.extraheader="AUTHORIZATION: bearer $(System.AccessToken)" push
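On the user email question: git refuses to create a commit without an identity configured, while authentication is already handled by the bearer token. Setting a placeholder identity before the commit is usually enough (a sketch; the email and name values are arbitrary):

```yaml
- powershell: |
    # any identity works for the commit object; auth comes from the token
    git config user.email "build@example.com"
    git config user.name "Azure DevOps Build"
```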
Even though docs state that the checkout function allows cloning a secondary repo, when I try running a simple pipeline to push to a secondary repo which is within the same user account, I get this error Checkout of repository 'destinationRepo' is not supported. Only 'self' and 'none' are supported.
Here's the code I'm trying to test with:
trigger:
- master
pool:
  vmImage: 'Ubuntu-16.04'
resources:
  repositories:
  - repository: destinationRepo
    type: github
    name: samiyaakhtar/solid-doodle
    endpoint: solid-doodle-push-auth
steps:
- checkout: destinationRepo
  persistCredentials: true
  clean: true
- powershell: |
    git --version
    git checkout master
    git config user.email "[email protected]"
    git config user.name "AzureDevOps Bot"
    Write-Host "git commit with message"
    git commit --allow-empty -a -m "Test Commit from Azure DevOps"
    Write-Host "git push"
    git push
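Until checkout of secondary repositories is supported, one common workaround is to clone the second repo manually using a PAT stored as a secret pipeline variable (a sketch; GITHUB_PAT is a hypothetical secret variable):

```yaml
- script: |
    # GITHUB_PAT is a secret variable holding a GitHub personal access token
    git clone https://$(GITHUB_PAT)@github.com/samiyaakhtar/solid-doodle.git
  displayName: 'clone destination repo'
```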
Copied from: microsoft/azure-pipelines-agent#1749
Have you tried troubleshooting?
Queuing a build generates an error. Unexpected value 'condition'.
Agent Version and Platform
Version of your agent? 2.102.0/2.100.1/...
OS of the machine running the agent? Windows
VSTS Type and Version
VSTS
If VisualStudio.com, what is your account name? https://fsmb.visualstudio.com
What's not working?
I am trying to set up a YAML build. We have a lot of builds and our existing definitions use task groups. I am moving the task group logic into a template so they can be shared across build definitions. The templates are stored in a separate repository. All this is working correctly.
There is one step that shouldn't execute when it is a PR build so I tried to do this.
It would be nice if templates could be conditional like tasks are. Right now the workaround would be to either use a parameter or specify the condition inside the template but the template may not always know the correct condition to use.
Agent and Worker's Diagnostic Logs
It is not queuing so there is no log.
Related Repositories
Please ensure you are logging issues to the correct repository in order to get the best support.
Tasks Repository - contains all of the inbox tasks we ship with VSTS/TFS. If you are having issues with tasks in Build/Release jobs (e.g. unreasonable task failure) please log an issue here.
Hosted Agent Image Repository - contains the VM image used in the VSTS Hosted Agent Pool. If you are having Build/Release failures that seem related to software installed on the Hosted Agent (e.g. the DotnetSDK is missing or the AzureSDK is not on the latest version) please log an issue here.
If you are hitting a generic issue about VSTS/TFS, please report it to the Developer Community
I would like to run a specific template if an array of parameters exist or not:
#template.yml
${{ if eq(parameters.deploy_extra_vars, '') }}:
${{ if ne(parameters.deploy_extra_vars, '') }}:
#vsts-ci.yml
parameters:
  deploy_extra_vars: |
    test: "testing"
    test1: "testing1"
Now, if in my vsts-ci.yml file I do not pass the 'deploy_extra_vars' parameter, I receive an error. What I want is: if the parameter exists, run one template; if it does not, run the other.
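Declaring a default for the parameter in the template should avoid the error when the caller omits it, letting the eq/ne checks distinguish the two cases (a sketch; the echo steps are placeholders):

```yaml
# template.yml: empty default so callers may omit the parameter
parameters:
  deploy_extra_vars: ''

steps:
- ${{ if eq(parameters.deploy_extra_vars, '') }}:
  - script: echo "no extra vars passed"
- ${{ if ne(parameters.deploy_extra_vars, '') }}:
  - script: echo "extra vars were passed"
```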
I am currently running into an issue where I am able to pull from one private GitHub repo in the checkout step, but unable to pull another private GitHub repo from a script later in the pipeline due to lack of access rights. Note: both repos are part of the same organization, and I've already connected the Azure app to GitHub with access to all repos.
Within my checkout step I've explicitly set persistCredentials: true.
As I understand it, this should allow the following scripts in the pipeline to use the GitHub credentials that were used in the checkout for "Get Sources".
Here is an example of the script that fails:
- script: |
    git clone --branch=username --single-branch https://github.com/username/myRepo.git $(Agent.BuildDirectory)/myRepo
  displayName: 'clone myRepo'
And the output:
Generating script.
[command]/bin/bash --noprofile --norc /Users/vsts/agent/2.140.2/work/_temp/cb2622cc-28e0-435a-bb98-154bdabf9641.sh
Cloning into '/Users/vsts/agent/2.140.2/work/1/myRepo'...
fatal: could not read Username for 'https://github.com': Device not configured
##[error]Bash exited with code '128'
Logs from Azure Pipelines show stdout output out of order; this seems like a serious bug.
I created a minimal example repo and pipeline so you can reproduce this issue:
see repo: https://github.com/timotheecour/timcitest (at revision f61677330cb33d781844bdb46f115165af6243ae)
see logs: https://dev.azure.com/timotheecour/timotheecour/_build/results?buildId=88
python start1 should appear before Cloning into 'csources', but it doesn't; see below:
2019-01-20T05:40:43.1789050Z ##[section]Starting: Run a multi-line script
2019-01-20T05:40:43.1792367Z ==============================================================================
2019-01-20T05:40:43.1792439Z Task : Command Line
2019-01-20T05:40:43.1792486Z Description : Run a command line script using cmd.exe on Windows and bash on macOS and Linux.
2019-01-20T05:40:43.1792579Z Version : 2.146.1
2019-01-20T05:40:43.1792623Z Author : Microsoft Corporation
2019-01-20T05:40:43.1792673Z Help : [More Information](https://go.microsoft.com/fwlink/?LinkID=613735)
2019-01-20T05:40:43.1792765Z ==============================================================================
2019-01-20T05:40:44.9491923Z Generating script.
2019-01-20T05:40:44.9551007Z [command]/bin/bash --noprofile --norc /home/vsts/work/_temp/ef990781-b2bb-4666-9c12-235027eabc79.sh
2019-01-20T05:40:44.9716813Z Add other tasks to build, test, and deploy your project.
2019-01-20T05:40:44.9717682Z See https://aka.ms/yaml
2019-01-20T05:40:45.0116585Z Cloning into 'csources'...
2019-01-20T05:40:49.9792667Z python start1
2019-01-20T05:40:49.9793315Z in buildNimCsources
2019-01-20T05:40:49.9794349Z ('runCmd', 'git clone --depth 1 https://github.com/nim-lang/csources.git')
2019-01-20T05:40:49.9794599Z output:
2019-01-20T05:40:49.9794735Z
2019-01-20T05:40:49.9794887Z after buildNimCsources
2019-01-20T05:40:50.0340286Z ##[section]Finishing: Run a multi-line script
I've reduced this from a more complex example where the out of order logs were making debugging very difficult.
It looks like it's due to stderr buffering (probably line-buffered) getting out of sync with stdout buffering (probably block-buffered).
Workaround: unbuffer, or python -u.
See microsoft/azure-pipelines-agent#1733 for more info.
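Applied to a pipeline, the workaround is just to disable Python's buffering when invoking the script (the script name here is a placeholder):

```yaml
- script: python -u build_all.py  # -u forces unbuffered stdout/stderr
```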
I have the following parameter in a YAML build, and it's set to 'yes' in the VSTS variables tab.
parameters:
  buildDebugPackage: '$(BuildDebugPackage)'
I would like to do the following
- ${{ if eq(parameters.buildDebugPackage, 'yes') }}:
  - ${{ parameters.preBuildDebugProjects }}
  - task: DotNetCoreCLI@2
    displayName: Build Debug Projects
    inputs:
      projects: ${{ parameters.projects }}
      arguments: '--no-restore -c Debug /p:Version=$(ASSEMBLY_VERSION);FileVersion=$(FILE_VERSION)'
  - ${{ parameters.postBuildDebugProjects }}
But this does not work because the variable BuildDebugPackage is not expanded to the value 'yes'.
I have to instead do the following
- powershell: |
    Write-Host "##vso[task.setvariable variable=BuildDebugPackage]$($env:BuildDebugPackage)"
  displayName: Setup Environment Variables
  env:
    BuildDebugPackage: ${{ parameters.buildDebugPackage }}
  ignoreLASTEXITCODE: false
  errorActionPreference: Stop
  failOnStderr: true
- ${{ parameters.preBuildDebugProjects }}
- task: DotNetCoreCLI@2
  displayName: Build Debug Projects
  condition: and(succeeded(), eq(variables['BuildDebugPackage'], 'yes'))
  inputs:
    projects: ${{ parameters.projects }}
    arguments: '--no-restore -c Debug /p:Version=$(ASSEMBLY_VERSION);FileVersion=$(FILE_VERSION)'
- ${{ parameters.postBuildDebugProjects }}
This is far inferior as I cannot even condition the pre and post steps. This also shows up on the UI as a skipped step instead of not showing up at all. I also have to convert the parameter to a variable in PowerShell as well. Please fix this bug or give a workaround. Thanks.
Is it possible to use a variable or parameters in agent pool demands? Something like this:
variables:
  projectName: my-vsts-project
  agentName: '$(projectName)-$(Build.BuildId)'
jobs:
- job: JavaBuild
  dependsOn: LaunchAgent
  pool:
    name: 'mgmt-aks-sandbox'
    demands:
    - agent.name -equals $(agentName)
It doesn't seem to work, as I end up with an error like this:
##[Error 1]
No agent found in pool mgmt-aks-sandbox which satisfies the specified demands:
agent.name -equals $(projectName)-$(Build.BuildId)
java
Agent.Version -gtVersion 2.140.2
I've also tried with parameters and template expressions, and those don't work either:
##[Error 1]
No agent found in pool mgmt-aks-sandbox which satisfies the specified demands:
agent.name -equals ${{ parameters.agentName }}
java
Agent.Version -gtVersion 2.140.2
It looks like this guy managed to do it via the UI (see comments at the end), but is there a way to do it in YAML?
https://www.noelbundick.com/posts/serverless-vsts-build-agents-with-azure-container-instances/
The npm script tasks in the Node templates fail on Windows agents:
- script: |
    npm install
    npm run build
To repro, change the vmImage to vs2017-win2016 and trigger a build.
I believe this happens because:
- npm is npm.cmd on Windows
- the script task combines npm install and npm test into a single .cmd file and executes it
- when a batch file invokes another batch file without the call syntax, the outer batch file exits early when the inner one returns, so the generated .cmd file exits after the first npm command
You can repro this on any Windows machine by executing a batch file containing:
npm install
npm test
To execute both commands, the call syntax must be used:
call npm install
call npm test
I believe the best fix is to use the Npm task instead of the script task. Here's an issue and pull request for the same issue in the pipelines-javascript sample repo:
MicrosoftDocs/pipelines-javascript#10
MicrosoftDocs/pipelines-javascript#3
Here are all the impacted templates in this repo:
https://github.com/Microsoft/azure-pipelines-yaml/search?q=npm&unscoped_q=npm
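The Npm task runs each command in its own process, which sidesteps the batch call problem entirely; a sketch of the suggested fix:

```yaml
- task: Npm@1
  inputs:
    command: 'install'
- task: Npm@1
  inputs:
    command: 'custom'
    customCommand: 'run build'
```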
The YAML parsing reports errors when trying to queue a YAML build but often the actual error is well beyond the truncated error message that is shown. It would be nice if the YAML parser would provide a more focused error message around the actual code (ideally with lines if applicable) that is causing the error instead of just showing the first part of the script.
Example 1 - Script is longer than 2000 characters. The only way to figure out how much you have to truncate is to load the script into an editor that counts characters. What makes this difficult is that it is unclear what counts to the parser. I noticed that template expressions cause the entire block to be put inside a format message where the expressions are replaced with ordinal positions. So does the character limit apply to that format string or to the original template expression? What about the leading spaces needed to keep a multiline PowerShell script properly indented? Rather than forcing users to memorize or experiment with the rules, it would be nice if the error just showed the line and an indicator where it ran out of room.
Example 2 - Body has ${ elements such that parser says a $ is invalid. This is a complex PowerShell script mixing template expressions, PowerShell variables, a hashtable, regular expression string with nested quotes and $. The only message I get is that the $ is invalid but there is no easy way to figure out where this is actually occurring. Note that line/column information probably isn't sufficient here since the code is converted to a format call. It would be unclear whether this is the original code or the modified code.
It would be great to have an example about how to publish an npm module to the public npmjs.com registry from a GitHub repository. Ideally, this would illustrate how to trigger a new CI build on any commit, while running the npm publish command only when a new tag has been committed to the master branch.
I could imagine that such an example might really improve the adoption of Azure Pipelines as a build solution for npm-based JavaScript projects. Right now it is very difficult to find useful documentation for this particular problem.
Thanks in advance :)
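A rough sketch of what such a pipeline could look like, assuming a secret variable named NpmToken holding an npmjs.com auth token and a Linux agent (these names are assumptions, not a verified recipe):

```yaml
trigger:
  branches:
    include:
    - master
  tags:
    include:
    - v*

steps:
- script: |
    npm install
    npm test
- script: |
    # write the auth token to .npmrc, then publish
    # NpmToken is a hypothetical secret pipeline variable
    echo "//registry.npmjs.org/:_authToken=$(NpmToken)" > ~/.npmrc
    npm publish
  condition: and(succeeded(), startsWith(variables['Build.SourceBranch'], 'refs/tags/v'))
```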
Please add pull request triggers to YAML (currently not supported).
As seen on Stack Overflow.
I have the following pipeline that runs a container job using an image from a private Azure Container Registry:
resources:
  containers:
  - container: build_container
    image: gxg08regtest.azurecr.io/ci-build-image:66162
    endpoint: SandboxGXG08
pool:
  vmImage: 'ubuntu-16.04'
container: build_container
steps:
- script: ./build.sh
This seems close enough to this snippet but it gives me the following error:
Expected 'dockerregistry' service connection type for image registry referenced by build_container, but got azurerm for service connection SandboxGXG08.
Any idea?
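The error itself says the service connection referenced by the container resource must be of type Docker Registry rather than Azure Resource Manager. A sketch, with a hypothetical Docker Registry connection name:

```yaml
resources:
  containers:
  - container: build_container
    image: gxg08regtest.azurecr.io/ci-build-image:66162
    # 'acr-docker' is a hypothetical Docker Registry service connection
    # pointing at gxg08regtest.azurecr.io (not an Azure RM connection)
    endpoint: acr-docker
```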
By "cross-product matrix", I mean the scenario where you want to use the matrix strategy on a cross-product of multiple dimensions. For example, say you want to run tests on 2 OSes (Linux, Windows) and 2 versions of Python (3.6, 3.7). Currently, you'd need to manually create a matrix with all the combinations:
strategy:
  matrix:
    Linux_Python36:
      VM_IMAGE: 'ubuntu-16.04'
      PYTHON_VERSION: '3.6'
    Linux_Python37:
      VM_IMAGE: 'ubuntu-16.04'
      PYTHON_VERSION: '3.7'
    Windows_Python36:
      VM_IMAGE: 'vs2017-win2016'
      PYTHON_VERSION: '3.6'
    Windows_Python37:
      VM_IMAGE: 'vs2017-win2016'
      PYTHON_VERSION: '3.7'
While this isn't too bad with a small number of dimensions and a small number of variables in each dimension, it can quickly become unmaintainable as the dimensions and variables grow. It would be nice if there was a way to express this more succinctly, something like:
strategy:
  cross-product:
    PYTHON_VERSION: [ '3.6', '3.7' ]
    VM_IMAGE: [ 'ubuntu-16.04', 'vs2017-win2016' ]
Is it possible to nest templates? My first attempt at doing this throws an error, android-modules.yml (Line: 1, Col: 1): Unexpected value 'resources', within the top-level phase template. This seems not to be an option, then. Would it be better to use conditions within the phase template to decide whether or not to execute a step?
Thanks
While working with Ruby build/test, tests were failing due to line ending issues.
I added a step before 'checkout' in the script with the following:
git config --global core.autocrlf false
That solved the issue. I guess I'm not sure what I think might be changed. Any or all of the following?
Re 2, the following is shown:
steps: [ script | bash | powershell | checkout | task | stepTemplate ]
So it is doc'd...
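For reference, the step described above looks like this in YAML (a sketch):

```yaml
steps:
- script: git config --global core.autocrlf false
  displayName: 'Disable CRLF conversion'
- checkout: self
```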
To emulate task groups in regular build definitions we have created YAML steps in a build repository that all our other builds reference using the reusable step syntax given in the documentation. This works correctly for us.
However, some of the logic either cannot be done directly in YAML or doesn't work well because of issues in the current implementation. So we figured the best solution was to pull that logic into a separate script (.ps1) that sits next to the .yaml file. For non-reusable steps this seems to work OK, but for steps that are reused in other repos we cannot figure out a path that works. We have even gone so far as to dump the entire build directory to see where the yaml files go, and we cannot see that the reusable steps (in other repos) are ever copied to the build directory such that a path would work.
So, what is the correct path to use when you want to call a .ps1 script from a YAML file that is called from other YAML files in other repos?
- powershell: ${{ format('.\steps\my-step.ps1 {0}', parameters.someArg) }}
This appears to work if you do this in a YAML file that is in your repo you're building. It does not work if you try to do this from a YAML file that is in a reusable step.
- powershell: ${{ format('$(Build.Repository.LocalPath)\steps\my-step.ps1 {0}', parameters.someArg) }}
Tried this approach as we saw something like this as part of one of the MS repos in GitHub (e.g. Azure build task or something).
Hello,
I'm trying to do something like this, which works for other variables:
Basically, do some logic if this isn't a GitHub build. However, Build.Repository.Provider doesn't seem to resolve at YAML "compile" time. If I check for 'github' in my build definition name (hacky), it works.
The build that's trying to run Microbuild is at https://dnceng.visualstudio.com/internal/_build/results?buildId=37125&_a=summary&view=logs
Notice the Install Microbuild Plugin step does run, and it's not supposed to.
I downloaded the binlog for the debug phase, and Build.Repository.Provider is indeed set as it should be.
Is this a bug?
Hello,
I'm trying to define a variable to be used in later phases. It seems the way you access variables, $(varName), does not work inside of if statements.
Here's what I have:
variables:
  Build.Repository.Clean: true
  _enableTelemetry: true
  _HelixType: build/product
  buildType: public
  ${{ if eq(variables.System.TeamProject, 'internal') }}:
    ${{ if notin(variables.Build.Reason, 'PullRequest') }}:
      ${{ if not(contains(variables.Build.DefinitionName, 'github')) }}:
        buildType: internal
I'm trying to branch based on the value of buildType later, but I'm having trouble doing so.
I've tried all of the following:
${{ if eq($(buildType), 'public') }}:
${{ if eq(variables.buildType, 'public') }}:
${{ if eq('variables.buildType', 'public') }}:
${{ if eq(variables['buildType'], 'public') }}:
But nothing seems to work.
Does this not work in yaml or am I just doing it wrong?
Thanks!
-Adam
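One thing worth checking (a sketch, not a confirmed answer): variable names containing dots generally need index syntax in template expressions, and conditional variable definitions can be written in list form:

```yaml
variables:
# index syntax for names containing dots, e.g. variables['System.TeamProject']
- ${{ if eq(variables['System.TeamProject'], 'internal') }}:
  - name: buildType
    value: internal
- ${{ if ne(variables['System.TeamProject'], 'internal') }}:
  - name: buildType
    value: public
```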
Hey, I have a central step template which I use in different builds. One build started to freak out, and only executes part of one of the steps. No changes were made to the YAML in the past 3 weeks. How do I fix this? Running with debug shows no errors. It looks like it just skips some of the commands.
I am trying to use the GitHub App authentication in multiple projects for the same organization.
I have created a project using the GitHub App (which is installed in GitHub giving access to all repos in the organization). This works fine as expected.
However, when I create a second project, I am prompted to use OAuth:
If I click on the other option Install our app from the GitHub Marketplace
I am redirected to the GitHub App page:
Where I can confirm it is installed, and enabled for all repositories.
Expected behaviour
I would expect Azure Pipelines to give me a list of all my repositories when creating a new project.
According to the documentation:
The app will become Azure Pipelines’ default method of authentication to GitHub (instead of OAuth) when organization members create pipelines for their repositories. This is recommended so that pipelines run as “Azure Pipelines” instead of a user’s GitHub identity which may lose access to the repository.
The following works nicely, except when the script is longer than 2000 characters. Some of us need such long scripts :-)
- bash: |
    if [ '${{ parameters.some_param }}' == true ]; then
      do_something
    fi
With larger scripts, the build fails at the template substitution step with the following message:
azure-pipelines.yml (Line: xxx, Col: yyy): Exceeded max expression length 2000. For more help, refer to https://go.microsoft.com/fwlink/?linkid=842996
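One possible workaround, sketched under the assumption that the parameter is only read by the script (not used to alter the script's structure): pass the parameter through an environment variable, so only a one-line mapping is expanded at template time and the script body itself is no longer part of the expression:

```yaml
- bash: |
    # Read the parameter from the environment instead of inlining it.
    # Note: the rendered casing of a boolean parameter may vary
    # (e.g. 'True' vs 'true'), so check what your agent produces.
    if [ "$SOME_PARAM" == "True" ]; then
      do_something
    fi
  env:
    # Only this mapping goes through template expansion, keeping the
    # expression well under the 2000-character limit.
    SOME_PARAM: ${{ parameters.some_param }}
```

SOME_PARAM is an illustrative name; any valid environment variable name works.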
My understanding is that the name of a counter can be set to a variable and that this variable is then expanded to be the counter name. Then, when the variable value changes, the counter re-seeds. Is this correct?
If so, is it supported when the variable is from a variable group?
I have this yaml:
variables:
- group: Build_Identifiers
- name: BUILD_NUMBER
  value: $[ counter(variables['majorMinorVersion'], 1) ]
majorMinorVersion is from the Build_Identifiers variable group. When the value of majorMinorVersion is changed, the counter does not reset to 1.
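As a point of comparison (a sketch; the inline value shown is made up), a counter keyed on a variable defined in the same YAML file is documented to re-seed when the prefix value changes, so moving majorMinorVersion inline may serve as a diagnostic or a stopgap:

```yaml
variables:
- name: majorMinorVersion
  value: '1.2'   # editing this value should re-seed the counter below
- name: BUILD_NUMBER
  value: $[ counter(variables['majorMinorVersion'], 1) ]
```

If the counter re-seeds correctly with the inline variable but not with the group variable, that would narrow the problem down to how group variables are resolved when the counter prefix is evaluated.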
This task fails because somewhere deep in $(Build.SourcesDirectory)/... is a broken symlink.
- task: CopyFiles@2
  inputs:
    sourceFolder: $(Build.SourcesDirectory)
    contents: 'MLO'
    targetFolder: $(Build.ArtifactStagingDirectory)
Unhandled: Failed find: ENOENT: no such file or directory, stat '/__w/1/s/arm-buildroot-linux-gnueabihf_sdk-buildroot/arm-buildroot-linux-gnueabihf/sysroot/bin/systemd-resolve'
I would not expect this task to fail on a broken symlink at all, and even less so in this case, because my content pattern resolves to a single file.
Anybody else think this is not the correct behavior?
How should I work around this given I cannot get rid of the symlink?
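One possible workaround, a sketch assuming a Linux agent with GNU find: delete dangling symlinks from the source tree before the copy step. GNU find's `-xtype l` matches symlinks whose target does not resolve:

```yaml
- bash: |
    # Remove dangling symlinks so CopyFiles@2 does not stat them.
    # -xtype l: symlinks that cannot be followed (GNU find extension).
    find "$(Build.SourcesDirectory)" -xtype l -delete
  displayName: 'Remove broken symlinks (workaround)'
```

This mutates the checked-out tree, so place it after any step that actually needs those symlinks.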
Is it possible now to join the elements of sequence to a string?
items:
- foo
- bar
result: ${{ join(items, ';') }}
where result would contain the string foo;bar.
One possible scenario for this would be specifying MSBuild semicolon-separated property values using array syntax (instead of one very long string).
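For reference, later versions of the Azure Pipelines expression language document a join function, with the separator as the first argument, and sequences are carried in object-typed template parameters rather than variables. A sketch (the parameter name is made up):

```yaml
parameters:
- name: msbuildProps
  type: object
  default:
  - foo
  - bar

steps:
# join(separator, collection): separator comes first in this dialect.
- script: echo ${{ join(';', parameters.msbuildProps) }}
```

Under that assumption, the script line would expand to `echo foo;bar`, which covers the semicolon-separated MSBuild property scenario described above.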