
bicep's Introduction


Azure Bicep

For all you need to know about the Bicep language, check out our Bicep documentation.

What is Bicep?

Bicep is a Domain Specific Language (DSL) for deploying Azure resources declaratively. It aims to drastically simplify the authoring experience with a cleaner syntax, improved type safety, and better support for modularity and code re-use. Bicep is a transparent abstraction over ARM and ARM templates, which means anything that can be done in an ARM Template can be done in Bicep (outside of temporary known limitations). All resource types, apiVersions, and properties that are valid in an ARM template are equally valid in Bicep on day one (Note: even if Bicep warns that type information is not available for a resource, it can still be deployed).

Bicep code is transpiled to standard ARM Template JSON files, which effectively treats the ARM Template as an Intermediate Language (IL).
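
For a feel of the syntax, here is a minimal example (a sketch, not taken from the documentation; the resource type and apiVersion are illustrative):

param location string = resourceGroup().location

resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' = {
  name: 'stexample001'
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}

output storageId string = stg.id

Running bicep build main.bicep against a file like this emits the equivalent ARM Template JSON (main.json), which is what ultimately gets sent to Azure.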

Video overview of Bicep

Goals

  1. Build the best possible language for describing, validating, and deploying infrastructure to Azure.
  2. The language should provide a transparent abstraction for the underlying platform. There must be no "onboarding step" to enable Bicep support for a new resource type and/or api version.
  3. Code should be easy to understand at a glance and straightforward to learn, regardless of your experience with other programming languages.
  4. Users should be given a lot of freedom to modularize and re-use their code. Code re-use should not require any 'copy/paste'-ing.
  5. Tooling should provide a high level of resource discoverability and validation, and should be developed alongside the compiler rather than added at the end.
  6. Users should have a high level of confidence that their code is 'syntactically valid' before deploying.

Non-goals

  1. Build a general purpose language to meet any need. This will not replace general purpose languages and you may still need to do pre- or post-Bicep execution tasks in a script or high-level programming language.
  2. Provide a first-class provider model for non-Azure related tasks. While we will likely introduce an extensibility model at some point, any extension points are intended to be focused on Azure infra or application deployment related tasks.

Get started with Bicep

To get going with Bicep:

  1. Start by installing the tooling.
  2. Complete the Bicep Learning Path

Alternatively, you can use the VS Code Devcontainer/Codespaces repo to get a preconfigured environment.

If you have an existing ARM Template or set of resources that you would like to convert to .bicep format, see Decompiling an ARM Template.

Also, there is a rich library of examples in the azure-quickstart-templates repo to help you get started.

How does Bicep work?

First, author your Bicep code using the Bicep language service as part of the Bicep VS Code extension.

Both Az CLI (2.20.0+) and the PowerShell Az module (v5.6.0+) have Bicep support built-in. This means you can use the standard deployment commands with your *.bicep files and the tooling will transpile the code and send it to ARM on your behalf. For example, to deploy main.bicep to a resource group my-rg, we can use the CLI command we are already used to:

az deployment group create -f ./main.bicep -g my-rg
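
The equivalent deployment with the PowerShell Az module (v5.6.0+) looks like this:

New-AzResourceGroupDeployment -ResourceGroupName my-rg -TemplateFile ./main.bicep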

For more detail on taking advantage of new Bicep constructs that replace an equivalent from ARM Templates, you can read the moving from ARM => Bicep doc.

Known limitations

  • Bicep is newline sensitive. We are exploring ways we can remove/relax this restriction (#146)
  • No support for the concept of apiProfile, which is used to map a single apiProfile to a set of apiVersions for each resource type. We are looking to bring support for this type of capability, but suspect it will work slightly differently. Discussion is in #622

FAQ

What unique benefits do you get with Bicep?

  1. Day 0 resource provider support. Any Azure resource — whether in private or public preview or GA — can be provisioned using Bicep.
  2. Much simpler syntax compared to equivalent ARM Template JSON
  3. No state or state files to manage. All state is stored in Azure, which makes it easy to collaborate and make changes to resources confidently.
  4. Tooling is the cornerstone to any great experience with a programming language. Our VS Code extension for Bicep makes it extremely easy to author and get started with advanced type validation based on all Azure resource type API definitions.
  5. Easily break apart your code with native modules
  6. Supported by Microsoft support and 100% free to use.

Why create a new language instead of using an existing one?

Bicep is more of a revision to the existing ARM template language than an entirely new language. While most of the syntax has been changed, the core functionality of ARM templates and the runtime remains the same. You have the same template functions, same resource declarations, etc. Part of the complexity with ARM Templates is due to the "DSL" being embedded inside of JSON. With Bicep, we are revising the syntax of this DSL and moving it into its own .bicep file format. Before going down this path, we closely evaluated using an existing high-level programming language, but ultimately determined that Bicep would be easier to learn for our target audience. We are open to other implementations of Bicep in other languages.

We spent a lot of time researching various options and even prototyped a TypeScript-based approach. We did over 120 customer calls and Microsoft Most Valuable Professional (MVP) conversations, and collected quantitative data. We learned that in the majority of organizations, it was the cloud enablement teams that were responsible for provisioning the Azure infra. These folks were not familiar with programming languages and did not like that approach, as it had a steep learning curve. These users were our target users. In addition, authoring ARM template code in a higher-level programming language would require you to reconcile two uneven runtimes, which ends up being confusing to manage. At the end of the day, we simply want customers to be successful on Azure. In the future, if we hear more feedback asking us to support a programming language approach, we are open to that as well. If you'd like to use a high-level programming language to deploy Azure infra, we recommend Farmer, the Terraform CDK, or Pulumi.

Why not focus your energy on Terraform or other third-party IaC offerings?

Using Terraform can be a great choice depending on the requirements of the organization, and if you are happy using Terraform there is no reason to switch. At Microsoft, we have teams actively investing to make sure the Terraform on Azure experience is the best it can be.

That being said, there is a huge customer base using ARM templates today because it provides a unique set of capabilities and benefits. We wanted to make the experience for those customers first-class as well, in addition to making it easier to start for Azure focused customers who have not yet transitioned to infra-as-code.

Fundamentally, we believe that configuration languages and tools are always going to be polyglot, and different users will prefer different tools for different situations. We want to make sure all of these tools are great on Azure; Bicep is only a part of that effort.

Is this ready for production use?

Yes. As of v0.3, Bicep is now supported by Microsoft Support Plans and Bicep has 100% parity with what can be accomplished with ARM Templates. As of this writing, there are no breaking changes currently planned, but it is still possible they will need to be made in the future.

Is this only for Azure?

Bicep is a DSL focused on deploying end-to-end solutions in Azure. In practice, that usually means working with some non-Azure APIs (e.g. creating Kubernetes deployments or users in a database), so we expect to provide some extensibility points. That being said, currently only Azure resources exposed through the ARM API can be created with Bicep.

What happens to my existing ARM Template investments?

One of our goals is to make the transition from ARM Templates to Bicep as easy as possible. The Bicep CLI supports a decompile command to generate Bicep code from an ARM template. Please see Decompiling an ARM Template for usage information.
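
For example, using the standalone Bicep CLI (or az bicep decompile when the Az CLI integration is installed):

bicep decompile ./azuredeploy.json

This generates ./azuredeploy.bicep alongside the original template; the decompiled output may still need some manual cleanup.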

Note that while we want to make it easy to transition to Bicep, we will continue to support and enhance the underlying ARM Template JSON language. As mentioned in What is Bicep?, ARM Template JSON remains the wire format that will be sent to Azure to carry out a deployment.

Get Help, Report an issue

We are here to help you be successful with Bicep, please do not hesitate to reach out to us.

  • If you need help or have a generic question such as 'where can I find an example for…' or 'I need help converting my ARM Template to Bicep', you can open a discussion
  • If you have a bug to report or a new feature request for Bicep please open an issue

Reference

Community Bicep projects

Alternatives

Because we are now treating the ARM Template as an IL, we expect and encourage other implementations of IL (ARM Template) generation. We'll keep a running list of alternatives for creating ARM templates that may better fit your use case.

  • Farmer (@isaacabraham) - Generate and deploy ARM Templates on .NET
  • Cloud Maker (@cloud-maker-ai) - Draw deployable infrastructure diagrams that are converted to ARM templates or Bicep

Telemetry

When using the Bicep VS Code extension, VS Code collects usage data and sends it to Microsoft to help improve our products and services. Read our privacy statement to learn more. If you don’t wish to send usage data to Microsoft, you can set the telemetry.enableTelemetry setting to false. Learn more in our FAQ.
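
For example, in VS Code's settings.json:

{
  "telemetry.enableTelemetry": false
}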

License

All files except for the Azure Architecture SVG Icons in the repository are subject to the MIT license.

The Azure Architecture SVG Icons used in the Bicep VS Code extension are subject to the Terms of Use.

Contributing

See Contributing to Bicep for information on building/running the code, contributing code, contributing examples and contributing feature requests or bug reports.


bicep's Issues

Improve declarations of child resources

Could potentially set a property like parent on child resources, so that Bicep authors no longer need to handle:

  • segmentation on naming patterns (e.g. {parentName}/{childName})
  • dependsOn to parent
  • adding location if required
  • adding apiVersion (could be optional, where we assume parent)

Received this feedback in customer email, so opening this issue for it
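
As a rough sketch of what this could look like (illustrative only, not a settled design; resource types and apiVersions are examples):

resource sqlServer 'Microsoft.Sql/servers@2019-06-01-preview' = {
  name: 'myServer'
  ...
}

resource sqlDb 'Microsoft.Sql/servers/databases@2019-06-01-preview' = {
  parent: sqlServer   // implies the 'myServer/myDb' name segmentation and the dependsOn to the parent
  name: 'myDb'
  ...
}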

nested deployment scenarios and migrating them to bicep

Brian and I were discussing nested deployments today as it relates to bicep so wanted to capture what we covered.

Today, there are three scenarios where you would need to do a nested deployment:

  1. breaking down code into multiple files
  2. deployment to a different scope than the initial deployment
  3. deploy a resource multiple times (this is an edge case and usually the result of a bad API design, but this scenario exists)

For breaking down code, this should be entirely replaced by modules.

For the latter two scenarios, we need to continue to support nested deployments. A syntax like this should work for free:

resource nested 'Microsoft.Resources/deployments@2019-01-01' {
  name: 'mynested001'
  resourceGroup: 'myRg'
  templateLink: 'https://raw.github.com/myTemplate.json'
}

There are a few ways we could potentially improve this:

allow a special "nested deployment" syntax that allows you to point to a .arm file, and then we would compile just that arm file into a template w/ expressionEvaluationOptions.scope set to inner

resource nested 'Microsoft.Resources/deployments@2019-01-01' {
  name: 'mynested001'
  resourceGroup: 'myRg'
  template: './myTemplate.arm'
}

support a dedicated scope property on resource declarations (or a top level keyword) that we translate to a nested deployment accordingly:

resource storageInaNewRg 'Microsoft.Storage/storageAccounts@2019-01-01' {
  scope: 'myRg'
  ...
}

or

scope 'myRg' {
  resource storageInaNewRg 'Microsoft.Storage/storageAccounts@2019-01-01' {
    ...
  }
}

@bmoore-msft - let me know if I missed anything from our discussion this morning

Support STDOUT

Allow support for STDOUT in bicep:

bicep build --stdout file.arm

Team Meeting 7/16/2020

We spent the meeting mostly discussing language feedback from Brendan's meeting.

Brackets on resource loops

We discussed where to place [] to indicate that a resource loop is a resource loop using the following example:

resource storageAccountResources 'Microsoft.Storage/storageAccounts@2019-06-01' = [for (config, i) in storageConfigurations: {
  name: storageAccountNamePrefix + config.suffix + i
  location: resourceGroup().location
  properties: {
    ...
  }
  kind: 'StorageV2'
}]

The placement of brackets will result in some syntax inconsistencies in all options:

Option 1

resource[] storageAccountResources 'Microsoft.Storage/storageAccounts@2019-06-01' = [for (config, i) in storageConfigurations: {
...
}]

Considerations:

  • Is it a keyword array? resource is both a keyword and a type, so it may be fine.
  • How will typed arrays look in the future in parameters? Will it be string[] to indicate an array of strings? If yes, then why not option 2?
  • Can potentially remove the enclosing brackets on the for construct as well to keep net number of brackets the same.

Option 2

resource storageAccountResources 'Microsoft.Storage/storageAccounts@2019-06-01'[] = [for (config, i) in storageConfigurations: {
...
}]

Considerations:

  • The bracket placement seems really awkward here and close to the brackets representing the array value.

Filters

We agree with feedback that filtering should be separated from looping so filtered arrays can be constructed and reused easily. It is, however, an advanced scenario that can't be implemented without IL changes. We should take it out of the template specs for now.

Consolidation of variables, outputs, and parameters

  • Parameters are very different from variables and cannot be consolidated with the other concepts. The remaining consolidation opportunity is variables and outputs.
  • Regardless of syntax decision, we should allow outputs to be referenced as if they were other variables in the language.
  • The concept of a variable, parameter, and output will remain regardless of whether the syntax is unified or not. Each aforementioned concept has a different purpose in templates, stands on its own, and should not be removed.
  • Considered the following options for unifying variables and outputs (examples assume type inference - more on that below):

Option 0 - Keyword

output foo = myResource.properties.endpoint

Option 1 - Decorator

@export 
let foo = myResource.properties.endpoint

Option 2 - Modifier

out let foo = myResource.properties.endpoint

Option 3 - Export statement

var foo = myResource.properties.endpoint
export foo

Automatic type inference

  • Parameter type cannot be automatically inferred due to the nature of parameters
  • Outputs are contracts. A mistake in an expression can easily change the type, which causes a break elsewhere (a module referencing the outputs of the current module). If there's strong push back on this we can live with it, but prefer to leave it more strongly typed.
  • Regardless of the weak/strong typing on output, expressions that aren't of type bool, int, string, object, or array at compile time will result in compiler errors. (IL does not support any other output types.)

Looping & Conditionals syntax spec

Proposal - Looping & Conditionals

Goals

  • First class support for conditionally deploying resources or sets of resources
  • Simple to understand syntax without losing any of the power
  • Avoid introducing multiple ways to do the same thing - aim to unify resource-level and property-level looping constructs

Resource-Level Looping: Spec

Example of a regular resource with no modifiers:

resource <provider> <type> <identifier>: {
  ...
}

Conditional modifier

resource <provider> <type> <identifier> when <conditional_expression>: {
  ...
}

Example

resource azrm 'Microsoft.Network/networkInterfaces@2019-10-01' myNic when (deployNic == true): {
  ...
}

The type assigned to the declared <identifier> should be <resource type> | null. This should allow for compile time verification that the user is safely accessing the value with appropriate null checks.
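
For example (reusing the conditional example above; the null-check expression is illustrative only):

// myNic has type <networkInterfaces> | null, so downstream references must guard against null
output nicId string = myNic != null ? myNic.id : ''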

Looping modifier

// element_identifier gives access to the array element or object key
resource <provider> <type> <identifier> for <element_identifier> in <array_expression>: {
  ...
}

// optional index_identifier to get access to a loop index
resource <provider> <type> <identifier> for <element_identifier>, <index_identifier> in <array_expression>: {
  ...
}

Examples

// using the range(..) function to create a list of integers
resource azrm 'Microsoft.Network/networkInterfaces@2019-10-01' myNics for i in range(5): {
  name: 'mynic-${i}'
  ...
}

// using index plus element access
resource azrm 'Microsoft.Network/networkInterfaces@2019-10-01' myNics for name, i in nicNames: {
  name: name,
  properties: {
    index: i
  }
}

The type assigned to the declared <identifier> should be an array of the resource being declared.

<element_identifier> will be given access to the item in the current iteration of the loop.

Conditional and Looping modifiers

The MVP will not support both modifiers being combined. If a user wants to combine, they always have the option to use looping with <condition> ? <array_expression> : range(0) to access the same functionality.

Notes/Caveats

  • Looping is supported on arrays as well as objects. For arrays, the identifier allows iteration over elements. For objects, the identifier allows iteration over keys. We may want to consider allowing access to the index in both cases, as it's quite commonly used in ARM.
  • Line length could potentially get quite long, and the mixing of multiple keywords and identifiers may make it hard to understand which identifiers are doing what (e.g. which identifier is the resource).
  • Loop 'modifiers' such as ARM's batch count, and copy mode are not included in this spec, and some generic form of annotation will need to be added for this - being careful to avoid making it 'ARM-specific'.
  • The expression being passed to either the condition or loop statements cannot reference any resource directly or indirectly. This is so the full deployment graph can be built before any resources are deployed.

Potential future improvements not covered in this spec

  • Range expression (syntactic sugar for the range function): n...m
  • Combining both conditionals and looping on a resource

Property-Level Looping: Spec

Looping

Looping uses a very similar syntax to resource-level looping, and behaves like a map function, where the value generated is written inline, and has access to the item being iterated over. This syntax generates an array of items.

This syntax is valid in any place expecting an expression.

{
  <property>: for <identifier> in <array_expression> <value_expression>,
  ...
}

Example

{
  myLoop: for i in range(2) {
    name: i
  },
  myLoop2: for name in names '${prefix}-${name}',
}

Conditionals

Conditionals use a ternary syntax. This syntax is valid in any place expecting an expression.

{
  <property>: <conditional_expression> ? <val_a> : <val_b>
}

Notes/Caveats

  • It's difficult to unify with the resource-level conditional statement, so I've gone for a standard ternary syntax. We may want to think about making a ? b : null quick to write, as setting either some value or null is a common use case.
  • Again, optional access to an index would probably be quite useful.
  • The lack of visual separator between <array_expression> <value_expression> makes the looping syntax a little hard to parse.

What's in a namespace?

Discussion

Up until now, we have been assuming that the following programs would be valid:

Program 1

parameter resourceGroup string

variable location = resourceGroup().location

Program 2

variable resourceGroup = resourceGroup().location

In the above programs, the resourceGroup identifier points to a variable, parameter, or function depending on existing declarations or whether the identifier reference is a function invocation. Making identifiers ambiguous is a bad practice in language design.

(This was pointed out as a problem during my discussions with @bterlson and a few of the in-flight syntax proposals assume some solution to this problem as well.)

Options

1. Do Nothing

Leaving things as-is and ignoring language best practices is technically an option we could choose, but I'm not advocating for it.

2. Built-in symbols reserved in global namespace.

This would prevent the user from declaring a resourceGroup variable or parameter, but they could call resourceGroup(). The approach works with current JSON but prevents us from adding new functions in the future without breaking users. This makes it ultimately non-viable.

3. Built-in symbols in the "system" namespace and user-defined symbols in global namespace.

variable location = System.resourceGroup().location

I'm not a fan of this because most functions in use are built-in functions and the user would have to type in a lot more text. (IntelliSense can help with writing it, but not with reading it.)

4. Built-in symbols in the global namespace and user-defined symbols in other namespace(s).

variable location = resourceGroup().location
variable locationPlusName = variable.location + ' ' + parameter.name

This option makes it easy for us to add new built-in functions to the global namespace, but makes referencing a variable or parameter harder because you have to specify the namespace. (We can play with shortening variable.location to var.location as well here.)

5. Everything in global namespace with enforced naming convention

If the compiler enforces a naming convention on user-defined symbols, we can allow them to coexist in the global namespace with built-in symbols. For example:

parameter $name string
parameter $storageAccountName string = uniqueString(resourceGroup().id) + 'sa'

variable $location = resourceGroup().location
variable $locationPlusName = $location + ' ' + $name

The question remains what gets generated in the JSON for parameters and outputs (variables are less important because they do not represent the "contract" of the template). For the above example, do we generate $name and $storageAccountName parameters or name and storageAccountName? Given that the parameters are visible outside and the user may want to interop or convert an existing template to bicep, we should consider doing the latter even if it means a discrepancy between JSON and bicep.

6. Everything is in a namespace. User controls what is available in their scope.

This would require a using or import declaration to be specified to control what is available in the "global" namespace, but the user would be in control.

7. Built-in symbols auto imported into global namespace, but user-defined symbols have precedence.

The built-in symbols are automatically imported into the global namespace, but user-defined symbols have precedence over the built-ins. As a result, resourceGroup() resolves to a built-in function unless the user declares a resourceGroup variable or parameter. If that is the case, the user will get an error indicating that resourceGroup is not a function.

If the user has a resourceGroup parameter or variable declared and wants to use the built-in resourceGroup() function, they will do so with azrm.resourceGroup() where azrm is the namespace for the built-in symbols.
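
A small sketch of option 7's behavior, using the syntax from the earlier programs:

parameter resourceGroup string

// resourceGroup now resolves to the parameter, so the built-in function must be qualified
variable location = azrm.resourceGroup().location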

Modules

Modules

What's a module?

A module contains a coherent set of resources to be deployed together. Using modules promotes cleaner resource composition in bicep and flexibility in reusing/sharing deployment building blocks.

What about user-defined functions? They will be modeled as a separate concept, a library, which contains commonly used but complex expressions. The goal is to ease the tediousness of using expressions. This proposal doesn't cover design details for libraries and user-defined functions, which need further effort to understand user scenarios and expectations.

A module can be composed of other modules or libraries, but a library can't reference a module.

Why distinguish module from library? Because:

  • They serve separate use cases: deploying resources vs. reusing expressions, which have different expectations and limitations.
  • It makes it easier for the bicep compiler to compile and validate a deployment.
  • It makes engineering planning easier. The team can tackle a smaller problem area first and continue to evolve bicep over iterations.

Proposal

Design considerations

  1. A module should be an opaque container of resources to be deployed; it only exposes parameters and outputs to external components.
  2. Parameters and outputs are optional, but must be strongly typed if specified.
  3. A module may be composed of multiple .arm files. The name of the directory containing those files is the module's name. See meeting note with Anders
  4. The directory name serves as the module's namespace as well. Everything defined in the directory belongs to the namespace.
  5. A module's parameters and outputs are the union of those of its composing files.
  6. There's no "main" file serving as an entry point to a module. The Bicep compiler puts module files together as either nested or linked deployments. Defer details to future discussions.

Module file structure

Content in an example sqlDatabases.arm file:

param accountName string {
    defaultValue: 'sql-{uniqueString(resourceGroup().id)}'
    minLength: 3
    maxLength: 44
    description: 'Cosmos DB account name'
}

param databaseName array {
    defaultValue: [
        {
            'raw'
        }
    ],
    description: 'The variable determines the name of each database that is created. Duplicate names are not allowed'
}

resource[] sqlDatabases 'Microsoft.DocumentDB/databaseAccounts/sqlDatabases@2020-03-01' [
    for dbIndex in length(param.databaseName) {
        name: '{param.accountName}/{param.databaseName[dbIndex]}',
        dependsOn: param.accountName,
        properties: {
            resource: {
                id: param.databaseName[dbIndex],
                options: {}
            }
        }
    }
]

outputs: {
    sqlDatabases: {
        type: array,
        value: sqlDatabases[*].id
    }
}

A .arm file is used as a module file by default. We defer library files to future discussions.

To use the module:

module databaseIds 'sqlDatabases@<version>' {
    accountName: 'fooAccount',
    // parameter with default value can be omitted.
}

module is a keyword that the compiler understands to search for and load the module file. The version name can be initially omitted for local modules only.

Module search path

The design boils down to what we imagine the bicep local development folder structure would look like.

Design goals:

  • Promote ease of sharing/reusing modules.
  • Easier to organize files as local development evolves.
  • Consistent experience when working with command line or VS Code.
  • Easier integration with CI/CD pipelines.
  • No friction when publishing to or pulling from future module registry.

Based on the discussions below, we recommend having module files in a separate folder from main files for ease of sharing/reusing modules. The compiler needs to know where to find module files. There are several design options:

We decided to use relative paths as agreed in the design meeting.

  1. Having a "root" folder and specify its path in a bicep settings file (bicep.config in the example structure below).

    /bicep/modules/module1.arm
    /bicep/modules/module2.arm
    
    /bicep/main1/main1.arm
    /bicep/main2/main2.arm
    or:
    /bicep/main1.arm
    /bicep/main2.arm
    
    /bicep/bicep.config
    

    For VS Code, users can do code . under /bicep folder. This opens the bicep workspace.

    For command line, users can specify bicep config file via a compiler option, such as:
    bicepc -settings ./bicep.config in which bicepc means bicep compiler.

  2. Locating modules with relative paths.
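
With the relative-path approach, a module reference might look like this (a sketch reusing the module syntax above):

module databaseIds '../modules/sqlDatabases.arm' {
    accountName: 'fooAccount'
}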

BBurns meeting notes 7/23/2020

Parameter, variables and outputs

  • Brendan really liked separating the export from the variable assignments as a way to separate contract from the implementation. The syntax would be something like the following:
// contract section
parameter foo string
export bar string

...
...
...

// somewhere in the middle of the file
var bar = <expression>

...
...
...
...
  • The types on exports could be optional but not a strong opinion.
  • No issue with switching to decorators on parameters
  • We should support a multi-parameter and multi-export syntax (similar to #23). Suggested syntax is something like this:
parameters (
  foo string
  bar int
)

exports (
  someOutput bool
  otherOutput string
)

Brackets on resource loops

  • We're ok using the resource[] syntax. An alternative suggestion would be to make resource more a type than a keyword. For example:
// single resource
resource<Microsoft.Storage/storageAccounts@2019-07-01> myStorage = {
  name: '...'
}

// resource loop
resource<Microsoft.Storage/storageAccounts@2019-07-01>[] myStorage = [
  for foo in foos: {
  name: '...'
}]

Filters

We're fine to defer the filter implementation. Can potentially investigate whether resource loops and conditions can be combined to achieve resource filtering, but it won't work for value filtering.

String literals

  • No issue with defaulting to interpolated strings
  • Suggested adding a string syntax without interpolation enabled (similar to verbatim @ strings in C#). It would make it possible to type certain symbols like $ without requiring escapes.
  • Suggested implementing a multi-line/block strings. No preference on syntax. Suggested yaml block strings, but they do appear to have limitations around white-space sensitivity.
  • Discussed removing quotes from string literals to make it more like YAML, but cannot do that due to expressions.

Automatic Semicolon Insertion/whitespace sensitivity

  • We need to ensure that there are no ambiguities due to ASI like in JS.

Decompiler/Round-tripping

  • Decompiler should be a separate tool from core bicep intended for migration only. Doesn't need to be perfect.

Resource declarations

  • Dependencies could be a decorator instead of a property in the resource body to separate metadata from resource body.

Modules

  • Module reference syntax looks good.
  • We should consider supporting "native modules" (referencing a JSON template as if it were a bicep module). This will simplify transition without forcing people to rewrite all their stuff.
  • Supportive of multi-file modules with single-file option.
  • We have a potential issue with current module design where the user could accidentally include a file in a module simply by dropping it in a folder. This could result in unnecessary resources being added in the module or simply squiggles in VS code due to symbolic name clashes. Suggestion is to have an optional module name declaration at the top of the file to make opt-in intentional. (Files in the same module would have to specify the same module name matching the directory name.)

[Vscode extension] Add telemetry

Add information such as:

  • Dotnet version running (after getting it from vscode-dotnet-runtime)
  • Workspace settings to define (ie. installation path)

Simplifying resource declarations

On Thursday, we had a review with the DevDiv PM team about the syntax we have closed on. We were reviewing the resource syntax and comparing it to its HCL equivalent.

Bicep:

resource myStorageAccount `Microsoft.Storage/storageAccounts@2017-10-01` = {
  name: storageAccountName
  location: resourceGroup().location
  kind: 'StorageV2'
  sku: {
    name: 'Standard_GRS'
  }
  tags: {
      environment: 'staging'
  }
}

HCL:

resource "azurerm_storage_account" "example" {
  name                     = "storageaccountname"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "GRS"

  tags = {
    environment = "staging"
  }
}

Things to note between the two:

  • In HCL, there is no equivalent of the properties property. Everything is flattened. This makes things simpler, but in order for us to do something similar, it means that we would need a method for resolving conflicts. For example, microsoft.web/sites has a standard name property and a separate properties.name property. From the TF docs, it looks like they use the same name for both: https://www.terraform.io/docs/providers/azurerm/r/app_service.html
  • resource_group_name is required because everything is in the context of the subscription, not a specific RG
  • kind is required in ARM, but is defaulted in HCL
  • In ARM, the property sku.name combines both the account_tier and account_replication_type
    • sku.tier is also a valid property, but I don't (think) it's required
    • the HCL property names are a bit easier to understand, but that is because they renamed properties that don't exist in ARM. We have heard feedback that Terraform is not an "honest" representation of Azure, and this feels like evidence of that.
  • IMO, this is a result of HCL making up for bad Azure resource APIs. We could attempt to make up for them as well, but this would put us on the same treadmill as TF. I'd rather us put more effort into things like ADL to fix the root problem, which is the APIs themselves.

cc @paulyuk @satyavel @neilpeterson

DevDivPM Bi-weekly meeting notes (7/30/20)

Modules

Spent nearly the whole hour walking through modules in-depth

  • With modules, what do we lead with in docs -- raw resource declaration or a module from the registry?
  • tooling for modules:
    • we agree parameters should validate against param modifiers like allowedValues, but also validate against the schema where possible even if it passes parameter validation
    • we only have so many modifiers in the ARM runtime, but we don't have something like a regex for more advanced validation. We could add something that would only run client side, since regex running server side has its own challenges
    • aspirationally, we'd also like to add better custom object validation
  • can we output a list of resources? what is the type?
    • we haven't closed on this yet
  • can we have a top level import to simplify module filepaths
  • fix default -> defaultValue in module spec (cc @majastrz )
    • we should align to either default or defaultValue in both contexts
  • how do overrides work -- "I love this module and this bicep template is exactly what I want, but I want to use my custom vnet"
    • through docs and tooling, can we make it easy to author good modules
      • can we create a module that does the above?
      • how do we verify that "it's possible" vs "it's decent, and not disgusting"
  • databases.outputs.sqldatabases
    • should it just be databases.sqlDatabases
    • if we require output, then we can keep adding new functionality (e.g. databases.resources)
  • what is the method to suppress schema warnings?
    • an explicit suppression operator
    • can we show "inner" error for "black box" or only outer error on parameter?
    • maybe we do for local modules, because they have a chance of fixing it
      • remote they may be able to fix it, but would take longer
    • what if the parameter is assigned to multiple things..
      • anywhere where we know we are giving a false positive, we can't block
  • can we get a telemetry event when the user suppresses a warning?
    • is that allowed in the language server?
    • should opt-in rather than opt-out?
      • could we annotate in the ARM template with a comment?
      • could add a metadata blob
      • could have an explicit "report this" action from the tooling

mocked up a list of modules example, since we don't have one in the spec yet. This would result in things.length number of nested deployments in the compiled JSON.

module databases '../sqlDatabases' = [for thing in things: {
    accountName: '${thing.name}fooAccount'
    // parameter with default value can be omitted.
}]

Type Providers

Proposal - Type Providers

Motivation

  • Try and simplify the resource declaration syntax
  • Provide better autocompletion capabilities
  • Avoid repetition of api-versions
  • Avoid magic string handling
  • Attempt to make the language less ARM-centric

Type Providers

The current resource declaration syntax that has been specced looks like:

resource azrm 'network/networkInterfaces@2019-01-01' myNic: {
    ...
}

We've got a jumble of identifiers and strings, and it's hard to follow what each token is doing. There's also a lot of repetition when multiple resources are being deployed, and the identifier that ends up being assigned is at the end of the line.

Proposal

Declare the imported providers at the top of the file (similar to a JS/TS import):

use 'arm/network/2019-01-01' as network

Consume the provider as follows:

resource network:networkInterfaces myNic: {
    ...
}
  1. The use of the : after the imported provider type helps clarify what is the identifier. Note that we may need to pick a different symbol if this clashes with other uses of :. This should also give us a better chance at IDE completion.
  2. This makes upgrading API versions very straightforward if desired.
  3. See proposal-annotations.md for a proposal on how this can be further split away from the resource declaration syntax.
  4. The namespacing story is much simpler for future extensibility - rather than forcing a dedicated azrm token to be included.

Notes

  • This will require us to maintain a set of type definitions, and we'll need to decide how to handle the case where a type definition is not yet available for a new api version.
  • This should also help to unify with the proposed module syntax.

user-defined functions

Proposal - Functions

Goals

  • Avoid code repetition both in-file and across files.

Non-goals

  • The cross-file sharing syntax should be defined in the modules spec, so is not currently included here.

Spec

The function keyword provides a very simple mechanism for defining a pure function:

function <identifier>(<<arg> ,>*) {
    variable <name>: <value>
    ...

    return <output>
}

A function creates a new scope, and cannot reference any identifiers from an outer scope; the only identifiers initially available are those defined as arguments. The function body can consist of variable definitions to break things up, and the function must terminate with a return value. Other keywords such as resource, input and output are not supported inside a function.

A function body may access another user-defined function, or a built-in function, but no form of recursion is allowed.

Functions are not annotated with types, but types should be inferred where possible when the function is called.

Example

// defining the function
function getConnectionString(storageAccount) {
    variable primaryKey: storageAccount.listKeys().primary

    return 'DefaultEndpointsProtocol=http;AccountName=${storageAccount.name};AccountKey=${primaryKey};'
}

// calling the function
variable myConnectionString: getConnectionString(myAccount)

// this should throw a type error because the boolean type does not support `listKeys()`:
variable myConnectionString: getConnectionString(false)

Region folding

Fold regions with matching # pairs, such as #parameters or #variables.
Regions can be nested, but can't be partially overlapping. The language server should be able to calculate the lines to fold/unfold.
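
One possible reading of this proposal (purely illustrative; the marker syntax is not settled):

#parameters
parameter prefix string
parameter suffix string
#parameters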

DevDiv Meeting Notes 6/12/2020

Semantic Model

  • Allow invalid symbols to exist
  • Make sure that all duplicate declarations are invalid - don't pick favorites.
  • Make sure that all cyclic identifiers are marked cyclic - don't pick favorites
  • A contagious any type may also be useful to allow error recovery and for untyped declarations.
  • It may make sense to introduce an internal "error" type to distinguish from any type.

Errors

  • In cases of type conflicts on duplicate declarations, choose type that captures them both to avoid follow-on errors (with the worst case being any type if there's no commonality)
  • May consider a high upper limit on errors. (TypeScript rarely hits it.)

Telemetry

  • Don't phone home
  • Can consider crawling public repos as a test of the compiler and language server.

Unicode

  • Investigate UAX #31 as a baseline for identifier character patterns

Formatter

  • There may not be a single canonical format for a language declaration (one line small array vs. multi-line large array, for example)
  • Formatter may need to preserve some of the user's formatting.
  • Simplest option is to have a formatter that walks the AST and writes out text directly.
  • More complex formatter may involve tree transformations - more capable but more memory intensive.

"watch" command for bicep CLI

Enable quick iteration by watching a particular .arm file and continuously compiling it whenever changes are saved

"deploy" command for bicep CLI

This would both compile the bicep code into ARM JSON and deploy it. I'm not sure if it would need to generate a temporary azureDeploy.json file or if it could keep everything in memory.

As a prerequisite, you would need to have either Az CLI or Azure PowerShell installed and the Bicep CLI would simply call that. I'm imagining users can create a config file to specify all the parameters required for a deployment:

  • Az CLI or PowerShell
  • Name of parameters file (we will assume a default of parameters.arm or something like that, btw we need to figure out parameters :) )
  • target scope
  • use --confirm or not
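
Pulling those options together, a hypothetical invocation could look something like this (the command name and flags are illustrative only and do not exist today):

bicep deploy ./main.bicep --parameters ./parameters.arm --scope my-rg --confirm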

Anonymous resources

Anonymous resources

Current Syntax

resource dnsZone 'Microsoft.Network/dnszones@2018-05-01': {
  name: 'myZone'
  location: 'global'
}

Proposal

Most resources in a template do not need to be referenced. Why does the user need to provide a 2nd name (in addition to the name property) that serves no purpose?

This could look like the following:

resource 'Microsoft.Network/dnszones@2018-05-01': {
  name: 'myZone'
  location: 'global'
}

Considerations

  1. Compiled template will deploy an anonymous resource like any other.
  2. We could use a discard syntax like in C# but that would force users to type _ every time. Seems annoying as well.
  3. Do we show resource declarations without identifiers in Show All Symbol Definitions in VS code? I think we should because they will be an interesting navigation point, but will need to make up a name somehow.

Input annotation spec (defaultValue, minValue, maxValue etc)

Proposal

Current Syntax

input string rgLocation;

New Syntax

Based on latest conversations with Anders, I'm proposing the following syntax for declaring inputs:

Minimal Declaration

input myString: string
input myInt: int
input myBool: bool
input myObject: object
input myArray: array

Secure parameters

input myPassword: string { 
  secure: true
}

input mySuperSecretObject: object { 
  secure: true
}

Enum Parameter

input myEnum: string {
  allowedValues: [
    'one'
    'two'
  ]
}

Default value

input myParam: string {
  defaultValue: 'foo'
}

String length constraint

input storageAccountName: string {
  minLength: 3
  maxLength: 24
}

Integer value constraint

input month: int {
  minValue: 1
  maxValue: 12
}

Description

input myObject: object {
  metadata: {
    description: "There are many like this, but this object is mine."
  }
}

Combined modifiers

input storageAccountName: string {
  minLength: 3
  maxLength: 24
  defaultValue: concat(uniqueString(resourceGroup().id), 'sa')
  metadata: {
    description: "Name of the storage account"
  }
}

Considerations

  1. Anders' feedback was to call this parameter instead of input to match the JSON. I don't have a strong opinion on this one. However, input is shorter.
  2. Declaring a non-secure parameter with a default value is the most common scenario. Can we simplify it somehow?

References

  1. Template Parameter Syntax Reference

Features and Milestones

When editing this, please cut and paste bullets around. The intent is to capture the differences between milestones.

M0

First engineering milestone.

Language Features:

  • Parameters
  • Variables
  • Resource declarations
  • Language validation (not resource schema validation)

VS Code Extension+Language Server Features:

  • Basic syntax highlighting (TextMate-based)

Misc. work:

  • Setup build, packaging, and test infra for .net and TS parts

M1

Second engineering milestone

Language Features:

  • Built-in functions

VS Code Extension+Language Server Features:

  • Squiggles
  • Basic statement completions

Misc. work:

  • MS Build task for compiling the DSL in a pipeline

v0.1 (MVP) - ETA 8/15

The first public release.

Language Features:

  • Outputs

VS Code Extension+Language Server Features:

  • Full syntax highlighting

Other stuff:

  • Web-based Bicep compiler/playground

Docs:

  • Quickstarts (?) for all supported syntax
  • Reference doc for converting ARM Templates to Bicep (until we have a decompiler)

v0.2 - ETA 9/15

Language Features

  • Modules
  • Allow the Microsoft.Resources/deployments to reference another .arm file
  • Expressions
  • Full go to def
  • Find all refs

v0.3 - ETA 11/15

Language Features

  • loops
  • conditionals
  • Property access on symbolic resource references
    • GET property access (sugar for reference(resourceId()).myProp)
    • PUT property access (e.g. location, apiVersion, sku, properties.*, etc.)

JSON Semantics:

  • JSON will need to be updated to handle property list comprehension

Ecosystem:

  • Convert ARM Tools snippets to Bicep extension
  • Decompiler

v0.4 - ETA 12/15

Ecosystem:

  • Module registry (includes support for retrieving from any URL)

Post-MVP

Features not included in or cut from previous milestones.

JSON semantics:

Language Features:

  • First-class support for deployments across scopes
    • this will already be possible via manually declaring a Microsoft.Resources/deployments resource
  • User-defined functions
  • case/switch (Azure/azure-resource-manager-schemas#1017 (comment))
  • External resource references
    • will be able to use reference() and resourceId() as soon as we support all built-in functions
  • create resources in alternative scopes

VS Code Extension+Language Server Features:

  • Full statement completions
  • TBD
  • Resource schema validation (with escape hatches)

Outputs syntax

Current Syntax

Here is an example of existing parameter and variable syntax:

parameter storageAccountName string
variable location = resourceGroup().location

Proposal

For consistency with the similar existing language constructs, I'm proposing the following syntax for outputs:

Basic output declarations

// output value referencing a resource identifier
output myEndpoint string = myResource.properties.endpoint

// hard-coded output
output myHardcodedOutput int = 42

// output with a value calculated by a loop (output copy loop in JSON):
output myLoopyOutput array = [for myItem in myArray {
  myProperty: myItem.myOtherProperty
}]

Outputs with modifiers

To declare an output that compiles into a template output with secure* type, we will need to leverage the modifier syntax.

Option 1

// output that returns a password
output password string { secure: true } = listKeys(myResource.id)

Option 2

// output that returns a password
output password string { 
  secure: true
  value: listKeys(myResource.id)
}

Considerations

  1. @lwang2016 in parts of #58 and @alex-frankel in #23 have proposals for grouping multiple outputs and multiple parameters into one declaration. If we decide to take it, the syntax should be applied consistently to both parameter and output declarations.
  2. The modifier syntax doesn't play nicely with = and feels rather awkward. Should we revisit the decorator approach?

DevDiv meeting notes 6/3/2020

Syntax feedback

  • Use = in variable assignments over : because it's more recognizable to everyone.
  • Keep modifier syntax for parameters (without :)
  • Use = there for the one-liner default value syntax.
  • Do not enforce any specific ordering of statements (like inputs at the top). Makes it easier for the user to merge files and does not force forward references. "You're just declaring stuff - order doesn't matter."

Compiler implementation feedback

  • For compile-time constants, allow the grammar to accept expressions even if they are not allowed. Reject the usage at type-checking time.
  • Consider constant folding if needed but ok to start with single literal at first.
  • Optimizations can be implemented wherever they make sense. No particular rules there.

Errors

  • Use error codes. Useful for stack overflow searches and other things.
  • Log errors in format file(line,column 1-based): message (should match what msbuild does). The format is well-recognized by tools.

support a multi-* declaration (should support params, outputs at least, but likely more)

Parameters probably should always be declared at the top of the file anyway, and this should help with readability and terseness:

parameters {
  prefix string = 'my string'
  suffix string {
    secure: true
  }
  ipRange object {
    defaultValue: {
      myProp: 'my value'
    }
  }
}

would be nice if I can separate over multiple-lines:

parameters {
  prefix string = 'my string'

  suffix string {
    secure: true
  }

  ipRange object {
    defaultValue: {
      myProp: 'my value'
    }
  }
}

Rough Compiler Notes

Lexer/Parser

  • Use a handwritten recursive-descent LL(r) parser for good error messages and error recovery. We should maintain a reference grammar with BNF syntax checked into this repo.
  • Largely follow architecture from Roslyn and Typescript as they are mature with strong built-in support for tooling & analysis.
  • Syntax should be round-trippable (notes from Roslyn here). We should have many tests to assert this.
  • String interpolation should be built into the grammar and not implemented as a separate grammar (follow Typescript, not Roslyn here).
  • Multiline strings must either keep or strip leading whitespace, and either keep or convert \r\n for newlines.
  • Keywords should be context-sensitive (handled by the parser, not the lexer). Users shouldn't be forced to escape keywords if used in e.g. resource properties declaration.
  • Keywords should be kept short where possible.

Middle End

  • The middle end should produce a platform-agnostic dependency graph of resource declarations, so that the backend can be swapped out to target e.g. OAM.

APIs

  • We should aim to expose a simple framework for linting and analysis to cover aesthetic as well as semantic problems. This could also be used internally for checks such as 'resource type cannot depend on deferred values' etc.

Useful Reference Links

  • Semantic & Syntax LSP hooks in Roslyn: semantic, syntax.
  • String interpolation scanner from TS compiler here

DevDiv meeting notes 6/24/2020

Loops

  • Unfortunately, I didn't keep detailed notes from loops. @shenglol's looping syntax seemed viable.

Decorators

  • Strongly discouraged use of decorators if we can help it. The alternative would be to add modifiers to the loop construct which isn't really great, either.

Modules

  • How do we version modules?
  • How does module naming work?
    • When you say import Foo, how does Foo get found?
    • Is there any correspondence between the physical naming (files on the file system) and the logical naming (Foo in the example above)
    • Using physical naming is simpler to implement and removes the need for project files (TS/JS follows this model.)
    • Using logical naming requires project files and complex reference resolution logic (C# follows this model)
  • Does importing a module create a namespace? Or does it work differently?
  • Try to avoid the temptation to do import * because it requires a clear lookup order and precedence
  • Do we allow modules to be composed out of other modules?
  • Can items out of a module be imported separately? Is there a syntax to alias them?
  • Do we adopt a folder-based convention? (All files in a directory is considered a module in terraform, for example)
  • Do we specifically declare a particular file an "entry point" or is any file a module? (It didn't come up at the meeting, but if we choose to designate an entry point, how do you test a module?)
  • Due to the nature of ARM template JSON, modules will be instantiations based on parameter replacement. If multiple modules import the same module with the same or different parameters, we may not be able to avoid the duplication. Theoretically we could intern/dedup by parameter values, but it's not possible to do it completely by static analysis in the compiler for all possible cases.
  • There are 2 main use cases for modules: deploying resources and exporting user-defined functions or outputs. Should we split the latter into "libraries" which would be side-effect free, but could be shared between other modules without duplication. (At least for the same library version.)

Initial implementation language comparison

Goals

Document the choice of the initial implementation language for both the ARM language compiler and the language server.

Non-Goals

Selecting the final (or GA) implementation language for the compiler and language server. As this project matures, we expect to revisit and change this decision.

Candidates

We've identified the following primary candidates for initially implementing the compiler and language server (listed alphabetically):

  • C# on .net Core
  • Go
  • TypeScript on Node

We have also considered C++ and Rust. We have found no compelling reason to use either for this project.

C#

Pros

  • Decent ecosystem
  • Fast to iterate on due to team expertise in language, runtime and tool chain.
  • Easy integration with ARM C# codebase and Azure PowerShell
  • Can leverage Roslyn (C# compiler) source code and design directly
  • Runtime dependency can be mitigated via single-file trimmed packaging (https://docs.microsoft.com/en-us/dotnet/core/whats-new/dotnet-core-3-0#single-file-executables)
  • LSP packages are available, published and maintained.
  • Matches the code base of existing ARM deployment engine.

Cons

  • Requires a custom delivery mechanism for CLI
  • Least attractive to OSS development out of these options

Go

Pros

  • No runtime dependency
  • Small binary size
  • Attractive for OSS contributions

Cons

  • Unfamiliar development tool chain
  • More difficult to leverage existing solutions from Roslyn or TypeScript compilers as the source has to be ported to Go. This also constrains the help we can get from DevDiv SMEs.
  • Requires a custom delivery mechanism for Azure PS and CLI
  • LSP packages are not published (have to fork from go-langserver repo and get non-standard license cleared by LCA)

TypeScript

Pros

  • Strong ecosystem
  • Fast to iterate on as the team has some pre-existing TS expertise (and it’s quick to learn for C# devs)
  • Attractive for OSS contributions
  • Straightforward to host in-browser
  • LSP packages are available, published and maintained.
  • Can leverage TypeScript compiler source code and design directly

Cons

  • Dependency on node runtime
  • Unusual choice for CLI tools
  • Requires a custom delivery mechanism for Azure PS and CLI

Conclusion

With everything else being equal, Go would appear to be the best choice due to its OSS attractiveness as well as the small size of its dependency-free native binaries. However, we lack expertise in designing languages, building compilers, and building language services, which are generally considered challenging problems to solve. While we build up that expertise, we should not do so on a completely unfamiliar tool chain. In addition, Go is not common at Microsoft, which constrains how much we can leverage from DevDiv solutions in this space. (There are open source Go libraries we can borrow from, but the in-person engagement we get with DevDiv can't be beat.)

C# on .NET Core will be used as the initial implementation language. The team has the most familiarity with C# and the surrounding tool chain, which includes build, testing, packaging, debugging, etc. We are aware that this choice makes the project less attractive to external contributors. However, we expect the majority of contributions in the early phases to come from this team, so this should not be a concern. We are keeping the door open to a full rewrite in Go in the future.

The issue of runtime dependency with .NET Core can be mitigated via the single-file executable feature available in .NET Core 3.0 (https://docs.microsoft.com/en-us/dotnet/core/whats-new/dotnet-core-3-0#single-file-executables). During our investigation, an experimental parser produced a 10MB executable.

Simple array types in parameters and outputs

Current

The current syntax for parameters and outputs allows the user to specify array as the type. This type is equivalent to the any[] type in TS (each item can be of any type, and items can be of different types).

Proposal

Allow the user to declare the item type for arrays of bool, string, and int types.

Non-Goals

There exists a separate problem of user-defined object types. This proposal does not attempt to address that at all.

Examples

Simple one-level arrays could look like this:

parameter myStringArray string[]
parameter myIntArray int[]
output myBoolArray bool[] = <expression>

Nesting would be allowed and would look like the following:

parameter myArrayOfStringArrays string[][]
parameter myArrayOfIntArrays int[][]
output myArrayOfBoolArrays bool[][] = <expression>

Compiled JSON

These narrower array types do not exist in the template JSON. We would compile them all to the array type until the runtime supports narrower types, so there would be no runtime checking for this. When referencing bicep modules, bicep would be able to use the enhanced type information to produce better/more compile-time errors.

In the native module case (referencing a compiled or hand-written JSON template), bicep would have to fall back to any[] type unless the extra metadata needed can be embedded in the JSON.
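
For illustration, a declaration using the proposed item types would still compile down to a plain array parameter in JSON; the item type would exist only in the compiler. A minimal sketch:

Bicep:

parameter myStringArray string[]

Compiled JSON (sketch):

"myStringArray": {
    "type": "array"
}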

Type system

Types

We will support the following types:

  • Built-In Types
    • error
    • any
    • string
    • object
    • array
    • bool
    • int
    • number
    • resource
  • named non-resource types (such as azrm://ResourceGroup, azrm://Deployment, azrm://Parameter, etc.)
  • named resource types (such as azrm://Microsoft.Network/virtualNetworks@2019-06-01)
  • union types (needed to represent functions accepting multiple parameter types)
  • discriminated union types (needed to represent polymorphic resource types)

Union types

One example of why we need union types is the concat() function (and any other function with multi-type arguments).

One possible way to model the concat function using pseudo-TypeScript:

concat(x: string | int, y: string | int, ...): string
concat(x: array, y: array, ...): array

Resource Managers

In past discussions, we kept calling azrm a "provider", but that term is too similar to an ARM Resource Provider. I propose we call this a "Resource Manager" (RM for short) instead.

Considerations

  1. Built-in types (string, number, int, bool, array, object) are not specific to an RM.
  2. Named resource and non-resource types are RM-specific.
  3. Functions need to be split into RM-specific and non RM-specific groups.

Implicit conversion

Here are the rules that determine how values can be assigned (a short sketch follows the list):

  • all values are assignable to declarations of the same type
  • all types are assignable to any
  • int is assignable to number
  • int is assignable to int | string
  • string is assignable to int | string
  • resource is assignable to object
  • azrm://ResourceGroup is assignable to object but not to resource
  • azrm://Microsoft.Network/virtualNetworks@2019-06-01 is assignable to resource
  • azrm://Microsoft.Network/virtualNetworks@2019-05-01 is NOT assignable to azrm://Microsoft.Network/virtualNetworks@2019-06-01 or vice versa
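
As a rough illustration of a few of these rules (a sketch only, using the resource and output syntax seen elsewhere in this document):

resource vnet 'Microsoft.Network/virtualNetworks@2019-06-01' = {
    name: 'myVnet'
    location: resourceGroup().location
}

// allowed: a resource value is assignable to object
output vnetAsObject object = vnet

// not allowed: a 2019-05-01 virtualNetworks value could not be assigned where a
// 2019-06-01 virtualNetworks value is expected (and vice versa)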

Explicit conversion

Explicit type conversions should be performed via function invocations if such a function is available.
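
For example, a minimal sketch using the existing int() and string() template functions (assuming they are exposed to bicep as ordinary function invocations):

parameter retryCountText string default '3'

variable retryCount = int(retryCountText)      // explicit string -> int conversion
variable retryCountLabel = string(retryCount)  // explicit int -> string conversion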

Resource symbolic name

Resource Symbolic Name

This article proposes a design for adding resource symbolic names to ARM template language in json.

Challenge

The Bicep language makes ARM deployment composition simpler and more intuitive compared to the current template language in JSON, particularly when defining and referencing resource loops.

The DSL compiler translates bicep files into an intermediate language in JSON that the ARM template engine must understand in order to prepare resource deployments. However, this translation runs into challenges caused by the template language's limited capability for referencing entities in resource arrays.

For example, a bicep file that creates multiple storage accounts and outputs their blob endpoints:

parameter storageAccountNames array {
    default: ["account1", "account2"]
}

resource[] storageAccounts 'Microsoft.Storage/storageAccounts@2019-04-01': [
    for storageAccountName in storageAccountNames: {
        name: storageAccountName
        location: resourceGroup().location
        sku: { name: "Standard_LRS" }
        kind: "Storage"
        properties: {
            storageProfile: {
                dataDisks: [
                    {
                        diskSizeGB: 1024
                        createOption: "Empty"
                    }
                ]
            }
        }
    }
]

output blobEndpoints array = [
    for storageAccount in storageAccounts: { storageAccount.primaryEndPoints.blob }
]

Notice that Bicep outputs the blob endpoints easily because it can reference the storage account resources using the symbolic name storageAccounts and enumerate through the individual entities by the symbolic name storageAccount as well.

A corresponding template snippet would look like:

"parameters": {
    "storageAccountNames": {
        "type": "array",
        "defaultValue": ["account1", "account2"]
    }
},
"resources": [
    {
        "type": "Microsoft.Storage/storageAccounts",
        "apiVersion": "2019-04-01",
        "name": "[parameters('storageAccountNames')[copyIndex()]]",
        "location": "[resourceGroup().location]",
        "sku": {
            "name": "Standard_LRS"
        },
        "kind": "Storage",
        "properties": {
            "storageProfile": {
                "dataDisks": [
                    {
                        "diskSizeGB": 1024
                        "createOption": "Empty"
                    }
                ]
            }
        },
        "copy": {
            "name": "storagecopy",
            "count": "[parameters('storageAccountNames').length]"
        }
    }
],
"outputs": {
    "blobEndpoints": {
        "type": array,
        "value": ?
    }
}

Notice outputs.blobEndpoints.value: there's no easy way in the template language to reference the array of storage accounts and their primary endpoints. This is because the resources property is defined as an array.

Proposal

Change the resources property type to object instead of array. This allows assigning symbolic names via "<symbolic name>": "<resource declaration>" pairs. A resource's symbolic name represents its state at runtime.

Extend the template engine so that the function reference('<symbolic name>') is able to retrieve resource state using its symbolic name.

The above template snippet would then look like:

"parameters": {
    "storageAccountNames": {
        "type": "array",
        "defaultValue": ["account1", "account2"]
    }
},
"resources": {
    "storageAccounts": {
        "type": "Microsoft.Storage/storageAccounts",
        "apiVersion": "2019-04-01",
        "name": "[parameters('storageAccountNames')[copyIndex()]]",
        "location": "[resourceGroup().location]",
        "sku": {
            "name": "Standard_LRS"
        },
        "kind": "Storage",
        "properties": {
            "storageProfile": {
                "dataDisks": [
                    {
                        "diskSizeGB": 1024
                        "createOption": "Empty"
                    }
                ]
            }
        },
        "copy": {
            "name": "storagecopy",
            "count": "[length(parameters('storageAccountNames'))]"
        }
    }
},
"outputs": {
    "copy": [
        "name": "blobEndpoints",
        "count": "[length(reference('storageAccounts'))]",
        "input": "[reference('storageAccounts')[copyIndex()].primaryEndPoints.blob]"
    ]
}

Compatibility with existing template pipeline

The existing template engine pipeline needs to be extended to support both the current (v1) and the new (v2) template schemas.

The template function reference takes a resource name or id as its argument; it also needs to be updated to understand resource symbolic names. Consider a scenario with one resource that has symbolic name x and name y, and another resource that has symbolic name y and name x. Calling reference('x') would be ambiguous because 'x' could refer to either the resource name or the symbolic name.

To eliminate the ambiguity, the reference function takes a symbolic name or resource id as its argument in template v2. The template engine can easily tell whether a template is v1 or v2 by checking the resources JToken type: array for v1 and object for v2.

Similarly, the dependsOn property allows providing either a resource name or a resource id, and its behavior is changed to support either a symbolic name or a resource id in the new schema.

The current template pipeline requires certain template expressions to be evaluated before deployment dependencies are calculated in order to have a deterministic dependency graph. For example, a resource condition must be evaluated to a boolean value to determine whether a resource should be included in the dependency graph; a loop count must be evaluated to decide the number of resources to be created in the deployment.

To address the above restrictions, a symbolic name in bicep will be compiled into separate template functions: one for template-phase evaluation and the other for deployment-phase evaluation. We introduce a new template function "[resource('symbolicName')]" for template-phase evaluation; the properties it accesses must have values already specified in the template. We keep "[reference('symbolicName')]" to refer to deployment-phase evaluations.

Symbolic Name Compilation Outputs

Scenarios:

Reference resource name

Bicep: symbolicName.name

Json: "[reference('symbolicName', 'Full').name]"

Reference resource property

Bicep: symbolicName.properties.propertyName

Json: "[reference('symbolicName').propertyName]"

Reference array of resources

Bicep: arraySymbolicName.length

Json: "[length(reference('arraySymbolicName'))]"

Reference an array item

Bicep: arraySymbolicName[index].properties.propertyName

Json: "[reference('arraySymbolicName')[copyIndex()].propertyName]"

Reference nested deployment outputs

Bicep: deploymentSymbolicName.outputs.outputName.value

Json: "[reference('deploymentSymbolicName', 'Full').outputs.outputName.value]"

dependsOn resource

Bicep: dependsOn: [symbolicName1, symbolicName2]

Json: "dependsOn": ["symbolicName1", "symbolicName2"]

BT Meeting Notes 7/30/2020

We talked about a few things today, but the main items are as follows:

Type System

TS treats the any type in a special way: all types are assignable to any, and any is assignable to all types. This eliminates the need for an unknown type to represent the type of an expression for which we don't have a schema. (For example, JSON schemas may be incomplete, or we may lack the type info for the result of the listKeys() function on a storage account.)

It comes at the cost of having to check for any in a lot more places, but it's a validated pattern, so we will most likely adopt it.

ASI vs auto newline skipping

To make the language less whitespace sensitive, we were considering adopting automatic semicolon insertion (with simpler rules than JS). The issue I have with that is it requires us to add semicolons to the language when we don't really want to. (The semicolons would be optional, but they would exist.)

A simpler approach would be to do automatic newline skipping instead. The lexer would continue emitting the newline tokens as it does today, but the parser would only use them as-is in places where a newline is expected. In other places, the newline tokens would be incorporated in the existing tokens as trivia (this is how we treat other white space and comments). Brian confirmed that the approach is viable and has done this before in a parser.

The downside is that all token consumption code in the parser will need to be aware of this. To avoid this becoming a bug factory, we will mitigate by creating tests that rewrite valid samples from the repo and insert extra white space and comments in every possible place in the file.

Support STDIN input

Add support for STDIN input like:

echo "..." | bicep build - or echo "..." | bicep build

IL Limitations

Problem

When implementing the code generation part of the bicep pipeline, we encountered the following limitations:

Complex literals

The IL lacks proper support for complex literals such as arrays and objects. This forces us to use the json() function to compile them. This works fine when a function or operator is operating on complex literals that themselves don't contain any expressions, but falls apart when they do.

Consider the following:

variable foo = concat([
  'a'
  concat('b', 'c')
], [
  'd'
  uniqueString('a')
])

One way to compile that is to generate the expression [concat(json('["a", "[concat(''b'', ''c'')]"]'), json('["d", "[uniqueString(''a'')]"]'))]. Unfortunately, expressions inside the string passed to the json() function are not evaluated by the runtime, so this will not work as expected.

An alternative approach would be to break the expression up into smaller variables that can be compiled individually. This will work as long as the expressions don't contain functions that require inlining, such as list* or reference(). A rough sketch of this approach follows.
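
For example, the earlier foo variable could be broken up roughly like this (the intermediate variable names are hypothetical):

"variables": {
    "fooPart1": [
        "a",
        "[concat('b', 'c')]"
    ],
    "fooPart2": [
        "d",
        "[uniqueString('a')]"
    ],
    "foo": "[concat(variables('fooPart1'), variables('fooPart2'))]"
}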

It is theoretically possible to trick a copy loop into reevaluating expressions, but that will suffer from readability issues and will likely interact poorly with bicep for loop syntax.

The limitation applies to both arrays and objects equally.

Simple literals

There are also minor challenges with compiling true, false, and null in Bicep. The best options currently are compiling them into json('true'), json('false') and json('null'), respectively.

Unary Minus

In the IL, unary minus is a component of integer literals and is not an operator. As such, we require a workaround when unary minus is applied to an expression in bicep: subtract the expression from 0 by compiling it as sub(0, <expression>).
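
For example (a sketch, assuming accountNames is an existing variable):

Bicep: -length(accountNames)

Json: "[sub(0, length(variables('accountNames')))]"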

Proposal

Complex literals

The issue with complex literals is blocking full expression support in bicep. I'm proposing that we add a parameter to the json() function to explicitly control whether expressions get evaluated or not.

json(jsonString, evaluateExpressions)

| Parameter Name | Required | Type | Description |
|---|---|---|---|
| jsonString | Yes | string | The value to convert to JSON. |
| evaluateExpressions | No | bool | Set to true to evaluate expressions in the JSON after deserializing. Set to false otherwise. |

If the new evaluateExpressions parameter is omitted, the function would behave as it does today and not evaluate expressions. This is required for backwards compatibility, given that existing templates can contain strings that begin with [, which would otherwise be incorrectly interpreted as expressions.
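
Applied to the earlier foo example, the compiler could then emit something like the following (a sketch that also assumes the true() helper proposed under Simple literals below):

[concat(json('["a", "[concat(''b'', ''c'')]"]', true()), json('["d", "[uniqueString(''a'')]"]', true()))]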

This appears to be an edge case, but it is very easy to chain expressions via variables. If an expression uses a function that forces inlining (list* or reference), the whole thing has to be inlined.

Simple literals

Generating json('false') and similar expressions works perfectly fine but can be slightly harder to read for a user learning bicep. It is trivial to address by adding the following functions to the IL (a usage sketch follows the table):

| Function Name | Return type | Description |
|---|---|---|
| true() | bool | Always returns true |
| false() | bool | Always returns false |
| null() | null | Always returns null |
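
As an illustration of how the compiled output could read (assuming a bool parameter deployNsg):

Bicep: deployNsg ? true : false

Current Json: "[if(parameters('deployNsg'), json('true'), json('false'))]"

Proposed Json: "[if(parameters('deployNsg'), true(), false())]"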

Unary minus

There's no compelling reason to reconcile the issue between bicep and IL.

IntelliSense meeting notes 6/16/2020

IntelliSense Feedback

  • For malformed parameter declarations, create parameter declaration nodes in the parse tree with error tokens inside.
  • The parser can keep track of a completion context and select a completion value provider accordingly.
  • The parser can also keep a stack of known terminators to adjust recovery based on the situation.

Language Syntax feedback

  • Bring back ; and , (useful for error recovery and to reuse JSON muscle memory).
  • Complex expressions will be difficult to type on a single line

Comments in bicep file

More of a "so it doesn't get lost" than any serious discussion topic, but worth thinking about a bit from a couple angles:

(1) Will we allow comments in bicep - single line, multi line? If so, what syntax?
(2) Do we plan to copy all comments into the json IL (JIL?), or are comments more flexible in Bicep than in JIL? (One possible shape is sketched below.)
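
One possible shape, sketched here under the assumption of C-style syntax (the samples elsewhere in this document already use // informally):

// a single-line comment
/* a multi-line comment
   spanning several lines */
variable location = 'eastus'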

BT Meeting Notes 6/16/2020

Feedback

  • Do we need limits on identifier length? Could also be a warning.
  • Model newline in the grammar with the following production: NL -> ("\n" | "\r")+
  • Treat built-in functions as symbols with reserved names - cannot declare variable or parameter with the same name as a built-in function (this would prevent adding new functions, though :()
  • Should resourceGroup() be a function or a global variable?
  • Slurp all new lines into a single token in the lexer.

Unicode

  • Support line separator \u2028
  • Support paragraph separator \u2029
  • Check out UTR #13

String interpolation syntax

Current

The current string literals are simple single-quote enclosed strings, which look like the following:

'this is my string'
'this is another string'

The following escape sequences are currently supported:

| Escape Sequence | String value (as if the string was printed out to the screen) |
|---|---|
| \\ | \ |
| \' | ' |
| \n | <LF character> |
| \r | <CR character> |
| \t | <tab> |
| \$ | $ |

The \$ escape was added to reserve the $ character for future string interpolation work.

Examples with escape sequences:

'line one\r\nline two'
'I\tlike\ttabs'

Invalid escape sequences produce a lexer error.

Proposal

We should add string interpolation syntax to make it easier for users to format strings using inline expressions.

Option 1

'My {myResource.name} is {myResource.properties.enabled ? 'enabled' : 'disabled'}.'

This option requires adding an additional escape sequence of \{ and removing the existing \$ escape sequence from the lexer and TextMate grammar. (Trivial change.)

Option 2

'My ${myResource.name} is ${myResource.properties.enabled ? 'enabled' : 'disabled'}.'

No interpolation

The above options would be equivalent to the following alternatives:

  • concat('My ', myResource.name, ' is ', myResource.properties.enabled ? 'enabled' : 'disabled', '.')
  • format('My {0} is {1}', myResource.name, myResource.properties.enabled ? 'enabled' : 'disabled')

Consideration

The choice we have to make here is whether we require the $ as the interpolation trigger character. Option 1 is less typing overall but requires any content with { characters (such as JSON embedded in strings) to be escaped with \{. These situations are rare but not impossible.

how would I reference this object value?

In my call with the Xbox team reviewing the DSL, they brought up the following code sample that we converted to the DSL and wanted to confirm the syntax for referencing a nested object:

An object like the following:

parameter environment string

variable environmentSettings = {
  dev: {
    name: 'dev'
  }
  prod: {
    name: 'prod'
  }
}

Would I reference environmentSettings.*.name this way? I wasn't sure how it would work with the parameter:

resource site 'microsoft.web/sites@2018-11-01' = {
  name: environmentSettings[environment].name
  location: location
  ...

Command line experience and local file system conventions

Basic usage

Compile a project by looking for a "main" file with a specific convention (something like main.arm):

bicep build

Compile a specific file

bicep build file1.arm

Advanced usage

Compile multiple files

bicep build -- file.arm file2.arm C:\foo\bar\x.arm

Flags for output settings
Values should follow conventions from other tools (e.g. basic, diagnostic, etc.):

bicep build -v diagnostic

Support for a config file
We can potentially support a .bicepconfig-type file to specify settings for the CLI experience (a hypothetical sketch follows).
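
A purely hypothetical sketch of what such a file could contain; none of these settings exist today and the names are placeholders:

{
    "entryPoint": "main.arm",
    "outDir": "./out",
    "verbosity": "basic"
}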

DevDiv meeting notes 7/29/2020

Operators

  • The template JSON generation code should not depend on type information. This makes it difficult for us to have a + operator that sometimes compiles to the concat() function and sometimes to the add() function, because type info may not always be available.

String literals

  • We should continue blocking invalid escapes, which allows us to add new escapes in the future if needed.
  • For uninterpolated strings we could choose a syntax that lets the user specify the delimiter character and the number of occurrences needed to begin and end such a string. Example: @##### content of the string {without any ${ interpolation #####
  • Some languages chose indentation-aware approaches to multi-line strings. Disagreements on tab length make them challenging, to say the least.

Type System

  • Type checking rules should surface errors that caused the problems and should not report errors caused by other errors. TS uses the approach of having the "error" type behave like any.
  • TSC source to look at under src\compiler path in the TS repo: types.ts, checker.ts, parser.ts
  • TS sometimes does subtype reduction in union types (a quadratic operation), but it is not always useful.
  • In TS, a type string | string can't always be reduced to string because the two string types can differ in terms of modifiers. It can also be useful to keep modifier information around for other purposes (like IntelliSense).
  • Union types can be used all the way down. For example, an integer that takes 3 values could be expressed as a 1 | 2 | 3 type.
  • The TS type system operates on sets; there is no single root or top type. (The never type is the equivalent of the empty set.)

Operator Precedence and Associativity

Operators

We will support the following operators in the bicep language:

| Symbol | Operator Name | Operand Types | Return Type | Template Equivalent | Description |
|---|---|---|---|---|---|
| ! | Unary NOT | bool | bool | [not(<value>)] | Negates the specified value |
| - | Unary minus | int | int | Direct value | Multiplies the number by -1 |
| % | Modulo | int, int | int | [mod(<value1>,<value2>)] | Calculates the remainder from integer division |
| * | Multiply | int, int | int | [mul(<value1>,<value2>)] | Multiplies two integers |
| / | Divide | int, int | int | [div(<value1>,<value2>)] | Divides two integers |
| + | Add | int, int | int | [add(<value1>,<value2>)] | Adds two integers |
| - | Subtract | int, int | int | [sub(<value1>,<value2>)] | Subtracts two integers |
| >= | Greater than or equal | string, string | bool | [greaterOrEquals(<value1>,<value2>)] | Greater than or equal |
| >= | Greater than or equal | number, number | bool | [greaterOrEquals(<value1>,<value2>)] | Greater than or equal |
| > | Greater than | string, string | bool | [greater(<value1>,<value2>)] | Greater than |
| > | Greater than | number, number | bool | [greater(<value1>,<value2>)] | Greater than |
| <= | Less than or equal | string, string | bool | [lessOrEquals(<value1>,<value2>)] | Less than or equal |
| <= | Less than or equal | number, number | bool | [lessOrEquals(<value1>,<value2>)] | Less than or equal |
| < | Less than | string, string | bool | [less(<value1>,<value2>)] | Less than |
| < | Less than | number, number | bool | [less(<value1>,<value2>)] | Less than |
| == | Equals | any, any | bool | [equals(<value1>,<value2>)] | Checks whether two values are equal |
| != | Not equal | any, any | bool | [not(equals(<value1>,<value2>))] | Checks whether two values are not equal |
| =~ | Equals (case-insensitive) | string, string | bool | [equals(toLower(<value1>),toLower(<value2>))] | Case-insensitive equality check |
| !~ | Not equal (case-insensitive) | string, string | bool | [not(equals(toLower(<value1>),toLower(<value2>)))] | Case-insensitive inequality check |
| && | Logical AND | bool *n | bool | [and(<value1>, ...)] | Returns true if all values are true |
| \|\| | Logical OR | bool *n | bool | [or(<value1>, ...)] | Returns true if any value is true |
| ? : | Conditional expression | bool, any, any | <true type> \| <false type> | [if(<condition>, <true value>, <false value>)] | Returns a value based on whether the condition is true or false |

Operator Precedence and Associativity

Operators are listed below in descending order of precedence (the higher the position, the higher the precedence). Operators listed at the same level have equal precedence. A short usage sketch follows the table.

| Symbol | Type of Operation | Associativity |
|---|---|---|
| ( ) [ ] . | Parentheses, property access and array access | Left to right |
| ! - | Unary | Right to left |
| % * / | Multiplicative | Left to right |
| + - | Additive | Left to right |
| <= < > >= | Relational | Left to right |
| == != =~ !~ | Equality | Left to right |
| && | Logical AND | Left to right |
| \|\| | Logical OR | Left to right |
| ? : | Conditional expression (ternary) | Right to left |
| = default : | Assignment | None |
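
A short sketch of these rules in practice, using the proposed operators:

variable total = 2 + 3 * 4                       // 14: multiplicative binds tighter than additive
variable inRange = total >= 10 && total < 100    // relational binds tighter than logical AND
variable label = inRange ? 'medium' : 'other'    // the ternary binds loosest of the expression operators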

Considerations

  • An operator must have a 1:1 match to a corresponding template function. Otherwise, we would have to rely on type information, which is not available in all cases. (As a result, the + operator will be restricted to numeric types until we add a type-agnostic function that can deal with strings, arrays, and integers.) (This is based on feedback from Anders.)

Single line comment eats a newline

@alex-frankel found this issue. The file below reports the following error:

{
	"resource": "Untitled-1",
	"owner": "_generated_diagnostic_collection_name_#0",
	"severity": 8,
	"message": "Expected a new line character at this location.",
	"source": "bicep",
	"startLineNumber": 2,
	"startColumn": 1,
	"endLineNumber": 2,
	"endColumn": 9
}

File contents (VS Code puts the squiggle on the variable keyword):

parameter storageName string default 'alex' // playing around with '=' for default (like powershell)
variable location = 'eastus'

resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' = {
    // expressions not yet working, so everything is hardcoded
    name: 'test'
    location: 'eastus'
    sku: {
        name: 'Standard_LRS'
    }
    kind: 'StorageV2'
}

output stgId string = 'my output' // stg.resourceId


BBurns meeting notes -- 7/13/20

  • Can we consolidate parameters/variables/outputs into a single concept/keyword with modifiers to capture the different behaviors?
    • we should try to keep the number of concepts low and progressively disclose these capabilities to the user. params/vars/outputs are all in some sense different types of variables, where params/outputs have more specific functionality
    • will need to decide if all variables are "private" by default or "public" by default. You can imagine us having import and export keywords to capture the params/outputs distinction
  • The current resource loop syntax should work fine, but overloading the resource keyword is confusing: sometimes it declares a single resource and sometimes a list of resources. We should require resource[] when declaring a loop.
  • Separate filtering from the loop syntax to allow for filtering outside of a loop. This should cover @bmoore-msft's scenario of skipping over null elements in an array.

cc @brendandburns - let us know if we missed anything

Decorator syntax

Overview

ARM copy loops support a mode (serial or parallel) as well as a batchSize parameter to control how looping is done. We seem to have agreement that we should use decorators to express a modification of the loop behavior, but we have not formally agreed on the syntax for them.

Decorators are coming up in other places as well (parameters and outputs), but I put those areas of the language aside in this proposal.

Mode

Option 1

We can set serial mode on a resource loop as follows:

@serial
resource myAccounts 'Microsoft.Storage/storageAccounts@2020-01-01' = [for account in accounts: {
  location: account.location
  name: account.name
  ...
}]

Parallel mode is the default, but it could be optionally expressed using @parallel instead of @serial in the example above.

Option 2

Instead of two different decorators, we could use a single @mode decorator, like these:

  • @mode('serial')
  • @mode('parallel')

Batch size

Similar to loop mode, batch size could be expressed with something like @batchSize(2).

Combining decorators

Option 1

We could allow one decorator per line like this:

@parallel
@batchSize(42)

Option 2

We could allow combining decorators on a single line using commas:
@parallel, @batchSize(42)
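
Putting the pieces together, a serial loop with a batch size could look like the following sketch (combining the @mode form from Option 2 with the one-decorator-per-line style):

@mode('serial')
@batchSize(2)
resource myAccounts 'Microsoft.Storage/storageAccounts@2020-01-01' = [for account in accounts: {
  location: account.location
  name: account.name
  ...
}]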

bicep module registry (BMR)

Not sure if this should be a part of #58, but wanted to capture my thoughts.

Requirements

  • I should be able to very easily reference a module from the registry
    • module myMod 'bmr://simple-vm' {}
  • I should be able (required) to reference a specific version of a module so we don't introduce breaking changes
    • we could have a dedicated package.json-type file to keep track of versions, but that feels like overkill
    • module myMod 'bmr://simple-vm@<version>' {}
  • Any user should be able to submit a GitHub PR with module updates or a new module
  • It should be easy to generate a nicely formatted docs site for the module that shows required & optional dependencies, outputs, etc.

Resource Annotations

Proposal - Control Flow via Annotations

Resource Annotations

resource declarations can be annotated with optional annotations, which start with # and are only permitted directly before a resource declaration. Generally, these annotations can be thought of as providing metadata about the resource rather than directly manipulating config. The following annotations are built into the language:

  • #type <type provider> - declares the type of the resource
  • #if <expression> - takes an expression which determines whether to conditionally deploy a resource.
  • #repeat <identifier>(, <index>) in <array/object> - takes an array or object, and provides an identifier which can be used to iterate over properties or keys respectively. Optionally provides an index identifier.

Extensibility

Custom annotations are allowed on a type-by-type basis. It is the responsibility of the type provider to indicate which annotations are allowed, to provide validation, and to instruct the compiler what effect the annotation will have on the resource.

To give some examples, in an ARM-specific implementation, we might have:

  • #parent <identifier> - declares that the resource is a child of a given other resource
  • #repeatMode / #repeatBatch - gives more control over how loops are evaluated in ARM
  • Annotations to control resource scoping
  • Annotations to default properties (e.g. #default/#inherit)

Theoretically, if we're using the annotations to build a deployment graph for an engine other than the ARM deployment engine, we may want to prevent some combinations from existing and block them at compile time - for example, trying to use a 'serial' copy mode to generate something which isn't an ARM template.

Example

use 'arm/network/2019-10-01' as network

input bool deployNsg

// simple case - just reference the type

#type network:networkInterface
resource myNic: {
    name: 'myNic'
    properties: {
        ...
    }
}

// conditionally deploy an nsg

#if deployNsg
#type network:networkSecurityGroups
resource myNsg: {
    name: 'myNsg'
    properties: {
        nic: myNic
        ...
    }
}

// larger but probably atypical example with lots of annotations

#if myNsg
#repeat i in range(10)
#repeatMode 'serial' //arm-specific
#repeatBatch 2 //arm-specific
#type network:networkSecurityGroups.securityRules
#parent myNsg //arm-specific
resource myRule: {
    name: 'myRule${i}'
    properties: {
        ...
    }
}

Resource syntax modification

Proposal

Current syntax:

resource <provider> '<type>' <identifier>: <body>

Proposed syntax:

resource ( <provider> , '<type>' ) <identifier>: <body>

Example

Current syntax:

resource azrm 'Microsoft.Network/virtualNetworks/subnets@2019-01-01' mySubnet: {
  properties: ...
}

Proposed syntax:

resource(azrm, 'Microsoft.Network/virtualNetworks/subnets@2019-01-01') mySubnet: {
  properties: ...
}

This helps break up the spacing between identifiers and types.
