
reference's Introduction

reference

This repository contains reference implementations / tooling related to OpenConfig based network management.

reference's People

Contributors

aashaikh, bormanp, chenxinming05, clintbauer, dplore, gcsl, hellt, jcostaroberts, jxx-gg, kidiyoor, leongwang, marcushines, mhines01, mike-albano, robshakir, tsuna, wenovus


reference's Issues

SubscribeResponse cannot return an error

It's convenient that an Error message was added for UpdateResponse, but a bummer that SubscribeResponse didn't also get it. One common case that has come up several times on my end is subscribing to a non-existent path. There is no way to convey to the client that the path didn't exist.

It's also not clear what the implementation is supposed to do in this case. In the EOS implementation, we take the subscription into account anyway, so that if the path starts to exist in the future, notifications will be emitted as soon as it's created. This can be handy in some scenarios.

But I've also seen many first-time users get confused because they made a mistake in the path they subscribe to (e.g. misspell an interface name) and they're confused that no notifications are streamed back to them.

What is the motivation for gNMI?

I want to understand more about the motivation or rationale behind gNMI. I can do all types of operations (sync, async, streaming) using a gRPC server/client. Can you please point me to the relevant material?

Thanks,

Medha

The Python generated code for `openconfig.proto` cannot find the code for `any.proto`

As part of 8edb7a9 the import of any.proto was changed as follows:

@@ -15,7 +15,7 @@
 //
 syntax = "proto3";

-import "google/protobuf/any.proto";
+import "github.com/golang/protobuf/ptypes/any/any.proto";

 // Package openconfig defines the gRPC service for getting and setting the
 // configuration and state data of a network target based on OpenConfig models.

As a result, the relative path no longer matches the one under Python's site directory, resulting in an ImportError:

Traceback (most recent call last):
  File "./openconfig_client.py", line 16, in <module>
    import pyopenconfig.pb2
  File "/Users/gvalente/src/pyopenconfig/pyopenconfig/pb2.py", line 17, in <module>
    from github.com.golang.protobuf.ptypes.any import any_pb2 as github_dot_com_dot_golang_dot_protobuf_dot_ptypes_dot_any_dot_any__pb2
ImportError: No module named github.com.golang.protobuf.ptypes.any

I could generate the code for any.proto and start distributing it under $sitelib/github/com/golang/protobuf/ptypes, but that doesn't seem very pythonic. What's the recommended way to handle this in Python?

I generated the code with

protoc -I .:$GOPATH/src --python_out=. --grpc_out=. --plugin=protoc-gen-grpc=`which grpc_python_plugin` openconfig.proto

resulting in:

gvalente:openconfig gvalente$ grep "from github.com" openconfig_pb2.py                                                                                                                                                                        
from github.com.golang.protobuf.ptypes.any import any_pb2 as github_dot_com_dot_golang_dot_protobuf_dot_ptypes_dot_any_dot_any__pb2

Doing anything else doesn't seem to work, for example copying any.proto to .:

gvalente:openconfig gvalente$ ls -l any.proto 
-rw-r--r--  1 gvalente  staff  5281 Sep 20 17:59 any.proto
gvalente:openconfig gvalente$ protoc -I . --python_out=. --grpc_out=. --plugin=protoc-gen-grpc=`which grpc_python_plugin` openconfig.proto                                                                                                    
github.com/golang/protobuf/ptypes/any/any.proto: File not found.
openconfig.proto: Import "github.com/golang/protobuf/ptypes/any/any.proto" was not found or had errors.
openconfig.proto:376:3: "google.protobuf.Any" is not defined.

Add rpc for retrieving service version

We need an RPC (e.g., GetServiceVersion) that returns the version of the service to the caller. To be consistent with the versioning scheme for models, I would suggest sticking with a semantic version number (major.minor.bugfix). We should also add / maintain a service version in the .proto file -- backends would be expected to use this number when implementing this call.

An alternative would be to encode the version in the file name, e.g., openconfig_v0_1_1, but it seems a little unwieldy to keep all of these versions checked in under different names.

This request is based on asks from vendors using the proto for a way to manage versioning of the service at the "application" layer.

openconfig.pb.go is stale

openconfig.proto was changed in a874111 but the corresponding .pb.go file wasn't regenerated. We vendor this repo, so we are reluctant to override this file manually; doing so breaks our automated vendoring/update process. Please regenerate the .pb.go file, and please ensure that future changes keep both in sync.

gnoi.certificate: clarify service use of non-TLS server sockets

Looking at the gnoi.certificate service defined in gnoi_cert.proto, there is no mention of the gRPC server having to use a TLS socket for the gnoi.certificate service, but that may have been an implicit assumption.

I can see two primary scenarios for server implementors:

  • Servers and clients must use TLS sockets for all RPC methods.
    -- In this case, the target must have already had its initial certificate configured (e.g., via some external automation), and its certificate_id and related endpoints are registered in the GNOI certificate server, ready for further RPCs.
  • Servers serve Rotate via TLS socket, but could serve Install (perhaps only the first time?) and/or some other methods via a non-TLS socket.

Could gnoi_cert.proto be updated with some words describing the expectations clients may make of servers?

Using a JSON-optimized data structure versus the existing YANG XML-optimized data structures in REST

Hi Anees

Standards are essential to the success of networking, and unless there is major improvement by deviating from the standard, it is best practice to follow standards.

Have there been thoughts about using a JSON-optimized data structure instead of the XML-optimized YANG data structures? I think NETCONF and YANG are bound closely together, and NETCONF requires XML. If we move to REST APIs, which offer JSON support, a more efficient data structure could be applied:

In the examples below:
the JSON-optimized structure has 10 lines and 6 levels of hierarchy;
the YANG-compliant equivalent has 18 lines and 9 levels of hierarchy.
That is an improvement of 30-40%, and the JSON model is also much easier to read.

e.g. this JSON data structure (in YAML representation)

bgp:
  neighbors:
    33.33.33.33:
      description: BGP-neighbor-description-3
      peer-as: 3333
      peer-type: external
      afi-safis:
        ipv4-unicast:
          enabled: true
          max-prefixes: 900001
          restart-timer: 61

has the same information content as the YANG/OpenConfig-compliant data structure (also in YAML representation):

bgp:
  neighbors:
  - neighbor:
      neighbor-address: 33.33.33.33
      config:
        description: BGP-neighbor-description-3
        peer-as: 3333
        peer-type: external
      afi-safis:
      - afi-safi:
          afi-safi-name: ipv4-unicast
          config:
            enabled: true
          ipv4-unicast:
            prefix-limit:
              config:
                max-prefixes: 900001
                restart-timer: 61

Basically, the JSON model would not use YANG lists, just containers, leafs, and leaf-lists.
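To make the proposed transformation concrete, here is a rough sketch in Go (flattenList and its key-leaf handling are inventions for this example, not OpenConfig tooling) that converts a keyed YANG-style list into the map-style structure described above:

```go
package main

import "fmt"

// flattenList converts a YANG-style keyed list (a slice of entries, each
// identified by the value of keyLeaf) into a plain map indexed by that key,
// dropping the key leaf itself from each entry.
func flattenList(list []map[string]interface{}, keyLeaf string) map[string]interface{} {
	out := make(map[string]interface{})
	for _, entry := range list {
		key := fmt.Sprint(entry[keyLeaf])
		flat := make(map[string]interface{})
		for k, v := range entry {
			if k != keyLeaf {
				flat[k] = v
			}
		}
		out[key] = flat
	}
	return out
}

func main() {
	neighbors := []map[string]interface{}{
		{"neighbor-address": "33.33.33.33", "peer-as": 3333},
	}
	fmt.Println(flattenList(neighbors, "neighbor-address"))
	// map[33.33.33.33:map[peer-as:3333]]
}
```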

feedback welcome ...

best regards Stefan

Clarifications for "Schema path encoding conventions for gNMI"

Several interpretations are possible for handling some "edge cases" with wildcards.
A suggested (opinionated^^) wording for these cases is proposed below.
The idea is that the server implementation might "canonicalize" paths to limit the number of processing cases.

  1. Consecutive multi-level wildcards:
    "Wildcards before or after a multi-level wildcard are ignored."

For example:

/interfaces/.../.../counters
/interfaces/*/.../counters
/interfaces/.../*/counters
are handled as:
/interfaces/.../counters
  2. Wildcard appearing at the end of a Path in a Get RPC:
    /interfaces/interface[name=Ethernet1/2/3]/state/counters/*

Either (allows path canonicalization preprocessing):
Returns the counters JSON Object in a single Update
"When the last Path Element of a Path is a wildcard (name '*' or '...'), it is ignored."

Or:
Returns the leaf attributes and children of counters in separate Updates.
"When the last Path Element of a Path is a wildcard (name '*' or '...'), its leafs and children are returned in separate Updates."
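Under the suggested wording, a server-side canonicalization pass could be sketched like this (an illustration of the proposed rule, not an implementation from any project):

```go
package main

import "fmt"

// canonicalize collapses redundant wildcards around a multi-level
// wildcard ("..."): a "*" or "..." following a "..." is dropped, and a
// "*" immediately before a "..." is subsumed by it.
func canonicalize(elems []string) []string {
	out := []string{}
	for _, e := range elems {
		if len(out) > 0 {
			last := out[len(out)-1]
			if last == "..." && (e == "*" || e == "...") {
				continue // wildcard after "..." adds nothing
			}
			if last == "*" && e == "..." {
				out[len(out)-1] = "..." // "*" before "..." is subsumed
				continue
			}
		}
		out = append(out, e)
	}
	return out
}

func main() {
	fmt.Println(canonicalize([]string{"interfaces", "*", "...", "counters"}))
	// [interfaces ... counters]
}
```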

Structured data and use of wildcards in Get results in ambiguous results

Sections 2.3.1 and 2.4.3 suggest that the data returned should begin at the requested data element, omitting all parent nodes (e.g., the example json_val contents for path "/a/b[name=b1]/c" under 2.3.1). However, if one of the parent nodes is wildcarded, such as the case in the "gNMI Path Conventions" document where you want the oper-status of all interfaces, this would ambiguously return a series of "UP"s and "DOWN"s without indicating which state corresponds to which interface (since the "name" element is considered a parent of the requested path, it should be omitted per 2.3.1 and 2.4.3). Is my interpretation of these sections correct?

Migrate gnoi to separate repository

Similar to gNMI, we'd like to migrate the gNOI service definitions to a new repo (openconfig/gnoi) to enable better structuring of the code.

gNMI Specification: JSON encoding of ±infinity and NaN

I have a problem with this:

The JSON type indicates that the value included within the bytes field of the node value message is encoded as a JSON string. This format utilises the specification in RFC7159.

RFC7159 Section 6 says:

Numeric values that cannot be represented in the grammar below (such as Infinity and NaN) are not permitted.

This detail has been a recurring source of pain for us. We worked around this by doing this:

func jsonFloat(v float64) interface{} {
        switch {
        case math.IsInf(v, 1):
                return "+Infinity"
        case math.IsInf(v, -1): // -1 selects negative infinity (0 would match either sign)
                return "-Infinity"
        case math.IsNaN(v):
                return "NaN"
        default:
                return v
        }
}

Which sucks, but given the RFC I'm not sure how we can do better. What I would like to see, however, is an agreed-upon solution to this problem in the gNMI spec. If the above (or a variant thereof) sounds reasonable, then I would vote for that, but I'm also open to hearing alternatives.

Need a gRPC interface for dial-out streaming

gNMI currently only supports dial-in (i.e. a client coming from the outside and connecting to the network element where the gRPC server implementing the gNMI spec listens), but we see a need for a dial-out streaming model as well.

Advantages of a dial-out approach:

  • No need to expose a service to the outside world (reduces attack surface, even if that can already be mitigated by using a management VRF and/or control-plane ACLs).
  • No need to have a system to manage the "shared responsibility" of collecting telemetry from each and every network element.
    • Instead of worrying about which collector is responsible for collecting data from switch X and what to do when that collector dies, the switch is responsible for streaming its telemetry out to a preconfigured list of targets.
    • The pre-configured list could be a static list of ip/ports, a DNS name that resolves to multiple IP addresses (and is periodically re-resolved), or better, some name to lookup in a service discovery system backed by something like etcd/Zookeeper. The network element just needs to connect to one at random, doesn't matter which.
  • It's easier to have a stateless collector backend (just accept connections, optionally authenticate devices, and store the incoming update stream in a database or Kafka-like bus or whatever) as opposed to maintaining state regarding what targets to collect from and what paths to subscribe to on each one of them.

This issue is about settling on a standard gRPC interface for gNMI dial-out streaming.

edit: issue #42! 🎉

gNMI Server or Client?

The current way of doing gNMI is that there is a gNMI server on the OpenFlow switch. Can it be a gNMI client instead of a server? The reason is that for remotely managing the switch, running a client on the switch is more efficient, as we don't have to punch holes in the firewall or add routing policies. Being a server, it cannot initiate connections.

My only use case would be for OpenFlow-based switches (wired and wifi), which is my concern. The owner of the switch/access point would configure the IP address and the "Management/Control" provider operator by installing the startup keys. Then, on boot, the software would do the necessary things. Owner intervention would be required only if the keys expired and were not rolled over.

At the more fundamental level, gNMI would be just a protocol for facilitating a secure connection. Some have used NetConf underneath to do config updates and some native operations.

How are folks really implementing and deploying this at scale today? Any thoughts or pointers to items in the spec that I have missed are greatly appreciated.

The above questions are a result of my discussion with @anarkiwi

Thanks

GNMI Mutual Authentication

The spec mentions that both the client and target mutually authenticate each other. Additionally, it seems to me that TLS client authentication is desirable (or, is it required?).

Would it be okay to skip TLS client authentication? (and just perform username/password based client authentication)

Clarification request: initial updates for POLL subscription

It isn't, IMHO, clear whether a POLL subscription requires the target system to send an initial update in an "unsolicited" way.

The specification version 0.4.0, section "3.5.1.5.3 POLL Subscriptions", does not mention any initial update from the target; instead it states:
"To retrieve data from the target, a client sends a SubscribeRequest message to the target, containing a poll field, specified to be an empty Poll message."

Based on that, one can understand that for a POLL subscription the target should not send any initial update by itself. Updates are triggered strictly by the client via a "Poll" message (of course the first update, triggered by the client, would contain a complete data set, and further updates just the changed objects). Such behavior looks reasonable to me because a POLL subscription implies... hmmm... a polling mode of operation.

The specification version 0.5.0 didn't change section 3.5.1.5.3.
However, it added the SubscribeRequest::updates_only field, described as:

3.5.1.2 The SubscriptionList Message
"updates_only - a boolean ...
For POLL and ONCE subscriptions, the target should send only the sync_response message, before proceeding to process poll requests (in the case of POLL) or closing the RPC (in the case of ONCE)."

3.5.2.3 Sending Telemetry Updates
"In the case where the updates_only field in the SubscribeRequest message has been set, a sync_response is sent as the first message on the stream, followed by any updates representing subsequent changes to current state. For a POLL or ONCE mode, this means that only a sync_response will be sent."

So the specification for "updates_only" implies that an initial update must be sent by the target for a POLL subscription (either updates + sync_response, or a naked sync_response if updates_only is set).

What is the correct procedure - to send or not to send?
Would be great to get an authoritative clarification.
Please point me to an appropriate document if I missed something.

gNMI Specification Encoding Enum Inconsistencies

The table in section 2.2.3 of the gnmi-specification does not match gnmi.proto. Specifically, ASCII and JSON_IETF have reversed values.

Also, the link to the ASCII section is broken.

Also, section 2.3.4 uses TEXT instead of ASCII.
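For reference, a sketch of the Encoding enum values as they appear in gnmi.proto at the time of writing (worth double-checking against the current proto before relying on them):

```go
package main

import "fmt"

// Encoding enum values as defined in gnmi.proto; note ASCII = 3 and
// JSON_IETF = 4, which the table in spec section 2.2.3 reverses.
const (
	JSON      = 0
	BYTES     = 1
	PROTO     = 2
	ASCII     = 3
	JSON_IETF = 4
)

func main() {
	fmt.Println(ASCII, JSON_IETF)
	// 3 4
}
```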

Proposal: Adopting Structured Paths

gNMI Proposal: Adopting Structured Paths

Contributors: {hines,csl,robjs,aashaikh}@google.com
Date: May 2017
Status: New Proposal

Summary

This contribution proposes to modify the Path message of gNMI to contain structured elements rather than simple string elements.

Problem Statement/Motivation

The motivation for this proposal is to simplify the parsing of path elements which contain keys. Particularly, the current format:

  • Requires an implementation to walk through each string element character-by-character to determine whether it contains key elements, and extract them. In addition, careful consideration must be made as to whether there are escaped characters such that keys are extracted successfully, and the name of key elements are correctly determined.
  • Loses type information for a schema-unaware entity parsing a path. For example, an implementation that parses a path such as /interfaces/interface[name=eth0]/subinterfaces/subinterface[index=32] without access to the schema cannot know the type of 32. Hence, implementations must either assume all keys are strings, or have schema access in order to determine the type of the corresponding element.

This proposal aims to simplify the complexity of parsing paths within gNMI, and avoid type loss for schema unaware clients. Additionally, it seeks to provide a path format which can be converted to a number of human-readable formats, such that client systems can flexibly structure their path format in the most suitable format for their users.

Proposal

We propose that the Path message in gNMI be modified such that the element field is deprecated. It would be replaced with a repeated message (tentatively named PathElement), which carries structured data for each element of the path. Each PathElement has a name, corresponding to the node's name in the tree, and an optional map<string, TypedValue> field which is used to express keys.

A straw-man proposal for this change is below:

message Path {
  repeated string element = 1 [deprecated=true];
  string origin = 2;
  repeated PathElement elem = 3;
}

message PathElement {
  string name = 1;
  map<string, TypedValue> key = 2;
}
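One benefit the proposal cites is that structured elements can be rendered into human-readable forms without any string parsing. A minimal sketch (PathElem here mirrors the proposed PathElement message but uses plain strings in place of TypedValue to stay short):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// PathElem mirrors the proposed PathElement message: a node name plus an
// optional map of key leaf names to (stringified) key values.
type PathElem struct {
	Name string
	Key  map[string]string
}

// pathString renders structured elements into the human-readable
// /a/b[k=v] form; no character-by-character parsing is involved.
func pathString(elems []PathElem) string {
	var b strings.Builder
	for _, e := range elems {
		b.WriteString("/" + e.Name)
		keys := make([]string, 0, len(e.Key))
		for k := range e.Key {
			keys = append(keys, k)
		}
		sort.Strings(keys) // deterministic key order
		for _, k := range keys {
			fmt.Fprintf(&b, "[%s=%s]", k, e.Key[k])
		}
	}
	return b.String()
}

func main() {
	p := []PathElem{
		{Name: "interfaces"},
		{Name: "interface", Key: map[string]string{"name": "Ethernet1/2/3"}},
		{Name: "state"},
		{Name: "counters"},
	}
	fmt.Println(pathString(p))
	// /interfaces/interface[name=Ethernet1/2/3]/state/counters
}
```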

Examples

Taking the examples from the gNMI Path
Conventions
document, we can show their alternate encoding.

Example 1: /interfaces/interface[name=Ethernet1/2/3]/state/counters:

v0.3.1 encoding:

<
  element: "interfaces"
  element: "interface[name=Ethernet1/2/3]"
  element: "state"
  element: "counters"
>

Proposed encoding:

<
  elem: <
    name: "interfaces"
  >
  elem: <
    name: "interface"
    key: <
      key: "name"
      value: <
        string_val: "Ethernet1/2/3"
      >
    >
  >
  elem: <
    name: "state"
  >
  elem: <
    name: "counters"
  >
>

Example 2: /network-instances/network-instance/tables/table[protocol=BGP][address-family=IPV4]

v0.3.1 encoding:

<
  element: "network-instances"
  element: "network-instance"
  element: "tables"
  element: "table[protocol=BGP][address-family=IPV4]"
>

Proposed Encoding:

<
  elem: <
    name: "network-instances"
  >
  elem: <
    name: "network-instance"
  >
  elem: <
    name: "tables"
  >
  elem: <
    name: "table"
    key: <
      key: "protocol"
      value: <
        string_val: "BGP"
      >
    >
    key: <
      key: "address-family"
      value: <
        string_val: "IPV4"
      >
    >
  >
>

Example 3: /interfaces/interface[name=eth0]/subinterfaces/subinterface[index=42]/ipv4/neighbors/neighbor[ip=192.0.2.1]

v0.3.1 encoding:

<
  element: "interfaces"
  element: "interface[name=eth0]"
  element: "subinterfaces"
  element: "subinterface[index=42]"
  element: "ipv4"
  element: "neighbors"
  element: "neighbor[ip=192.0.2.1]"
>

Proposed Encoding:

<
  elem: <
    name: "interfaces"
  >
  elem: <
    name: "interface"
    key: <
      key: "name"
      value: <
        string_val: "eth0"
      >
    >
  >
  elem: <
    name: "subinterfaces"
  >
  elem: <
    name: "subinterface"
    key: <
      key: "index"
      value: <
        uint_val: 42
      >
    >
  >
  elem: <
    name: "ipv4"
  >
  elem: <
    name: "neighbors"
  >
  elem: <
    name: "neighbor"
    key: <
      key: "ip"
      value: <
        string_val: "192.0.2.1"
      >
    >
  >
>

Open Questions

  • We prefer the use of a map over a repeated message for keys, due to the terseness and the advantage of enforcing unique keys within language bindings. We are interested in other implementors' feedback as to their preferred approach.
  • The TypedValue message is re-used within the proposal. In theory this allows keys to include invalid types of data (e.g., proto.Any or ascii_val fields). This is similar to ScalarArray, whereby invalid data can be included even though the specification says it is not allowable.

Encoding of leaf values

There seems to be a discrepancy between the specification text in "2.2.3 Node Values" and the examples in "2.3.1" of the gNMI specification:

In the example tree, we have: /a/b[name=1]/c/e (uint32)

If we follow the text in 2.2.3, the server should return a TypedValue with uint_val: 123

But the example text in 2.3.1 shows:
json_val: 10042 // decoded byte array

Error reporting in GetResponse and SetResponse

In section 3.3.2 for GetResponse the spec dictates:

The target MUST generate a Notification message for each path specified in the client's GetRequest

Similarly in section 3.4.2:

response - containing a list of responses, one per operation specified within the SetRequest message. Each response consists of an UpdateResult message

However, all uses of the "Error" message have been deprecated, so what is a "Notification" or "UpdateResult" message supposed to be populated with in the event of an error? Implementing the spec as written suggests we would simply construct messages of the respective type using the default constructor and attach empty messages, but if that's the case, why include them at all?

Prior to this deprecation (version < 0.4.0) of the "Error" message, the SetRequest had a great deal of flexibility, with the ability to report exactly which operation within the request caused the error. I was hoping that same level of detail would eventually be available for Get RPCs, but it seems the opposite stance was taken in this newest version of the spec. Is there a reason for reducing the level of error reporting available?

gNMI: how to indicate no data to return in GetResponse

Hi guys,

What should a GetResponse look like if there is no data to return ? I've seen some discussion of a similar scenario for Subscriptions in other git issues but not sure about response to a specific GetRequest.

A few examples of scenarios where there is no data to return (assume a request with just a single path):

  1. A Get that specifies a path to a leaf that is deleted
  2. A Get that specifies a path to a list member that doesn't currently exist (but could in the future/past)

If we're using JSON encoding, and there is no data to return:
a) return value.value="{}" and encoding = JSON ?
b) return the null string value.value="" and encoding = ?
c) return a response with the absence of the value.value field ?
d) return a response with the absence of the Value message ?
e) return a response with the absence of the Update message ?
f) return a response with the absence of the Notification message ?

Thx,
Jason

Clarifications of Aggregation

While trying to understand how to use aggregation, some points remained obscure.
The text below is a proposal for clarification (but is my understanding correct?).

The same node cannot belong to two different Aggregates. For example, in the sample tree of section 2.4.3, it would be impossible to declare both ChildA and ChildA3 as Aggregates.

An Aggregate is defined as a set of siblings. Nodes from different Containers cannot be Aggregated together (this would require an extra specification of the form of returned objects).
For example, with the sample tree of section 2.4.3, one may specify that /childA/childA3 is an Aggregate, but not that leafA1 and leafA31 together form an Aggregate.
All the elements of an Aggregate are always returned together. This implies that when a child element changes, the whole Aggregate is sent to the client. For example, with the same sample tree, for a client with a STREAM/ON_CHANGE subscription covering it, a value change of leafA31 would result in a SubscribeResponse with a Notification containing childA3, not just leafA31.

Clarification request: STREAM SAMPLE subscription sample_interval and heartbeat_interval

IMHO, it isn't clear what the expected target behavior would be when heartbeat_interval is less than sample_interval.
According to "3.5.1.5.2 STREAM Subscriptions", there are no restrictions on the heartbeat_interval value.

So what should the target system send if, for example:

sample_interval == 10
suppress_redundant == true
heartbeat_interval == 3

Possible actions could be, say:
(1) Send updates according to heartbeat_interval every 3 sec., effectively ignoring suppress_redundant.
or
(2) Silently force heartbeat_interval = sample_interval, send updates every 10 sec, effectively ignoring suppress_redundant.
or
(3) Reject SubscribeRequest with an error.
or
(4) Something else?

What would be a reasonable option in that case?
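One way to picture option (1), sketched with invented names (none of this is from the spec): at each sample tick, send if the value changed or the heartbeat has elapsed, otherwise suppress.

```go
package main

import "fmt"

// shouldSend sketches option (1): at a sample tick, an update is sent
// when suppress_redundant is off, when the value changed, or when
// heartbeatInterval (if nonzero) has elapsed since the last transmitted
// update. Intervals are in arbitrary units.
func shouldSend(changed, suppressRedundant bool, sinceLastSend, heartbeatInterval int64) bool {
	if !suppressRedundant || changed {
		return true
	}
	return heartbeatInterval > 0 && sinceLastSend >= heartbeatInterval
}

func main() {
	// sample_interval = 10, heartbeat_interval = 3: an unchanged value is
	// still sent once the heartbeat elapses, despite suppress_redundant.
	fmt.Println(shouldSend(false, true, 3, 3))
	// true
}
```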

gNMI Specification: using Unimplemented instead of InvalidArgument for unsupported encodings

This is a bit of a detail but since the spec specifies explicitly what to do in this case... From Section 3.3.1 (The GetRequest Message):

If the target does not support the specified encoding, the target MUST populate the error field of the GetResponse message, specifying an error of InvalidArgument.

InvalidArgument is for malformed input, but here if a client requests JSON_IETF encoding and the target doesn't support it, the request isn't malformed, it's just asking for something that is not supported by the target's implementation.

From https://godoc.org/google.golang.org/grpc/codes#Code

    // InvalidArgument indicates client specified an invalid argument.
    // Note that this differs from FailedPrecondition. It indicates arguments
    // that are problematic regardless of the state of the system
    // (e.g., a malformed file name).
    InvalidArgument Code = 3

vs

    // Unimplemented indicates operation is not implemented or not
    // supported/enabled in this service.
    Unimplemented Code = 12

There may be other instances of this issue in the spec.
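To make the distinction concrete, here is a small sketch (the supported set and the function name are invented for illustration, not from any implementation): a defined-but-unsupported encoding maps to Unimplemented, while a value that is not a defined encoding at all is genuinely malformed.

```go
package main

import "fmt"

// Canonical gRPC status code values, copied from
// google.golang.org/grpc/codes so the sketch has no external dependency.
const (
	codeOK              = 0
	codeInvalidArgument = 3
	codeUnimplemented   = 12
)

// encodingErrorCode returns the status code a target could use for a
// requested encoding: OK when supported, Unimplemented when the encoding
// is defined by gNMI but not implemented by this target, and
// InvalidArgument when the value is not a defined encoding at all.
func encodingErrorCode(enc string) int {
	defined := map[string]bool{"JSON": true, "BYTES": true, "PROTO": true, "ASCII": true, "JSON_IETF": true}
	supported := map[string]bool{"JSON": true} // this target's subset (an assumption)
	switch {
	case supported[enc]:
		return codeOK
	case defined[enc]:
		return codeUnimplemented // well-formed request, unsupported feature
	default:
		return codeInvalidArgument // malformed: not a gNMI encoding at all
	}
}

func main() {
	fmt.Println(encodingErrorCode("JSON_IETF"))
	// 12
}
```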

Details on Closing GNMI/GRPC Session

Hi,

The GNMI spec is clear and explicitly calls out instances when the GRPC session (channel) needs to be closed. For example:

  • Target to close the session after transmission of GetResponse as part of Get RPC (here)

  • Target to close session after sending the SubscribeResponse as part of the Subscribe RPC if the subscription mode is ONCE (here)

Are there other (similar) instances when the target needs to close/terminate the session? For example, after the completion of the Set RPC, should the session be kept open? (I assume so, since the spec doesn't talk about it.) It would be good if this could be explicitly added.

Thanks!

gNMI subscription updates for leaf list and unkeyed list elements

From reading issues #66 and #76, I understand that leaf-list and unkeyed-list elements are unaddressable in gNMI. In particular, the response to issue #66 states that

... There is no way to
delete an individual element and a delete or addition to the list would be
achieved by replacing the value of the leaf-list with a new leaf-list. ...

I am wondering if the word "replacing" is meant to be taken literally, as in gNMI spec section 3.5.2.3, which states

To replace the contents of an entire node within the tree, the target populates the delete field with the path of the node being removed, along with the new contents within the update field.

My current assumption is that for both leaf lists and unkeyed lists, if there is an addition or deletion to the list, we will send out a gNMI replace subscription notification. Please let me know if that is correct.

errors in SetResponse (proto vs. spec)

Looking at the latest gnmi.proto, I see that Error has been deprecated, as has all its uses, including within SetResponse and UpdateResult.

However Section 3.4 still says that "In the case of a failure of an operation, the status of the UpdateResult message MUST be populated with error information as per the specification in Section 3.4.7.".

Similarly, 3.4.7 still says that we "MUST set the message field of the UpdateResult corresponding to the failed operation to an Error message indicating failure. In the case that the processed operation is not the only operation within the SetRequest the target MUST set the message field of the UpdateResult messages for all other operations, setting the code field to Aborted (10).".

I would imagine the proto is correct and the spec just hasn't been updated to agree yet - can this be clarified?

Regarding gnoi.cert CSRParams message

The CSRParams message in cert.proto makes no requirements for any CSR parameters to be set, either formally or informally.

Tooling such as openssl req ... doesn't appear to require any field to be set either, as long as the subject starts with a slash.

However, one of the purposes of creating an x509v3 certificate is to assign an intended usage to the key.

While it appears to be valid to provide empty CSR parameters, would this be considered an operator mistake by some? Or is it common practice?

Path to leaf-list element

Has a gNMI path to leaf-list elements been considered? I can see it's not too useful to do a Get on a leaf-list element, but what would the path look like to Delete a leaf-list element? Or, if there is a subscription to a leaf-list and an element is added, what path would be used in the SubscribeResponse?

Clarification request: 3.5.2.3 Sending Telemetry Updates, "delete" field.

Would be great to get clarification regarding update.delete field in SubscribeResponse.

The 3.5.2.3 chapter says just:
"Where a node within the subscribed paths has been removed, the delete field of the Notification message MUST have the path of the node that has been removed appended to it."

It isn't clear whether the "delete" field should contain all leaf nodes under the deleted node or just the deleted node itself.

Say a client is subscribed to monitor ietf-interfaces objects, and the target has already sent updates for the leaf nodes corresponding to eth0:

/interfaces/interface[name="eth0"]/name
/interfaces/interface[name="eth0"]/description
/interfaces-state/interface[name="eth0"]/name
/interfaces-state/interface[name="eth0"]/if-index
/interfaces-state/interface[name="eth0"]/statistics/rx-octets
...skipped other leafs...

Now eth0 gets deleted.
What should be sent by the target in the "delete" field?
(1) The whole set of leaf nodes that were previously reported to the client, as listed above.
or
(2) Just the minimal set, namely in this case two nodes:
/interfaces/interface[name="eth0"]
/interfaces-state/interface[name="eth0"]
or
(3) Are both variants acceptable?

Please point me to an existing document if I missed something.

Ilja.
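If reading (2) is the intended one, a client can derive the implied leaf deletions itself from the subtree root in the delete field. A minimal Go sketch of that inference, treating paths as plain strings purely for illustration (real gNMI paths are structured Path messages, not strings):

```go
package main

import (
	"fmt"
	"strings"
)

// covered reports whether leaf falls within the deleted subtree rooted
// at deleteRoot. This models reading (2): the target sends only the
// subtree root, and the client infers the removal of all leaves under it.
func covered(deleteRoot, leaf string) bool {
	return leaf == deleteRoot || strings.HasPrefix(leaf, deleteRoot+"/")
}

func main() {
	// The minimal delete set the target would send under reading (2).
	deletes := []string{
		`/interfaces/interface[name=eth0]`,
		`/interfaces-state/interface[name=eth0]`,
	}
	// Leaves the client previously received updates for.
	leaves := []string{
		`/interfaces/interface[name=eth0]/name`,
		`/interfaces-state/interface[name=eth0]/statistics/rx-octets`,
	}
	for _, leaf := range leaves {
		for _, d := range deletes {
			if covered(d, leaf) {
				fmt.Println(leaf, "implied deleted by", d)
			}
		}
	}
}
```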

Migrate gnmi.proto to gnmi repo

Since we've now created the gnmi repository, we are planning to move the gnmi.proto service definition and associated files to that repo. The documentation (protocol spec, etc.) will stay in the reference repo for now.

Any automation that pulls gnmi.proto from this repo will need to be updated to point to the gnmi repo.

timestamp in SetResponse (proto vs. spec)

Looking at the latest gnmi.proto, the use of timestamp within an UpdateResult has been deprecated, in favor of a single timestamp within the SetResponse.

However Section 3.4.2 still discusses using the timestamp within the UpdateResult, and makes no mention of the higher-level timestamp.

I presume that the proto is correct and the spec just hasn't been updated to agree yet - can this be clarified?

Add explanation of error handling for requests

It's not clear to some vendors that we expect errors to be returned using the canonical error codes provided by gRPC (e.g., here). E.g., they were expecting to have an error message in the service definition.

I suggest adding a brief explanation, for example with the Subscribe and Get calls.

multiple Notifications in SubscribeResponse (proto vs spec)

Section 3.5.2.1 "Bundling of Telemetry Updates" says the following:
"Since multiple Notification messages can be included in the update field of a SubscribeResponse message..."

But the gnmi.proto has this (Notification update is not a repeated field):
message SubscribeResponse {
oneof response {
Notification update = 1; // Changed or sampled value for a path.
bool sync_response = 3; // Indicate target has sent all values once.
Error error = 4; // Report an error that occurred in the Subscription.
}
}

https://developers.google.com/protocol-buffers/docs/proto3 says the following about oneof:
"You can add fields of any type, but cannot use repeated fields."

Is the spec incorrect or is the intention to support multiple Notifications in a single SubscribeResponse message ? (or am I misunderstanding something ?).

Note that a single Notification can have multiple Updates & Deletes (but there is only a single timestamp for the Notification as a whole).

Regards,
Jason

Indexing unkeyed lists

We have a similar question to #66, but about unkeyed lists. How are paths to elements in an unkeyed list supposed to be represented in gNMI?

One convenient answer would be to outlaw unkeyed lists in OpenConfig models. My colleague did some research with grep and found just one unkeyed list in all the models. I opened an issue about that here openconfig/public#94.

Proposal: Adding Optional Extensions to gNMI

Adding Optional Extensions to gNMI

Contributors: Rob Shakir ([email protected]), Carl Lebsack ([email protected]), Nick Ethier ([email protected]), Anees Shaikh ([email protected])
Date: November 2017
Status: New Proposal

Summary

This contribution proposes to modify the gNMI protocol to allow extensions to be added without requiring the core specification to be modified. The intention of this is to allow implementors to add additional functionality to existing gNMI RPCs.

Problem Statement/Motivation

In particular use cases, there are requirements for data that is not currently specified within the existing RPC payloads to be carried. Some examples of this include:

  • Implementations of gNMI which act as a proxy to a set of downstream devices. In this case, there is a requirement to convey, along with each RPC, the target to which the request is being transmitted.
  • Scenarios where the network element participates in arbitration between multiple writers. In this case, the payload of a Set RPC carries additional data that allows the target to determine whether the client is the most up-to-date writer, and hence determine whether to accept a Set request.
  • Implementations of gNMI wherein the target wishes to supply additional (non-fatal) errors or warnings to the client in Get or Subscribe RPCs. For example, conveying that a warning condition occurred when retrieving the configuration (e.g., the device was not able to map a part of a native schema back to the requested schema).

The common requirement of these cases is that they require additional data to be supplied in addition to the existing payload without changing the RPC's semantics. If the semantics of an RPC are to be changed, defining an additional service (e.g., FooGNMIExtService with a new MagicSubscription RPC) provides a clean way to extend the service definition. However, for the cases above, where the semantics stay fundamentally the same, bifurcating RPC paths does not seem an optimal solution.

One proposed solution to this problem is to add additional fields in the gRPC metadata. There are a number of downsides to this approach:

  • Logging, tracing or debugging frameworks must be extended to support logging gRPC metadata to allow the whole context of a request to be understood. Where logging frameworks simply take the serialised protobuf input as the logging payload, metadata is omitted.
  • Semantically, since in some cases (e.g., that of the proxy) the information is really a first class requirement of the RPC payload, it seems fragile to rely on metadata for such information.

This proposal seeks to define a means by which existing gNMI RPC payloads can be extended using a non-metadata approach.

Proposal

We propose to introduce an additional repeated field to all top-level gNMI RPC messages (i.e., Capability(Request|Response), Get(Request|Response), Set(Request|Response), and Subscribe(Request|Response)) named extension. This field is defined to carry a gnmiext.Extension message.

We propose that the Extension message is defined as follows:

message Extension {
	oneof ext {
		ProxyDetails proxy = 1;
		...
		RegisteredExtension registered_ext = NN;
	}
}

message RegisteredExtension {
	enum ExtensionID {
		UNSET = 0;
		EXPERIMENTAL = 999;
	}
	ExtensionID id = 1;
	bytes msg = 2;
}

This splits gNMI extensions into two sets:

  • Well-known extensions: These are defined as the set of extensions that are expected to be required by multiple implementations. For example, numerous use cases have been described for carrying information that is useful to a target which acts as a proxy for other gNMI targets. For such common use cases, we propose to define messages directly within the gNMI extensions module.

An example ProxyDetails extension is shown below.

// Credentials stores the authentication information for a proxy.
message Credentials {
  string username = 1;   // The login username.
  string password = 2;   // The authentication password.
  // TODO: Extend to other auth, e.g., SSL certificates.
}

message ProxyDetails {
  string address = 1;           // Address of the proxy.
  Credentials credentials = 2;  // Authentication credentials.
  proto.Any extra = 3;          // Additional (arbitrary) details.
}
  • Registered Extensions: In other cases, extensions may be more esoteric, or required only in a subset of cases. In this case, we propose to register such extensions (particularly, to ensure that extensions do not make undefined modifications to the base behaviour expected of an RPC), but leave their payload defined by contrib definitions. We propose that an extension ID is allocated from the ExtensionID simply by requesting such an extension, and providing a link to a reference to the specification for the extension. The payload of the RegisteredExtension msg field is the marshalled contents of the message. The ExtensionID acts in the same manner as the type field of google.protobuf.Any, albeit with more restrictions as to the allowed set of types.

If a registered extension becomes popular, it may be promoted to a well-known extension. Initially, we propose that RegisteredExtension IDs are assigned simply through an email request. We do not propose to assign an extension ID to a well-known extension. When extensions are promoted to well-known status, their extension ID will be deprecated over time.

Open Questions

  • Currently, this proposal does not allow supported extensions to be discovered. The Capability RPC could be extended to add such discovery.

Clarification request: Initial updates for STREAM SAMPLE request?

IMHO, it isn't clear whether, for a STREAM SAMPLE subscription, the target system should send an initial set of updates followed by sync_response.

I am referring to chapter "3.5.1.5.2 STREAM Subscriptions" of the specification.

Paragraph for ON_CHANGE mode explicitly states that:
"For all ON_CHANGE subscriptions, the target MUST first generate updates for all paths that match the subscription path(s), and transmit them. Following this initial set of updates, updated values SHOULD only be transmitted when their value changes."

By contrast, the paragraph for SAMPLE mode does not specify sending initial updates, so one might conclude that initial updates are not required in the SAMPLE case.

However, section "3.5.1.4 The SubscribeResponse Message", which is common for all STREAM subscriptions, specifies:
"update - a Notification message providing an update value for a subscribed data entity as described in Section 3.5.2.3."

Following to section "3.5.2.3 Sending Telemetry Updates":
"When the target has transmitted the initial updates for all paths specified within the subscription, a SubscribeResponse message with the sync_response field set to true MUST be transmitted to the client to indicate that the initial transmission of updates has concluded. This provides an indication to the client that all of the existing data for the subscription has been sent at least once."

Now, IMHO, it becomes a bit confusing - it looks like the paragraph from 3.5.2.3 implicitly assumes that the target will transmit the initial update set for all subscription modes.

It would be great to get clarification.

3.5.1.1 Additional SubscribeRequest message MUST respond with SubscribeResponse specifying error status

The 0.4.0 specification states "...If an additional SubscribeRequest message specifying a SubscriptionList is sent via an existing channel, the target MUST respond to this message with SubscribeResponse message indicating an error status, with a code of InvalidArgument (4), existing Subscribe RPCs on the channel, and other gRPC channels MUST not be modified or terminated...".

a) With respect to the first reference to "channel", should "channel" be changed to "subscription"? Otherwise this statement does not seem to make sense to me.

b) If my interpretation is correct and this statement addresses the case whereby a SubscribeRequest containing a SubscriptionList is received on an existing subscription, I am puzzling over how to return the appropriate Status without closing the existing subscription. Since SubscribeResponse has deprecated the use of the Error attribute, there seems to be no way to generate an error response without triggering the closure of the existing subscription?

Clarification request: gnmi.proto ModelData.name field

The ModelData message, as used in a CapabilityResponse (among other places), defines a field name, described as the "Name of the model" (seen here).

The clarification requested is to describe what statement of a YANG module is used to provide the value for name. Is it the YANG module name, or the YANG module namespace which is used?

In the NETCONF equivalent of CapabilityResponse, the YANG module namespace is used as the value of the <capability> elements, with the module name appended as a query parameter.

For example, a NETCONF server indicates it supports the YANG module openconfig-bgp.yang by providing a capability like:

<capability>http://openconfig.net/yang/bgp?module=openconfig-bgp&amp;revision=2017-02-02</capability>

If your intent was to make name in the ModelData for this example be openconfig-bgp, and for this field to be suitable as a schema collection key on a server, module name collisions are possible (as this repository saw early on, leading to the module * -> module openconfig-* changes) if the module namespaces are ignored.

The above choice could be phrased with a question, "is a GNMI server implementation allowed to load two YANG modules with the same module name in different module namespaces?". If yes (to my mind an expectation of NETCONF servers), using the module name (as opposed to the namespace) has this problem.

GNMI RPC Authentication

Few questions on the authentication scheme as specified here

The spec mentions that username/password per-RPC can be carried within the RPC metadata. If the client (for example) uses C++ GRPC implementation, custom <key, value> pairs in the GRPC metadata can be added using the void ClientContext::AddMetadata method.

Is this sufficient? The issue here is that the client and the target need an offline agreement to ensure the correct "keys" are used - that is, the client is expected to use the keys "username" and "password" (example: {"username": "johndoe", "password": "helloworld"}).

The spec also mentions "Leverage gRPC authentication support." - This is a bit generic and still would require some offline agreement between the server-side implementation and the client.

Any details will be much appreciated. Thanks!

Encoding scope of gnmi query

The path/element concept does not contain any information about the namespace to be used, while this is available with xpath. So the result is not well defined if the NETCONF/gRPC server supports multiple YANG modules - which potentially partially overlap.

To keep things simple, I would propose just defining the name of the YANG module at the path level. The query would then only be executed on objects which belong to the specified YANG module. This would be similar to xpath in YANG as used for leafref/must statements - where you can only build references within the scope of the same module.

encoding the root node

The latest available version of gnmi specification (v0.2.2) is specifying the following:
The root node (/) is indicated by encoding a single path element which is an empty string

While in the gnmi path conventions (Feb 2017) the root path is defined as zero length list.

So it needs to be clearly defined whether the root node is encoded as [] or [""] (using Python syntax).

Little remark:
The release date in the gnmi path conventions is Feb 2016, but it should be Feb 2017.

gNMI Specification: Root Path should be an empty slice

In the specification the root path is defined to be a path with a single empty element in it, but this is inconsistent with non-root paths, which don't start with an empty element. This makes writing code to deal with paths more complex. Instead the root path should be defined to be the empty path.

An example path:

path: <
  element: "a"
  element: "b"
  element: "c"
>

A root path from the specification

path: <
  element: ""
>

The root path should be a prefix of every other path. It is not in this case because other paths do not start with "". This also means code can't simply concatenate two paths without doing special checks for a root path. The result of appending a path to a root path should be the same as the path.

Consider the Go code:

func pathAppend(x, y gnmi.Path) gnmi.Path {
	return gnmi.Path{Element: append(x.Element, y.Element...)}
}

func main() {
	root := gnmi.Path{Element: []string{""}}
	full := gnmi.Path{Element: []string{"a", "b", "c"}}
	part1 := gnmi.Path{Element: []string{"a"}}
	part2 := gnmi.Path{Element: []string{"b", "c"}}

	pathEqual(
		pathAppend(part1, part2),
		full) // This is true, as expected

	pathEqual(
		pathAppend(root, full),
		full) // Not true!

	isPrefix(root, full) // Not true!
}

Full code here: https://play.golang.org/p/_DAiVl4_1z

Whereas, if you change the definition of root to be

	root := gnmi.Path{Element: []string{}}

Then all checks return true.
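The same point can be checked with a self-contained sketch that models paths as plain element slices (gnmi.Path omitted here only so the example runs standalone):

```go
package main

import "fmt"

// isPrefix reports whether path x is a prefix of path y, with paths
// modeled as element slices. With the proposed root encoding (the
// empty slice), the root is a prefix of every path; with the spec's
// current encoding ([""]) it is not.
func isPrefix(x, y []string) bool {
	if len(x) > len(y) {
		return false
	}
	for i, e := range x {
		if y[i] != e {
			return false
		}
	}
	return true
}

func main() {
	full := []string{"a", "b", "c"}
	fmt.Println(isPrefix([]string{}, full))   // proposed root: true
	fmt.Println(isPrefix([]string{""}, full)) // spec's [""] root: false
}
```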

gNMI Specification for credentials does not allow for binary passwords

The specification currently says to include "username" and "password" in client metadata. According to the gRPC spec, only fields with "-bin" suffixes are subject to base64 encoding/decoding. We should update the spec to call out the need for this suffix explicitly as certain characters within a password will cause the RPC to fail at the protocol level with a vague error message.

rpc error: code = Internal desc = stream terminated by RST_STREAM with error code: PROTOCOL_ERROR

The gRPC spec for reference.
https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md

And the relevant bit from the Custom Metadata section.
"Note that HTTP2 does not allow arbitrary octet sequences for header values so binary header values must be encoded using Base64 as per https://tools.ietf.org/html/rfc4648#section-4. Implementations MUST accept padded and un-padded values and should emit un-padded values. Applications define binary headers by having their names end with "-bin". Runtime libraries use this suffix to detect binary headers and properly apply base64 encoding & decoding as headers are sent and received."
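For illustration, a client could apply this rule itself before attaching the value as metadata. A minimal Go sketch of the encoding rule (the header name password-bin is an assumption for the example, not something mandated by gNMI):

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

// encodeBinaryHeader applies the gRPC rule for binary metadata: the
// header name must end in "-bin", and the value is base64-encoded.
// Per the spec, implementations should emit un-padded values, hence
// RawStdEncoding rather than StdEncoding.
func encodeBinaryHeader(name string, value []byte) (string, string) {
	if !strings.HasSuffix(name, "-bin") {
		name += "-bin"
	}
	return name, base64.RawStdEncoding.EncodeToString(value)
}

func main() {
	// A password containing arbitrary octets that would break a
	// plain ASCII header.
	k, v := encodeBinaryHeader("password", []byte{0xde, 0xad, 0xbe, 0xef})
	fmt.Println(k, v) // password-bin 3q2+7w
}
```

In practice, gRPC runtime libraries perform this encoding automatically once the "-bin" suffix is present; the sketch only makes the on-the-wire transformation visible.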
