vmware-archive / cascade
A Just-In-Time Compiler for Verilog from VMware Research
License: Other
The FPGA world suffers a lot from fragmentation: some tools produce Verilog, some VHDL, and some only subsets of those languages. Creating a low-level, LLVM-like alternative would help everyone: HDL implementations would only need to generate this low-level HDL, and routers/synthesizers would only need to accept it. Look at LLVM or WebAssembly and how many languages and targets both support now. With more open source tools for FPGAs, this is more feasible now than ever. Most people suggest adapting FIRRTL for this. Please check the discussion and provide feedback if you have any. There is a good paper on FIRRTL's design and its reusability across different tools and frameworks.
See f4pga/ideas#19
Bug report. Commit 0b7d46fd.
In certain instances, when generate loops are assigned the same name, Cascade crashes. In all of the following test cases, the typechecker should catch the error. The first two test cases demonstrate unhandled crashes, while the last two show some inconsistency in when the crash occurs.
module DUMMY(input wire[31:0] in);
endmodule
genvar i;
for(i = 0; i < 2; i=i+1)
begin : loop_0
wire [31:0] num;
DUMMY dummy(num);
end
for(i = 0; i < 2; i=i+1)
begin : loop_0
wire [31:0] num2;
DUMMY dummy(num2);
end
initial $finish(0);
In debug mode, Cascade crashes with the message:
>>> cascade: ./src/base/undo/undo_map.h:121: cascade::BaseUndoMap<K, V, H, E>::const_iterator cascade::BaseUndoMap<K, V, H, E>::insert(K, V) [with K = const cascade::Identifier*; V = cascade::ModuleDeclaration*; H = cascade::HashId; E = cascade::EqId; cascade::BaseUndoMap<K, V, H, E>::const_iterator = std::__detail::_Node_const_iterator<std::pair<const cascade::Identifier* const, cascade::ModuleDeclaration*>, false, true>]: Assertion `map_.find(k) == map_.end()' failed.
Aborted (core dumped)
module DUMMY(input wire[31:0] in);
endmodule
genvar i;
for(i = 0; i < 2; i=i+1)
begin : loop_0
end
for(i = 0; i < 2; i=i+1)
begin : loop_0
wire [31:0] num2;
DUMMY dummy(num2);
end
initial $finish;
In debug mode, Cascade crashes with the message:
>>> cascade: src/verilog/transform/de_alias.cc:141: virtual void cascade::DeAlias::AliasTable::visit(const cascade::ContinuousAssign*): Assertion `rlhs != nullptr' failed.
Aborted (core dumped)
module DUMMY(input wire[31:0] in);
endmodule
genvar i;
for(i = 0; i < 2; i=i+1)
begin : loop_0
wire [31:0] num2;
DUMMY dummy(num2);
end
for(i = 0; i < 2; i=i+1)
begin : loop_0
end
initial $finish;
This case doesn't cause a crash. This is in contrast with the previous test case, where the order of the loops did cause a crash.
genvar i;
for(i = 0; i < 2; i=i+1)
begin : loop_0
wire [31:0] num;
end
for(i = 0; i < 2; i=i+1)
begin : loop_0
wire [31:0] num;
end
initial begin
$display("This doesn't cause an error. Only module instantiations cause errors.");
$finish;
end
This also doesn't cause a crash. It appears that if we don't use modules in these loops, it does not crash.
When I make a change in a .h file and try to rebuild Cascade, make does not detect the changes.
Repro steps
Make a change in src/base/bits/bits.h
Run make
I get Nothing to be done for 'all'.
I expected the project to be recompiled.
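The usual fix for this class of problem is to have the compiler emit dependency files so that header changes invalidate every object that includes them. Here's a minimal sketch using standard gcc/clang flags; the variable names are assumptions for illustration, not taken from Cascade's actual Makefile:

```make
# -MMD writes a .d dependency file alongside each object;
# -MP adds phony targets for headers so deleting one doesn't break make.
CXXFLAGS += -MMD -MP

OBJS = src/runtime/runtime.o   # ...plus the rest of the objects

# Pull in the generated dependency lists, if they exist yet.
-include $(OBJS:.o=.d)
```

With this in place, touching a header listed in a .d file marks the dependent objects out of date on the next make.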
In general, let's make the user's life easier: both for handled and unhandled errors. When we're done with this, let's update the README to show off the new interactions.
Let's hold off on this until we have a clear idea what we want out of runtime logging. There's no sense in doing it for its own sake as busy work right now.
Here's a minimal example:
localparam GLOBAL = 1;
module foo();
wire[GLOBAL:0] x;
endmodule
Interestingly, this works (meaning the typechecker emits a warning):
localparam GLOBAL = 1;
module foo();
wire x = GLOBAL;
endmodule
I would love to use this with yosys, is that possible?
Feature request.
I've been failing to learn Verilog/FPGA dev and would like something closer to a python REPL.
This looks like the project for me! Since I'm doing this only for fun, I'm using yosys and FPGAs supported by yosys.
Are there plans to add support for yosys?
Or is there a plan to fill out the documentation section on adding support for new backends, so I could add support for yosys myself?
Let's import some code!
We've kind of settled on what this should look like. Let's add a wrapper around Controllers and Views to record all inputs and outputs that cascade generates.
Since we're about to have support for variable arrays, it's probably worth revisiting the issue of why we have memories and fifos in the standard library.
Memories were originally introduced as a work-around for the lack of array support. If you believe in the idea that a standard library should contain ONLY what you CAN'T write in the host language, then this was a good decision. Eventually we strengthened the argument by adding read/write from/to file support. There's just no good way to do that in Verilog.
FIFOs appeared when we realized that it wasn't always realistic to store all of your inputs in a file. Sometimes you want to stream them. I think that here the argument was a little bit backwards. We wanted a file-streaming facility so we asked what the closest analogy to hardware was, we landed on FIFOs and because we still didn't have array support, it made sense to put their implementation in the standard library.
Once we have arrays, all of these arguments will get shaken up.
You can easily implement a memory with an array of registers, so that alone isn't a reason to keep it in the standard library. Worse, the fact that the standard memory is dual read port, single write port, looks EXTREMELY arbitrary now. Granted, it's what most hardware IP catalogs offer, but I don't see that as a compelling argument. Really the only thing that's interesting now is the ability to write values to and from files on shutdown and startup. If that's the case though, we should just move the (* __file *) annotation from memory directly onto registers and integer data types.
The same goes for implementing FIFOs, so keeping them in the standard library doesn't make a lot of sense either. They DO provide the ability to stream values to/from files, but that isn't really a FIFO concept at all. That's an OS abstraction. Better to call it what it is and offer a standard library component that looks like an iostream. There's precedent in the literature for this, so I think it's the direction we should move in. But does this need an enclosing data-type anymore? I don't think so. It seems like we can stick an annotation on a register and map reads and writes onto streaming operations.
From a performance point of view, standard library components are an obstacle to performance, as they block the open loop scheduling optimization. So this looks like a win from that perspective.
From the compiler point of view, putting annotations on variables also makes it easier for backend compilers to reason directly about this stuff. So far I'm the only backend developer, but I think that's going to change soon.
I won't move on this for a little while, so I'm open to some discussion as to whether this is a good idea.
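To make the proposal concrete, here's a sketch of what the annotation-on-a-register replacement for FIFOs might look like. The annotation name and its semantics are hypothetical --- this is a design sketch, not an implemented Cascade feature:

```verilog
// Hypothetical: reads of in pop the next value from the file,
// writes to out append a value to the file. The __stream annotation
// is a placeholder name, by analogy with the existing __file annotation.
(* __stream = "inputs.dat" *)  reg[31:0] in;
(* __stream = "outputs.dat" *) reg[31:0] out;

always @(posedge clock.val) begin
  out <= in + 1;  // stream values through, incrementing each one
end
```

The point is that no enclosing data type is needed: the backend can map reads and writes on the annotated register directly onto streaming operations.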
Cascade assumes that reads and writes to out-of-bounds array indices are undefined. This is actually only half true. Reads are undefined, so it's perfectly reasonable to replace x[LARGE_CONSTANT] with x[0]. Writes on the other hand, are not. The spec requires them to evaporate.
Currently cascade uses the same strategy for reads and writes (maps both to index 0), so the following program behaves incorrectly:
reg x[7:0];
initial begin
x[0] = 1;
x[1000] = 0; // cascade thinks this is undefined and performs the write at x[0]
$display(x[0]); // should print 1, prints 0
end
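A conforming implementation behaves as if every write were guarded by a bounds check, so the out-of-bounds write evaporates. A sketch of the spec's semantics (not Cascade's current implementation):

```verilog
reg x[7:0];
initial begin
  x[0] = 1;
  // The spec says an out-of-bounds write must have no effect,
  // as if it were guarded like this:
  if (1000 <= 7)
    x[1000] = 0;   // never executes; x[0] is untouched
  $display(x[0]);  // prints 1 under the spec's semantics
end
```

Reads, by contrast, may still map to any value, so replacing an out-of-bounds read with x[0] remains legal.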
These are part of the 2k5 spec (actually, they're part of the SystemVerilog spec --- but they're nice to have) and they are already supported by the runtime. They just haven't been exposed as system tasks.
Cascade emits a lot of warnings and after a while they might not be all that helpful.
Right now the quartus_server blocks until it's done. If you edit the program while it's compiling, that second compilation will block until the first completes. And at that point, once you've submitted the second compilation, the first shouldn't EVER complete, since it's out of date.
Also ---
Let's say you add
initial $finish;
to your program. That will trigger a recompilation, when what you actually want is for the program to stop running. Shutting down the runtime should cancel all active compilations as well.
And since this is going to involve changing the internal compiler toolflow a bit, let's also add a request for support for informative error messages if a sw or de10 compiler fails.
We already have a disable warnings command line flag. We should have corresponding flags for info and error messages.
Since these are the sorts of things that users probably don't want to type all the time, let's add support for reading command line options out of a .cascaderc file in the user's home directory. This will have potentially strange interactions with make test (since the user's rc file can override outputs which tests depend on), so let's not do this.
@kroq-gar78 and I found this while working on our project. The last few bits seem to get clobbered when we read in a .mem file, but only sometimes, and we have absolutely no idea what the triggering condition is.
Steps to Reproduce
logistic_in.mem containing:
009666 660066 666600 333333 000666 660099 999a00 600000 002ccc cc0003 333300 cccccd 005666 6600a9 999a00 3ccccc 00b999 9a0056 666600 a33333 003ccc cc00a3 333300 6ccccd 003000 000006 6666
minimal.v containing:
(*__file="logistic_in.mem"*)
Memory#(5, 32) in_mem(.clock(clock.val), .wen(0), .raddr1(), .rdata1(), .raddr2(), .rdata2(), .waddr(), .wdata());
Result
When you open logistic_in.mem after this operation, you will see the following:
9666 660066 666600 333333 666 660099 999a00 600000
2ccc cc0003 333300 cccccd 5666 6600a9 999a00 3ccccc
b999 9a0056 666600 a33333 3ccc cc00a3 333300 6ccccd
3000 6 0 0 0 0 0 0
Looking carefully, you will notice that the last few 6s get clobbered.
We have already checked to make sure that the memory module is large enough to hold all our values.
While working on our project, I felt that it would have been really useful for us to be able to use macros in the form of `define.
if/else if/else generate blocks are treated as nested inside the AST. This leads to the following (incorrect) set of automatically generated block names:
if (c1) : genblk0
else if (c2) : genblk0
else : genblk0
wire x;
where x's fully qualified name is genblk0.genblk0.genblk0.x rather than just genblk0.x.
While we're on the subject --- genblk naming starts with 1, not 0.
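For reference, the spec treats all the branches of one if/else-if/else chain as a single generate construct sharing one name at one scope level. A sketch of the expected behavior, assuming no other unnamed generate blocks precede this one:

```verilog
// The whole conditional chain is ONE generate block, genblk1
// (numbering starts at 1, not 0). Only one branch elaborates.
if (c1) begin        // : genblk1
  wire x;            // fully qualified name: genblk1.x
end else if (c2) begin
  wire x;            // still genblk1.x
end else begin
  wire x;            // still genblk1.x
end
```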
See https://codecov.io/gh/vmware/cascade/pull/48/changes
When using a large number of jobs with make -j<N> (e.g. -j20), I encounter one of two compile errors:
In file included from ./src/verilog/parse/parser.h:40:0,
from src/runtime/runtime.cc:46:
./src/verilog/parse/lexer.h:47:5: error: ‘yyParser’ does not name a type
yyParser::symbol_type yylex(Parser* parser);
^
compilation terminated due to -Wfatal-errors.
Makefile:144: recipe for target 'src/runtime/runtime.o' failed
make: *** [src/runtime/runtime.o] Error 1
make: *** Waiting for unfinished jobs....
In file included from ./src/verilog/parse/parser.h:40:0,
from src/target/common/remote_runtime.cc:44:
./src/verilog/parse/lexer.h:47:5: error: ‘yyParser’ does not name a type
yyParser::symbol_type yylex(Parser* parser);
^
compilation terminated due to -Wfatal-errors.
Makefile:144: recipe for target 'src/target/common/remote_runtime.o' failed
Or (occasionally):
In file included from ./src/verilog/parse/lexer.h:38:0,
from ./src/verilog/parse/parser.h:40,
from src/target/common/remote_runtime.cc:44:
./src/verilog/parse/verilog.tab.hh:40:0: error: unterminated #ifndef
#ifndef YY_YY_VERILOG_TAB_HH_INCLUDED
^
compilation terminated due to -Wfatal-errors.
Makefile:144: recipe for target 'src/target/common/remote_runtime.o' failed
make: *** [src/target/common/remote_runtime.o] Error 1
make: *** Waiting for unfinished jobs....
In file included from ./src/verilog/parse/parser.h:40:0,
from src/runtime/runtime.cc:46:
./src/verilog/parse/lexer.h:47:5: error: ‘yyParser’ does not name a type
yyParser::symbol_type yylex(Parser* parser);
^
compilation terminated due to -Wfatal-errors.
Running make again with any number of jobs produces a working executable. This suggests that there is a race condition between when the file src/verilog/parse/parser.h (where yyParser is declared) is created and when src/target/common/remote_runtime.o is built. This looks like a Makefile dependency issue.
Now that we have a neater mechanism for printing info messages, let's move profiling from using clog to using Runtime::info().
There are still some places where cascade's behavior deviates from the 2k5 spec.
The title says it all.
Some small things that came up this afternoon:
Technically this is okay, but cascade doesn't support it:
wire x[0:7]; // little-endian array
These are all errors and can only lead to confusion and hurt feelings:
wire w;
reg r;
assign r = 1; // Can't use assign statements with registers
always @(...) begin // Can't use procedural assignments with wires
w = ...
w <= ...
end
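For reference, the legal pairings are continuous assignment for nets and procedural assignment for variables. A corrected sketch of the example above (clock.val follows the convention used elsewhere in these reports):

```verilog
wire w;
reg r;
assign w = 1;              // continuous assignment drives a wire
always @(posedge clock.val) begin
  r <= 1;                  // procedural assignment targets a reg
end
```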
Arrays of module instantiations actually look A LOT like generate statements. I guess that's not surprising since we handle instantiations and generates nearly identically everywhere else.
We've got a fancy static style checker now. Some of the things it complains about are legitimate (we should fix these). Others are in direct conflict with our (google code) style guideline. These should be turned off.
w = x + y * z
is parsed as
w = (x + y) * z
rather than
w = x + (y * z)
When in doubt (aka always), use parens.
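Until the precedence bug is fixed, the workaround is to parenthesize explicitly so the parse can't go wrong:

```verilog
// Explicit parens force the intended grouping regardless of how
// the parser ranks + against *.
assign w = x + (y * z);
```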
Several other language features are parsed awkwardly or incorrectly as well:
List of port declarations syntax isn't perfect, nor is support for if/then/else blocks
>>> assign led.val = 8'hF;//OE
*** Parse Error:
/
^
This code should parse correctly.
Probably a holdover from when we refactored some compiler stuff. Not a big deal. Just get rid of them.
Commit 0b7d46fd. When a wire in an array is a non-trivial function of another wire in that array (i.e. not simply assigned to be equal to it), Cascade hangs without output.
wire grid[1:0];
assign grid[0] = 1;
assign grid[1] = grid[0] + 1;
initial $finish;
Cascade becomes unresponsive and needs to be TERMed from another process.
wire grid[1:0];
assign grid[0] = 1;
assign grid[1] = grid[0];
initial $finish;
Cascade terminates normally, as expected.
For various reasons, there may be some value in having a programmer- or administrator-exposed mechanism for telling cascade to drop whatever it's doing and move all modules of a particular type (e.g. logic, led, etc.) to a new target/location (e.g. sw, de10_jit, etc.). Let's drop this into the runtime and find a light-weight way of exposing it.
Cascade doesn't currently support real variables, or real arithmetic. If we ever want to support neural nets, we'll need this.
The section on properly setting up hardware on a Terasic DE10 Nano SoC is incomplete.
I'm trying to set up cascade with a de10; however, the instructions linked here for setting up the board for cascade (specifically, sshing into the board) are missing.
Thanks
On Ubuntu 18.04.1:
E: Unable to locate package ncurses
It seems that the correct package is libncurses-dev
(https://packages.ubuntu.com/search?keywords=ncurses)
Currently there are some language features which are making it through the parser, but which aren't supported. In most cases, they're blocked by assertions further down the pipe, but that isn't much of a consolation for users who have built cascade in release mode. Let's seal this up in the typechecker.
Currently we have the following language features which make it through the parser and need to be turned off:
Since it's been a while, it's probably time to revisit some early design decisions.
Early on, we made the decision to separate engines into two parts: cores and interfaces. Cores encapsulate the implementation-specific logic for a module, and interfaces encapsulate the mechanism by which an engine can communicate back with the runtime. This allowed us to do something neat: support engines which were located in a different process than the runtime.
From a design point of view, this is nice. It's a composable feature, so it's possible to set up long chains of delegation between the runtime and an engine. In practice, it doesn't really do anything other than degrade performance, so it's hard to imagine WHY you would want to, other than to say that you did.
The one place this feature has a chance to shine is in cleaning up the implementation of the sw_fpga. We instantiate a remote runtime (this thing that remote engines talk to) in another process and use it to manage sw engines corresponding to buttons and leds. This is good. But it's A LOT of code which hasn't aged all that gracefully just for the sake of cleaning up a barely-used corner of the code base.
Ultimately, we may very well want to have the flexibility to dynamically relocate engines. But cascade's top-level compiler (which is where the decision is currently made) doesn't feel like the right place to do so. This feels more like a target-specific implementation detail. And if that's the case, then as much as it pains me to refactor out code that I worked so hard on (just kidding --- it feels great), I'd argue that the simplification to the codebase is worth a lot more than the functionality we have right now.
Lots of incorrect behaviors here:
This doesn't trigger an error:
genvar i;
for (i=0; i < 10; i=i+1) begin : FOO
wire x;
end
initial $display(FOO[10000].x);
Nor does the use of an undefined constant inside of a subscript:
genvar i;
for (i=0; i < 10; i=i+1) begin : FOO
wire x;
end
initial $display(FOO[UNDEFINED].x);
Several key language features from the Verilog 2005 spec are still missing.
Run the following program
initial begin
$display("%d %d %h", 5);
end
Error message
>>> CASCADE SHUTDOWN UNEXPECTEDLY --- PLEASE FORWARD LOG FILE TO DEVELOPERS
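For reference, a well-formed call supplies one argument per format specifier. The mismatch above is what triggers the crash, and it's the sort of thing the typechecker ought to reject with a proper error instead:

```verilog
initial begin
  // Three specifiers, three arguments: the well-formed version.
  $display("%d %d %h", 5, 6, 7);
end
```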
Several students asked about this in Chris's concurrency office hours. Some implementations of verilog support the following non-standard system tasks: save() and reset(). Respectively, these save program state to a file and restore it. This is useful for particularly long-running applications where there may be some value in periodically backing up state.
Ever since we went performance-improvement crazy, the jit handoff has been acting a little funny. It seems like system tasks are being spuriously generated. Best case, this means that the simulation shuts down early with incorrect results. Worst case, we segfault when we try to handle a system task that doesn't exist.
Profiling says we spend more than 50% of our runtime in them!
Failed parses lead to memory leaks in the parser. It's a slow leak unless you really work for it, say by writing a script to spam cascade with failed parses, but it's still worth fixing.
I'm seeing this when trying to do a fresh make on MacOS High Sierra. I don't know if it matters, but I installed my dependencies with brew instead of ports. Here is the specific error:
ccache g++ --std=c++14 -Werror -Wextra -Wall -Wfatal-errors -pedantic -Wno-overloaded-virtual -Wno-deprecated-register -march=native -fno-exceptions -fno-stack-protector -O3 -DNDEBUG -Iext/googletest/googletest/include -I. -I./ext/cl -c src/runtime/runtime.cc -o src/runtime/runtime.o
In file included from src/runtime/runtime.cc:46:
In file included from ./src/verilog/parse/parser.h:40:
In file included from ./src/verilog/parse/lexer.h:38:
verilog.tab.hh:1001:82: fatal error: too many arguments provided to function-like macro invocation
basic_symbol (typename Base::kind_type t, YY_RVREF (std::pair<Identifier*, Maybe<RangeExpression>*>) v...
^
verilog.tab.hh:74:10: note: macro 'YY_RVREF' defined here
# define YY_RVREF(Type) Type&&
^
1 error generated.
make: *** [src/runtime/runtime.o] Error 1
Inlining is currently all-or-nothing. Either you run cascade and it inlines your entire program, or you use the --disable_inlining flag and it doesn't bother. It would be nice to be able to annotate modules with a flag that cascade considers at a finer granularity. Once we have this, we can simplify the implementation of the --disable_inlining flag by simply using it to add the annotation to every module.
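Concretely, this might look something like the following. The annotation name is a placeholder chosen by analogy with the existing (* __file *) annotation, not an implemented feature:

```verilog
// Hypothetical: ask cascade not to inline this module into its parent.
(* __no_inline *)
module Slow(input wire clk);
  // ...
endmodule
```

--disable_inlining would then reduce to attaching this annotation to every module in the program.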
It seems that both >> and >>> perform a logical right shift when working with signed wires, even though >>> should be arithmetic.
Commit: 835a7
Steps to reproduce:
Here is my file:
wire signed[31:0] minus_one = -1;
wire signed[31:0] left_shifted, test1, test2;
assign left_shifted = minus_one <<< 1;
assign test1 = left_shifted >> 1;
assign test2 = left_shifted >>> 1;
initial begin
$display("minus_one %d", minus_one);
$display("left_shifted %d", left_shifted);
$display("ggt %d gggt %d", test1, test2);
end
When I run this, I get the following output:
minus_one -1
left_shifted -2
ggt 2147483647 gggt 2147483647
However, I expected the following output:
minus_one -1
left_shifted -2
ggt 2147483647 gggt -1
If we try to do this at the REPL, we get some mixed signals:
>>> parameter A=A;
ITEM OK
>>> Segmentation fault (core dumped)
If we debug with gdb, we see the segfault happens at:
0x00000000004c2f47 in cascade::Resolve::get_resolution (this=0x0, id=0x0)
at src/verilog/analyze/resolve.cc:47
47 const Identifier* Resolve::get_resolution(const Identifier* id) {
because id is a nullptr.
Now that the cascade lab is over, it would be nice to import the two main types of nw solutions (combinational and pipelined) as benchmarks.
While we're at it, there's some ugly redundancy in our test/benchmark tests. Everything which is a benchmark has a shorter running duplicate in the test folder. It would be nicer if the benchmark folders had multiple versions of each program (some shorter running, some longer running, this way we could see how performance scales, for instance). If we had this, the tests could just invoke the shortest running version of each benchmark.
Also while we're here --- FinishStatement takes a Number in the AST, but the spec allows it to take an expression as an argument. This is preferable if we're trying to control the output or lack thereof in a finish statement with a parameter.
Also also while we're here --- Let's add a make benchmark target so that we can automate timing experiments.
Cascade doesn't currently support arrays, either declarations or dereferences. Most of the infrastructure is there, we just haven't turned the crank on it.
Smaller memory footprints mean better support for larger programs on memory constrained devices. Let's focus this ticket on reductions that don't cost runtime performance (a hit to typechecking is okay). Here are some ideas:
Tokenize Node::source_. AST nodes store a reference to the location where they appear in the user's code. Currently these are stored as strings, which is less than optimal.
Dead code elimination. Every time we unroll a loop generate statement, we introduce an implicit localparam declaration for its genvar. Most of the time, we'll constant propagate these values away. When we do, there's no reason to keep the declaration around.
full_id caching. We cache the results of Resolve().get_full_id() in the AST. That's a lot of pointers and a lot of memory for a value that we rarely (if ever) access more than once.
Monitors. We attach a monitor to every node in the AST, but for the most part, the only AST node whose monitor points to something other than its parent is an Identifier. If we update the logic for SwLogic::notify(), we can move monitor into Identifier and save some space.
State tracking. The only place we use state counters in SwLogic is in Statements. There's also no reason for them to be size_ts. State counters are pretty much bounded between 0 and 3.
AST flags. We have a couple of flags floating around the AST. They're stored as bools, which is good, but we really only need one field in Node. Derived classes can index into this single variable and set bits as they need to.
Vectors. We keep A LOT of vectors in the AST. It seems negligible, but STL vectors keep four values worth of meta-data when really we only need 3 (the fourth is a pointer to an allocator). There's also the issue that vectors grow geometrically. If we're not careful, we could be allocating a lot more space than we actually use. Let's try implementing a stripped down version of vectors and see what that nets us.
Source and line numbers. We keep source and line number decorations for EVERY NODE IN THE AST. That's 8 bytes of overhead per node! But really, we only need to remember the source and line number of the last thing that we parsed, since the only place we use this information is when we emit type checker errors. Let's excise these values altogether and store location information temporarily in the parser for the last thing that it parsed. (Incidentally, not keeping track of all of this data leads to a huge performance boost as well, since we currently copy location information whenever we clone a node.)
A pretty minor improvement --- some elements in the AST have enums as members. c++11 lets us specify the size of these enums instead of the default (int). We can knock these all down to chars. Along similar lines, there's no reason for Tokens to be size_t (64 bits). 32 should suffice. This should make a dent in the size of Ids.
Expressions all have Bit decorations. This includes Numbers, which also have Bits as their value. The two are always the same, so there's no reason to duplicate them (especially since Numbers make up a large fraction of the AST in any program).
Improved de-aliasing analysis. Our current dealiasing analysis only works on continuous assigns of the form assign x = y;. This misses continuous assignments to bit slices and leaves us with code which is longer than it needs to be.
Improved constant propagation. Nets which are bound by continuous assignment to static constants can be replaced by runtime constants. This should improve performance a bit and make the code which we're left with a little bit smaller.
Here's a good one: nested expressions. Any time you wrap an expression in parens you end up with an AST node called a nested expression which contains the expression you put in the parens. That's just wasted space. The AST already stores precedence relationships implicitly in its structure. I think we put this in back when we didn't have support for operator precedence and we were worried about losing it via repeated scan/print/scans.
We store the inverse of resolution (reference) as a decoration on identifiers in the AST. This is kind of wasteful, since only identifiers which appear inside of declarations need to track this information. Let's move this decoration into declarations and shrink the AST even further.
One last major improvement, then it's time to call it quits on this ticket for a little while, I think. Combinators in the AST are a huge waste of space. A Maybe is a 40 byte wrapper around an 8 byte pointer. A Many adds similar overhead and appears more frequently in the AST. Getting rid of these is... decidedly non-trivial. But I started doing it before I wrote this note (and I'm almost done, so I'm relatively confident that it will work).