gpt-2's People

Contributors

albertwujj, armaanbhullar, christopherhesse, github30, imgntn, jackclarksf, madisonmay, memo, minimaxir, mrene, natemurthy, nshepperd, rkfg, tlkh, webproduktion01, wuthefwasthat

gpt-2's Issues

Creating dictionary files

Right now I'm executing createspmodel.sh on a text file containing all Dutch-language books from Project Gutenberg to generate the dictionary files. Do you think this is sufficient? Or should I also use a Wikipedia scraper, for example, to extend the amount of text used for creating the dictionary files?

To me it seems like a case of 'the more data, the better' when initialising the vocabulary files. @rkfg, could you give your opinion on this?

Loss calculation and updating weights

@rkfg Kinda embarrassing to ask this after working with GPT-2 for a couple of weeks already, but here goes. I thought I had it all clear in my mind, but as I started thinking about it: how does the loss actually get calculated while fine-tuning in an unsupervised way? And how do the weights get updated? The longer I think about it, the more I doubt whether I fully understand the training code.

If it were supervised, I can imagine it working as follows:
While training, it takes a sample of the length defined in hparams.json and predicts the next word. Then the cross-entropy is calculated from the predicted next word and the actual next word that should occur.

Is this correct? And is SGD/Adam then performed based on that cross-entropy?
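
To make the picture concrete, here is a toy sketch of how I currently imagine it (my own illustration with made-up sizes, not the fork's actual train.py): the labels are simply the input tokens shifted one position to the left, and Adam minimizes the mean cross-entropy over those next-token predictions.

    import numpy as np
    import tensorflow as tf

    # Toy stand-in for GPT-2 (embedding + linear layer over a tiny vocabulary),
    # only to show where the next-token cross-entropy comes from.
    vocab_size, n_embd = 16, 8
    tokens = tf.placeholder(tf.int32, [None, None])            # [batch, seq]
    wte = tf.get_variable('wte', [vocab_size, n_embd])
    h = tf.gather(wte, tokens)                                  # [batch, seq, n_embd]
    logits = tf.layers.dense(h, vocab_size)                     # [batch, seq, vocab]

    # The "labels" are just the inputs shifted one position to the left:
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=tokens[:, 1:],    # the actual next token
            logits=logits[:, :-1]))  # the prediction from the preceding context

    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)     # Adam updates the weights

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        batch = np.random.randint(0, vocab_size, size=(2, 10))
        print(sess.run([loss, train_op], feed_dict={tokens: batch})[0])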

Sorry to bother you, but hopefully you have some spare time to give a short explanation.

GPT's (and GPT-2's) architecture

@rkfg This issue does not specifically concern this repository, but perhaps you could give some more insight into GPT's architecture. In their paper it is stated that GPT (and GPT-2) is a multi-layer decoder-only Transformer. From a high-level perspective I can understand that an encoder+decoder architecture is useful for sequence-to-sequence applications but becomes less attractive for language modeling tasks, so it seems logical that OpenAI decided to stick with the multi-layer decoder only. However, during the training/fine-tuning stage of GPT, tokens are still encoded and eventually decoded within these decoder layers, right?

I'm not sure whether my question is clear, but it basically comes down to this: GPT's paper states that they use a decoder-only Transformer, but I cannot find their arguments for this decision. Why not just use the regular Transformer architecture, for example?

Thanks in advance!
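
For anyone else reading along, my own (possibly incomplete) summary is that "decoder-only" mainly means the blocks keep the masked, causal self-attention of a Transformer decoder but drop the cross-attention into an encoder, since there is no separate encoder for language modeling. A toy illustration of that causal mask, written by me and not taken from OpenAI's code:

    import numpy as np

    # Causal (autoregressive) mask: position i may only attend to positions <= i.
    def causal_mask(n):
        return np.tril(np.ones((n, n), dtype=np.float32))  # 1 = may attend, 0 = masked

    print(causal_mask(4))
    # [[1. 0. 0. 0.]
    #  [1. 1. 0. 0.]
    #  [1. 1. 1. 0.]
    #  [1. 1. 1. 1.]]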

Use of prefix and suffix tags to distinguish between documents

@rkfg For the past few days I have been downloading a bunch of data from Wikipedia in order to train my model from scratch for the Dutch language. However, I'm wondering whether I would benefit from using <|startoftext|> and <|endoftext|> tags to distinguish between the wiki pages in my (large) .txt file, or whether concatenating all pages together (removing the blank lines) would be sufficient. Did you use these tags for your Russian book corpus?
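
To make the question concrete, this is roughly how I would insert the tags (a sketch with placeholder file and directory names, not the code I actually ran):

    import glob

    # Hypothetical sketch: concatenate Wikipedia pages into one training file,
    # separated by <|endoftext|> so the model can learn document boundaries.
    with open('dutch_wiki_combined.txt', 'w', encoding='utf-8') as out:
        for path in sorted(glob.glob('wiki_pages/*.txt')):
            with open(path, encoding='utf-8') as f:
                out.write(f.read().strip())
            out.write('\n<|endoftext|>\n')   # document separator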

Protobuf::FatalException

python3 src/interactive_conditional_samples.py
PATH models/345M/hparams.json
2019-07-08 11:06:36.791192: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-07-08 11:06:36.797641: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2300000000 Hz
2019-07-08 11:06:36.798794: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x561e35855640 executing computations on platform Host. Devices:
2019-07-08 11:06:36.798849: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): <undefined>, <undefined>
WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
PATH models/345M/checkpoint/run1
WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
The model has 86505216 parameters
Model prompt >>> phones
[libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1506] CHECK failed: (index) < (current_size_):
terminate called after throwing an instance of 'google::protobuf::FatalException'
  what():  CHECK failed: (index) < (current_size_):
Aborted

Duration of encoding a ~2.4 GB dataset

Currently I have a dataset of roughly 2.4 GB, and I am trying to encode it in Google Colab. However, after encode.sh finishes the 'encoding with spm' step, it takes forever to finish the next step, 'Loading the data and packing into encoded.npz'. It says it needs to read 236 files and the estimated remaining time is around 10+ days. Is this normal for a dataset of this size? I expected the encoding to take only a couple of hours.
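
For context, my rough understanding of that step (an assumption about what the script does, not its actual code) is that it just reads the per-chunk token ids and writes them into a single compressed archive, which I would expect to be I/O-bound rather than take days:

    import glob
    import numpy as np

    # Hypothetical sketch of "packing into encoded.npz": the file layout and
    # extension are placeholders, not the repository's real intermediate format.
    chunks = []
    for path in sorted(glob.glob('encoded_parts/*.ids')):
        with open(path, encoding='utf-8') as f:
            chunks.append(np.array([int(t) for t in f.read().split()], dtype=np.int32))

    np.savez_compressed('encoded.npz', *chunks)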

Training on Telugu-english corpus

Hey,
I wanted to train the model on a corpus of my own. It would be great if you could walk me through the procedure. I have a lot of Telugu text in Latin script, i.e. the English alphabet. I was wondering how to generate the BPE encodings and the vocab files for this particular language, and how to use them. It would be of great help if you could guide me.
Thanks.
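
For what it's worth, here is a minimal sketch of generating a BPE model and vocab directly with sentencepiece; the input file name, vocab size and sample sentence are placeholders, and judging by the training log further down this page, the fork's createspmodel.sh wraps a similar call:

    import sentencepiece as spm

    # Train a BPE tokenizer; sp.model and sp.vocab are written next to the script.
    spm.SentencePieceTrainer.Train(
        '--input=telugu_latin.txt --model_prefix=sp '
        '--vocab_size=8192 --model_type=bpe '
        '--user_defined_symbols=<|n|>,<|endoftext|>')

    sp = spm.SentencePieceProcessor()
    sp.Load('sp.model')
    print(sp.EncodeAsIds('nenu telugu nerchukuntunnanu'))   # token ids for a sample line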

Mismatch between tensor shapes (due to vocabulary size)

First, I call !sh scripts/createspmodel.sh combined_wo_startendtag.txt 8192 to generate hparams.json, sp.model and sp.vocab (with a vocabulary size of 8192).

Then, when trying to train my gpt-2 model, I get the following error:

InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [8192,768] rhs shape= [50257,768]
	 [[node save/Assign_147 (defined at ./train.py:139) ]]

This is caused in train.py by line 139:

        saver = tf.train.Saver(
            var_list=all_vars,
            max_to_keep=5,
            keep_checkpoint_every_n_hours=2)

and line 155: saver.restore(sess, ckpt)

My question is: do you have any idea where rhs shape = [50257,768] comes from? I know the LHS comes from my hparams.json, and when I generate sp.model and sp.vocab with a vocabulary size of 50257, everything works fine. However, I currently need a smaller size.

I was wondering whether you also ran into this problem and if so, how you fixed it.

Many thanks in advance.
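
For context, the rhs shape [50257, 768] is, as far as I can tell, the token-embedding matrix of the pretrained checkpoint being restored: GPT-2's original BPE vocabulary has 50257 entries, so it cannot be assigned into an 8192-row embedding. Below is a hedged sketch of restoring only the variables whose shapes match the current graph (illustrative only, not this fork's actual fix, and the checkpoint path is a placeholder):

    import tensorflow as tf

    # Restore from the pretrained checkpoint only the variables whose shapes
    # match the current graph; mismatched ones (e.g. the wte embedding) stay
    # freshly initialized. Illustrative sketch, assumes the graph already exists.
    ckpt = tf.train.latest_checkpoint('models/345M')
    reader = tf.train.NewCheckpointReader(ckpt)
    ckpt_shapes = reader.get_variable_to_shape_map()

    restorable = [v for v in tf.trainable_variables()
                  if v.op.name in ckpt_shapes
                  and v.shape.as_list() == ckpt_shapes[v.op.name]]

    saver = tf.train.Saver(var_list=restorable)
    # saver.restore(sess, ckpt)   # run once the session is created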

Since the merge from nshepperd, the state of the Adam optimizer is no longer saved

Code before the merge:

gpt-2/train.py

Line 95 in 2239a41

saver = tf.train.Saver()

Code after the merge (only storing a subset of all variables):

gpt-2/train.py

Lines 148 to 150 in c57000e

saver = tf.train.Saver(
var_list=all_vars,
max_to_keep=5)

This makes the checkpoints incompatible between the two versions. Not storing the optimizer state makes the checkpoints a lot smaller, but it makes resuming the optimization slower.

The best option would be to add two command-line parameters, one for loading and one for saving, so that you could e.g. load a checkpoint without optimizer state and store it back with the new optimizer state.

In subsequent runs you would then load and save the checkpoints including their optimizer state. When you're finished with training, you could write a final snapshot without the optimizer parameters.

I am not sure whether the old version stores data that is not even needed to resume training, so maybe all_vars needs to be updated according to what kind of snapshot you want to load/save.
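
To make the proposal concrete, here is a rough sketch; the flag names and the way the Adam slot variables are filtered out are placeholders and have not been tested against this repository:

    import tensorflow as tf

    # Model weights only: drop the Adam slot variables and beta power accumulators.
    model_vars = [v for v in tf.global_variables()
                  if '/Adam' not in v.name
                  and 'beta1_power' not in v.name
                  and 'beta2_power' not in v.name]

    slim_saver = tf.train.Saver(var_list=model_vars)   # weights only (small)
    full_saver = tf.train.Saver()                      # weights + optimizer state

    # if args.load_optimizer_state: full_saver.restore(sess, ckpt)
    # else:                         slim_saver.restore(sess, ckpt)
    # ... training loop ...
    # full_saver.save(sess, checkpoint_path)           # resume-friendly checkpoint
    # if args.export_slim: slim_saver.save(sess, slim_path)  # small final snapshot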

models/345M/checkpoint/run1; Not a directory

I am assuming that "token_count" means the number of space-separated words.

./createspmodel.sh ../src/dataset.txt 863
Creating model from /home/psharma/gpt-2_fork/src/dataset.txt, vocabulary size is 863, sampling 172600 random lines
sentencepiece_trainer.cc(49) LOG(INFO) Starts training with :
TrainerSpec {
  input: /home/psharma/gpt-2_fork/src/dataset.txt
  input_format:
  model_prefix: sp
  model_type: BPE
  vocab_size: 863
  self_test_sample_size: 0
  character_coverage: 0.9995
  input_sentence_size: 172600
  shuffle_input_sentence: 1
  seed_sentencepiece_size: 1000000
  shrinking_factor: 0.75
  max_sentence_length: 16384
  num_threads: 16
  num_sub_iterations: 2
  max_sentencepiece_length: 16
  split_by_unicode_script: 1
  split_by_number: 1
  split_by_whitespace: 1
  treat_whitespace_as_suffix: 0
  user_defined_symbols: <|n|>
  user_defined_symbols: <|endoftext|>
  hard_vocab_limit: 1
  use_all_vocab: 0
  unk_id: 0
  bos_id: 1
  eos_id: 2
  pad_id: -1
  unk_piece: <unk>
  bos_piece: <s>
  eos_piece: </s>
  pad_piece: <pad>
  unk_surface:  ⁇
}
NormalizerSpec {
  name: nmt_nfkc
  add_dummy_prefix: 1
  remove_extra_whitespaces: 1
  escape_whitespaces: 1
  normalization_rule_tsv:
}

trainer_interface.cc(267) LOG(INFO) Loading corpus: /home/psharma/gpt-2_fork/src/dataset.txt
trainer_interface.cc(315) LOG(INFO) Loaded all 3100 sentences
trainer_interface.cc(330) LOG(INFO) Adding meta_piece: <unk>
trainer_interface.cc(330) LOG(INFO) Adding meta_piece: <s>
trainer_interface.cc(330) LOG(INFO) Adding meta_piece: </s>
trainer_interface.cc(330) LOG(INFO) Adding meta_piece: <|n|>
trainer_interface.cc(330) LOG(INFO) Adding meta_piece: <|endoftext|>
trainer_interface.cc(335) LOG(INFO) Normalizing sentences...
trainer_interface.cc(384) LOG(INFO) all chars count=95595
trainer_interface.cc(392) LOG(INFO) Done: 99.954% characters are covered.
trainer_interface.cc(402) LOG(INFO) Alphabet size=40
trainer_interface.cc(403) LOG(INFO) Final character coverage=0.99954
trainer_interface.cc(435) LOG(INFO) Done! preprocessed 3100 sentences.
trainer_interface.cc(441) LOG(INFO) Tokenizing input sentences with whitespace: 3100
trainer_interface.cc(451) LOG(INFO) Done! 522
bpe_model_trainer.cc(166) LOG(INFO) Updating active symbols. max_freq=1863 min_freq=1
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=686 size=20 all=749 active=708 piece=▁b
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=456 size=40 all=905 active=864 piece=lap
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=309 size=60 all=991 active=950 piece=ec
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=240 size=80 all=1065 active=1024 piece=ot
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=189 size=100 all=1148 active=1107 piece=▁blu
bpe_model_trainer.cc(166) LOG(INFO) Updating active symbols. max_freq=189 min_freq=0
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=163 size=120 all=1180 active=1032 piece=▁au
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=138 size=140 all=1213 active=1065 piece=ght
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=128 size=160 all=1217 active=1069 piece=▁fit
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=115 size=180 all=1258 active=1110 piece=▁ssd
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=94 size=200 all=1278 active=1130 piece=▁windows
bpe_model_trainer.cc(166) LOG(INFO) Updating active symbols. max_freq=93 min_freq=0
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=92 size=220 all=1289 active=1012 piece=▁touch
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=69 size=240 all=1303 active=1026 piece=im
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=69 size=260 all=1302 active=1025 piece=▁inter
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=54 size=280 all=1327 active=1050 piece=nce
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=48 size=300 all=1363 active=1086 piece=▁qu
bpe_model_trainer.cc(166) LOG(INFO) Updating active symbols. max_freq=48 min_freq=0
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=46 size=320 all=1385 active=1021 piece=olby
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=46 size=340 all=1373 active=1009 piece=▁lightning
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=29 size=360 all=1401 active=1037 piece=ome
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=25 size=380 all=1436 active=1072 piece=di
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=24 size=400 all=1454 active=1090 piece=tro
bpe_model_trainer.cc(166) LOG(INFO) Updating active symbols. max_freq=24 min_freq=0
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=23 size=420 all=1459 active=1003 piece=pu
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=23 size=440 all=1466 active=1010 piece=nty
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=23 size=460 all=1465 active=1009 piece=▁vo
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=23 size=480 all=1470 active=1014 piece=trap
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=23 size=500 all=1465 active=1009 piece=ector
bpe_model_trainer.cc(166) LOG(INFO) Updating active symbols. max_freq=23 min_freq=0
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=23 size=520 all=1459 active=994 piece=▁octa
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=23 size=540 all=1447 active=982 piece=▁drive
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=23 size=560 all=1434 active=969 piece=▁amoled
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=23 size=580 all=1417 active=952 piece=▁printer
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=23 size=600 all=1398 active=933 piece=▁geotagging
bpe_model_trainer.cc(166) LOG(INFO) Updating active symbols. max_freq=23 min_freq=0
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=14 size=620 all=1390 active=993 piece=ite
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=9 size=640 all=1404 active=1006 piece=urity
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=7 size=660 all=1412 active=1014 piece=▁baby
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=5 size=680 all=1410 active=1012 piece=vo
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=4 size=700 all=1402 active=1004 piece=ik
bpe_model_trainer.cc(166) LOG(INFO) Updating active symbols. max_freq=4 min_freq=0
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=4 size=720 all=1408 active=1005 piece=▁nikon
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=3 size=740 all=1413 active=1010 piece=▁ama
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=2 size=760 all=1411 active=1008 piece=se
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=2 size=780 all=1422 active=1019 piece=mah
bpe_model_trainer.cc(257) LOG(INFO) Added: freq=2 size=800 all=1428 active=1025 piece=omen
bpe_model_trainer.cc(166) LOG(INFO) Updating active symbols. max_freq=2 min_freq=0
trainer_interface.cc(507) LOG(INFO) Saving model: sp.model
trainer_interface.cc(531) LOG(INFO) Saving vocabs: sp.vocab

When running train.py using the following command:

PYTHONPATH=src ./train.py --dataset models/345M/00000001.npz
Traceback (most recent call last):
  File "train.py", line 221, in <module>
    main()
  File "train.py", line 92, in main
    os.path.join(CHECKPOINT_DIR, args.run_name))
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/summary/writer/writer.py", line 367, in __init__
    filename_suffix)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/summary/writer/event_file_writer.py", line 67, in __init__
    gfile.MakeDirs(self._logdir)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/lib/io/file_io.py", line 442, in recursive_create_dir
    recursive_create_dir_v2(dirname)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/lib/io/file_io.py", line 458, in recursive_create_dir_v2
    pywrap_tensorflow.RecursivelyCreateDir(compat.as_bytes(path), status)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/errors_impl.py", line 528, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.FailedPreconditionError: models/345M/checkpoint/run1; Not a directory
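
For anyone hitting the same FailedPreconditionError: my guess (not verified against this fork) is that models/345M/checkpoint already exists as a plain file, namely the 'checkpoint' bookkeeping file that ships with the downloaded model, so gfile.MakeDirs cannot create models/345M/checkpoint/run1 underneath it. A quick check:

    import os

    # If this prints "file", the directory creation has to fail with "Not a directory";
    # pointing the run's checkpoint directory somewhere that does not collide with
    # the model's own 'checkpoint' file should avoid the clash (the exact option
    # for doing so is repo-specific).
    path = 'models/345M/checkpoint'
    print('file' if os.path.isfile(path) else 'dir' if os.path.isdir(path) else 'missing')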

Error after 100th training epoch

[99 | 3832.66] loss=0.5202 avg=1.9767
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1334, in _do_call
    return fn(*args)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,0] = 1024 is not in [0, 1024)
         [[{{node sample_sequence/while/model/GatherV2_1}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./train.py", line 221, in <module>
    main()
  File "./train.py", line 188, in main
    generate_samples()
  File "./train.py", line 160, in generate_samples
    feed_dict={context: args.batch_size * [context_tokens]})
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 929, in run
    run_metadata_ptr)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1152, in _run
    feed_dict_tensor, options, run_metadata)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1328, in _do_run
    run_metadata)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1348, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,0] = 1024 is not in [0, 1024)
         [[node sample_sequence/while/model/GatherV2_1 (defined at /home/psharma/gpt-2_fork/src/model.py:157) ]]

Caused by op 'sample_sequence/while/model/GatherV2_1', defined at:
  File "./train.py", line 221, in <module>
    main()
  File "./train.py", line 74, in main
    top_k=40)
  File "/home/psharma/gpt-2_fork/src/sample.py", line 76, in sample_sequence
    back_prop=False,
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 3556, in while_loop
    return_same_structure)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 3087, in BuildLoop
    pred, body, original_loop_vars, loop_vars, shape_invariants)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 3022, in _BuildLoop
    body_result = body(*packed_vars_for_body)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 3525, in <lambda>
    body = lambda i, lv: (i + 1, orig_body(*lv))
  File "/home/psharma/gpt-2_fork/src/sample.py", line 50, in body
    next_outputs = step(hparams, prev[:, tf.newaxis], past=past)
  File "/home/psharma/gpt-2_fork/src/sample.py", line 33, in step
    lm_output = model.model(hparams=hparams, X=tokens, past=past, reuse=tf.AUTO_REUSE)
  File "/home/psharma/gpt-2_fork/src/model.py", line 157, in model
    h = tf.gather(wte, X) + tf.gather(wpe, positions_for(X, past_length))
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/dispatch.py", line 180, in wrapper
    return target(*args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/array_ops.py", line 3273, in gather
    return gen_array_ops.gather_v2(params, indices, axis, name=name)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 3748, in gather_v2
    "GatherV2", params=params, indices=indices, axis=axis, name=name)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 3300, in create_op
    op_def=op_def)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1801, in __init__
    self._traceback = tf_stack.extract_stack()

InvalidArgumentError (see above for traceback): indices[0,0] = 1024 is not in [0, 1024)
         [[node sample_sequence/while/model/GatherV2_1 (defined at /home/psharma/gpt-2_fork/src/model.py:157) ]]
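
The failing op is the tf.gather over wpe, the positional-embedding table, so index 1024 means that sampling ran one token past the model's context window of n_ctx = 1024. A hedged sketch of the kind of guard I would put in front of sample generation (the function and parameter names are illustrative, not from this repository):

    def clamp_sample_length(desired_length, context_len, n_ctx=1024):
        # Positions are indexed 0 .. n_ctx-1, so context plus generated tokens
        # must not exceed n_ctx, or tf.gather(wpe, positions) goes out of range.
        max_new = n_ctx - context_len
        if max_new <= 0:
            raise ValueError('the context already fills the whole window')
        return min(desired_length, max_new)

    print(clamp_sample_length(1023, 10))   # 1014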

AttributeError while running train.py

I'm running into trouble while trying to train the gpt-2 model on my own (Dutch) corpus.

I first create the files hparams.json, sp.model and sp.vocab by executing the following command:
!PYTHONPATH=src ./scripts/createspmodel.sh combined_wo_startendtag.txt 64000

Then, after copying those files to models/117M/ I execute the following command:
!PYTHONPATH=src ./train.py --dataset /content/gpt-2/combined_wo_startendtag.txt --model_name '117M' --restore_from 'latest'

However, I'm not having much luck: when encoder.get_encoder(args.model_name) is called, an AttributeError occurs saying that module sentencepiece has no attribute SentencePieceProcessor. Stack trace:

Traceback (most recent call last):
  File "./train.py", line 208, in <module>
    main()
  File "./train.py", line 48, in main
    enc = encoder.get_encoder(args.model_name)
  File "/content/gpt-2/src/encoder_sp.py", line 17, in get_encoder
    return Encoder(os.path.join('models', model_name, 'sp.model'))
  File "/content/gpt-2/src/encoder_sp.py", line 7, in __init__
    self.sp = spm.SentencePieceProcessor()
AttributeError: module 'sentencepiece' has no attribute 'SentencePieceProcessor'

Do you have any idea how to fix this problem? As I don't have a GPU, I'm trying to run this repo on Google Colab (colab.google.com); could this be the reason?
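
For what it's worth, in my experience this particular AttributeError usually means Python imported the wrong 'sentencepiece' (a shadowing local file or a broken/outdated install) rather than anything Colab-specific. A quick sanity check like the one below, plus !pip install sentencepiece and a runtime restart, is what I would try first:

    import sentencepiece as spm

    # Confirm which module was imported and that it exposes SentencePieceProcessor,
    # which is what encoder_sp.py needs.
    print(spm.__file__)                               # should point into site-packages
    print(getattr(spm, '__version__', 'unknown'))
    print(hasattr(spm, 'SentencePieceProcessor'))     # must be True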
