tensorboard's Introduction

Documentation

TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML-powered applications.

TensorFlow was originally developed by researchers and engineers working within the Machine Intelligence team at Google Brain to conduct research in machine learning and neural networks. However, the framework is versatile enough to be used in other areas as well.

TensorFlow provides stable Python and C++ APIs, as well as a non-guaranteed backward compatible API for other languages.

Keep up-to-date with release announcements and security updates by subscribing to [email protected]. See all the mailing lists.

Install

See the TensorFlow install guide for the pip package, enabling GPU support, using a Docker container, and building from source.

To install the current release, which includes support for CUDA-enabled GPU cards (Ubuntu and Windows):

$ pip install tensorflow

Other devices (DirectX and macOS Metal) are supported via device plugins.

A smaller CPU-only package is also available:

$ pip install tensorflow-cpu

To update TensorFlow to the latest version, add the --upgrade flag to the above commands.

Nightly binaries are available for testing using the tf-nightly and tf-nightly-cpu packages on PyPI.

Try your first TensorFlow program

$ python
>>> import tensorflow as tf
>>> tf.add(1, 2).numpy()
3
>>> hello = tf.constant('Hello, TensorFlow!')
>>> hello.numpy()
b'Hello, TensorFlow!'

For more examples, see the TensorFlow tutorials.

Contribution guidelines

If you want to contribute to TensorFlow, be sure to review the contribution guidelines. This project adheres to TensorFlow's code of conduct. By participating, you are expected to uphold this code.

We use GitHub issues for tracking requests and bugs; please see the TensorFlow Forum for general questions and discussion, and direct specific questions to Stack Overflow.

The TensorFlow project strives to abide by generally accepted best practices in open-source software development.

Patching guidelines

Follow these steps to patch a specific version of TensorFlow, for example, to apply fixes to bugs or security vulnerabilities:

  • Clone the TensorFlow repo and switch to the corresponding branch for your desired TensorFlow version, for example, branch r2.8 for version 2.8.
  • Apply (that is, cherry-pick) the desired changes and resolve any code conflicts.
  • Run TensorFlow tests and ensure they pass.
  • Build the TensorFlow pip package from source.

Continuous build status

You can find more community-supported platforms and configurations in the TensorFlow SIG Build community builds table.

Official Builds

Build Type Status Artifacts
Linux CPU Status PyPI
Linux GPU Status PyPI
Linux XLA Status TBA
macOS Status PyPI
Windows CPU Status PyPI
Windows GPU Status PyPI
Android Status Download
Raspberry Pi 0 and 1 Status Py3
Raspberry Pi 2 and 3 Status Py3
Libtensorflow MacOS CPU Status Temporarily Unavailable Nightly Binary Official GCS
Libtensorflow Linux CPU Status Temporarily Unavailable Nightly Binary Official GCS
Libtensorflow Linux GPU Status Temporarily Unavailable Nightly Binary Official GCS
Libtensorflow Windows CPU Status Temporarily Unavailable Nightly Binary Official GCS
Libtensorflow Windows GPU Status Temporarily Unavailable Nightly Binary Official GCS

Resources

Learn more about the TensorFlow community and how to contribute.

Courses

License

Apache License 2.0

tensorboard's People

Contributors

arcra, benoitsteiner, bileschi, bmd3k, caisq, chihuahua, davidsoergel, dependabot[bot], dsmilkov, erzel, groszewn, gunan, hoonji, jameshollyer, jameswex, japie1235813, jart, martinwicke, nfelt, psybuzz, qiuminxu, renatoutsch, rileyajones, sparkier, stephanwlee, teamdandelion, tensorboard-gardener, tensorflower-gardener, wchargin, yatbear


tensorboard's Issues

[DASHBOARD] Static Variable Display Tab

Migrated from tensorflow/tensorflow#4714

I do a lot of different runs which contain descriptions as well as different meta parameters. It would be nice to have a tab on TensorBoard so I can tell which experiment I am viewing.

My thought would be to simply add static variables from the Graph, which might include strings, ints, floats (and maybe matrices in table format if possible, like PCA parameters or a confusion matrix). These may include a text description, the current learning rate, network meta-parameters, etc. It would be good to simply present the "current" value of these variables, so that if there is a search on discount rates for RL, or variation such as learning rate, those can be viewed. This would be different from simply graphing the learning rate, as that shows the history rather than the "current" value. I think of this as a TensorFlow "dashboard" that I can set up.

Something like:
tf.dashboard_summary(tags, tensor/value, collections=None, name=None)

eg:
tf.dashboard_summary("Description:",description_string_tensor_or_python_string)
tf.dashboard_summary("Learning Rate:",lr_tensor)
tf.dashboard_summary("DiscountRate:",discountRate_python_variable)

In this case, a "DASHBOARD" tab in TensorBoard would contain the three labels above and the tensor/value. If the value is a Python variable, then it should be considered constant and will not change over the graph lifecycle. If it is a tensor/variable, then it should be pulled from the graph at each iteration.

I can help but not sure where to jump in to get this started.
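The "current value" semantics proposed above can be sketched in plain Python (a hypothetical illustration, not an existing TensorFlow API): unlike a time-series summary, each tag keeps only its most recent value.

```python
class DashboardStore:
    """Sketch of the proposed "current value" semantics: each tag
    stores only its latest value, not a history."""

    def __init__(self):
        self._latest = {}

    def record(self, tag, value):
        # Overwrite rather than append: the dashboard shows "current", not history.
        self._latest[tag] = value

    def dashboard(self):
        return dict(self._latest)

store = DashboardStore()
store.record("Learning Rate:", 0.1)
store.record("Learning Rate:", 0.05)  # a later iteration overwrites
store.record("Description:", "RL discount-rate sweep")
store.dashboard()
# -> {'Learning Rate:': 0.05, 'Description:': 'RL discount-rate sweep'}
```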

Make TensorBoard side bar resizable?

This issue had been migrated from tensorflow/tensorflow#1464.

There's often a bunch of unused space on the right side of the TensorBoard interface, and if we have long directory names on the left (for example listing parameters during validation), they end up getting wrapped to multiple lines. Right now it's possible to adjust the sidebar width with Chrome's developer tools, but it seems natural for it to be resizable in the first place.

tensorboard not able to read large event file (~600MB)

This issue had been migrated from tensorflow/tensorflow#836.

I am training a network on 4 GPUs. The event file is too large to be parsed by tensorboard. How can I increase the limit?

WARNING:tensorflow:IOError [Errno 2] No such file or directory: '/home/pc/anaconda2/lib/python2.7/site-packages/tensorflow/tensorboard/TAG' on path /home/pc/anaconda2/lib/python2.7/site-packages/tensorflow/tensorboard/TAG
WARNING:tensorflow:Unable to read TensorBoard tag
Starting TensorBoard on port 6006
(You can navigate to http://0.0.0.0:6006)
[libprotobuf ERROR google/protobuf/src/google/protobuf/io/coded_stream.cc:207] A protocol message was rejected because it was too big (more than 67108864 bytes). To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
Exception in thread Thread-2:
Traceback (most recent call last):
File "/home/pc/anaconda2/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/home/pc/anaconda2/lib/python2.7/threading.py", line 1073, in run
self.function(*self.args, **self.kwargs)
File "/home/pc/anaconda2/lib/python2.7/site-packages/tensorflow/python/summary/event_accumulator.py", line 242, in Update
self.Reload()
File "/home/pc/anaconda2/lib/python2.7/site-packages/tensorflow/python/summary/event_accumulator.py", line 175, in Reload
for event in self._generator.Load():
File "/home/pc/anaconda2/lib/python2.7/site-packages/tensorflow/python/summary/impl/directory_watcher.py", line 82, in Load
for event in self._loader.Load():
File "/home/pc/anaconda2/lib/python2.7/site-packages/tensorflow/python/summary/impl/event_file_loader.py", line 53, in Load
event.ParseFromString(self._reader.record())
DecodeError: Error parsing message

Support SVGs on the image dashboard

Migrated from tensorflow/tensorflow#10207.

@thinxer says:

TensorBoard supports image summaries, which is great for visualization. However, TensorFlow doesn't have rich operators for drawing, and people have to rely on Python libraries like PIL or cv2 for rendering. Those libraries typically use software rendering and are not very fast.

Considering TensorBoard is a web app, and browsers already have rich support for rendering, why don't we offload the rendering to browsers? For example, SVG is a simple XML format, which supports vector graphics, text and embedded bitmap images. Outputting an SVG seems to be a fairly easy job with some text templates.
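As a toy illustration of the text-templating idea (a hypothetical helper, not a TensorBoard API), an SVG scatter plot is just string assembly, with no raster library involved:

```python
def scatter_svg(points, size=100):
    # Build a minimal SVG document by text templating; the browser,
    # not PIL or cv2, does the actual rendering.
    circles = "".join(
        '<circle cx="{}" cy="{}" r="2"/>'.format(x, y) for x, y in points
    )
    return (
        '<svg xmlns="http://www.w3.org/2000/svg" '
        'width="{0}" height="{0}">{1}</svg>'
    ).format(size, circles)

scatter_svg([(10, 20), (40, 80)])
```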

TensorBoard feature request: Search for Node.

This issue had been migrated from tensorflow/tensorflow#2730.

I have been trying to retrain the inception v3 network, and TensorBoard has been the only real way to introspect the network and its implementation so far.

It would be very useful to be able to find nodes in the graph, by name or other properties. E.g. in my case I am interested in seeing the total_loss node that is being defined here: https://github.com/tensorflow/models/blob/master/inception/inception/inception_train.py#L120 To get a greater idea of what it is linked to.

FR: support for x-axis semantics other than "step"

This issue had been migrated from tensorflow/tensorflow#10292.

Often I log scalar summaries at intervals that are meaningful in their own right — e.g. at epoch or mini-batch boundaries — and report this, rather than the global step, as the x-value with something like

some_writer.add_summary(a_value, epoch_number)

or

some_writer.add_summary(a_value, batch_number)

But the API has no way of indicating that these x-values are not "steps", and reports them in the TensorBoard UI as "Steps", which they are not.

It would be nice to have a way of changing the "Steps" label to some other string (here, for example "Epoch" or "Batch").
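What the request amounts to can be sketched as a plain-Python data model (hypothetical, not a TensorFlow API): each series carries its own x-axis label instead of a hardcoded "Steps".

```python
class AxisAwareLog:
    # Each tag stores an x-axis label alongside its (x, value) points,
    # so a UI could label the axis "Epoch" or "Batch" instead of "Steps".
    def __init__(self):
        self.series = {}

    def add(self, tag, value, x, x_label="Step"):
        entry = self.series.setdefault(tag, {"x_label": x_label, "points": []})
        entry["points"].append((x, value))

log = AxisAwareLog()
log.add("loss", 0.9, 1, x_label="Epoch")
log.add("loss", 0.4, 2, x_label="Epoch")
log.series["loss"]
# -> {'x_label': 'Epoch', 'points': [(1, 0.9), (2, 0.4)]}
```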

Tensorboard has wrong logdir on Windows

This issue had been migrated from tensorflow/tensorflow#6313. The thread mentions a workaround, but the root problem of unsupported logdir paths on windows is unresolved.

I use Tensorflow 0.12rc1 on Windows. "Tensorboard" has problems parsing the logdir. My log-directory is at E:\tmp\tflogs. When I start tensorboard from the windows shell (cmd.exe) with:

tensorboard --logdir=E:\tmp\tflogs

it depends on the current directory / drive if it works or not (i.e., if tensorboard finds the runs and shows the data). If the current directory is, e.g., on the C:\ drive, tensorboard tries to add runs from c:\tmp\tflogs instead of E:\tmp\tflogs.

Example:
C:\Users\wrammer>tensorboard --logdir=E:\tmp\tflearn_logs --debug
The log contains lines such as:
INFO:tensorflow:Starting AddRunsFromDirectory: C:\tmp\tflearn_logs

TensorBoard Embeddings fail if a "+" character is in LOG_DIR subdirectory filename

@austinstover said seventeen days ago in tensorflow/tensorflow#10301:

System information

  • Occurs in Tutorial: DandelionMane's TF Dev Summit 2017 TensorBoard Tutorial
    • If learning_rate >= 1 (not that you would want to do that)
  • OS Platform: Windows 10
  • TensorFlow install: Binary
  • TensorFlow version: 1.1.0
  • Exact command to reproduce: Use a plus sign in the filename of one of LOG_DIR's subdirectories

Problem

The tensorboard embeddings tab throws an "Error fetching projector config" if a + sign is used in a log_dir subdirectory; that is, if the projector_config.pbtxt is in that subdirectory.
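A plausible mechanism (an assumption on my part; the issue does not confirm it) is URL decoding: a literal "+" in a query string decodes to a space, so a run name containing "+" no longer matches the directory on disk unless it is percent-encoded. Python's standard library shows the round-trip:

```python
from urllib.parse import quote, unquote_plus

run = "run_lr+0.5"
unquote_plus(run)         # '+' in a query string decodes to a space: 'run_lr 0.5'
quote(run)                # percent-encoding escapes it: 'run_lr%2B0.5'
unquote_plus(quote(run))  # round-trips cleanly: 'run_lr+0.5'
```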

Hide empty panes in TensorBoard

Migrated from tensorflow/tensorflow#6768 (there is some addl. discussion there)

When filtering runs, empty panes show up if a hidden run has a summary in a scope that the visible runs don't have. This makes TensorBoard very messy when comparing different machine learning models with different variable scopes, and so on. Could an option to hide empty panes be added?

Example (note the empty LSTM scope):
image

Backslash-separated logdir paths have issues on Windows

Migrated from tensorflow/tensorflow#9854.

@DarrellThomas says:

[After running tensorboard --logdir='D:\logs_dt'] I'm not seeing anything on the tensorboard at all. I can see where the log director(ies) is/are created by the tutorial script, and the subdirectories /test and /train are there with the event data present. I point the tensorboard to the populated log directory with the following command, but it cannot see the event files. Nothing is present in tensorboard, and I'm redirected back to the tutorials.

Allow programmatically configuring the graph visualization

Migrated from tensorflow/tensorflow#5305.

@eamartin says:

I'm repeatedly making small tweaks to graphs and visualizing them in TensorBoard.
Here are two feature requests for TensorBoard that would make this workflow significantly smoother:

(1)
Allow the user to specify that a node should always be drawn in the main graph or as an auxiliary node. Alternatively, add an option to draw all nodes in main graph.

Currently, TensorBoard chooses to put nodes in a subgraph as auxiliary nodes (within the subgraph). Adding a node to the main graph consists of "open subgraph, click auxiliary node to add to main graph", which needs to be repeated for each auxiliary node to move because "add to main graph" collapses all subgraphs. This pain could also be partially alleviated by an option to "Shift + Click" to add/remove multiple nodes from the main graph at once.

(2)
Allow the user to force edges to be drawn when both nodes are in the main graph.

Example case:
Let op X be in the top level graph. Let SG be a subgraph and let SG contain 10 subgraphs F_i for i=1,...,10 that all take X as input.
TensorBoard will draw an edge from X to SG, but each F_i will have its dependence on X noted as an auxiliary input. There is no way to add X to the main graph because it is already in the main graph (but visualized as an aux input).
It would be nice to have one edge from X to SG, and then 10 edges connecting the entry point of X into SG with each of the F_i. Ideally this option would also have a programmatic interface, or at least a default that can be set.

Re-enable explicit control of summary tag

Migrated from tensorflow/tensorflow#6150

The basic issue is:
It used to be possible to precisely control the display name (or tag) of summary data in TensorBoard, because users had direct control over the tag field.

However, this meant that summary code ignored tf name scopes when generating summaries and tags. This was frustrating because it meant the following code was broken:

def layer(input):
  W = get_weights_variable(somehow)
  b = get_bias_variable(somehow)
  activation = tf.nn.relu(tf.matmul(input, W) + b)
  s = tf.summary.histogram("activation_histogram", activation)
  return activation

input = tf.placeholder(...)
layer1 = layer(input)
layer2 = layer(layer1)
...
merged = tf.summary.merge_all()
....

This code would be broken because the tag "activation_histogram" would be used twice, creating a conflict.

To fix this, the summary module (which was added in TF 1.0) removes explicit control of the tag from the user. Instead, the first argument to the summary is the "name", and the generated node_name is used as the summary tag. As discussed in the linked issue, however, this creates its own set of frustrations and headaches.

I think a robust solution would be to add a display_name argument to summaries, which is used to create a SummaryMetadata that TensorBoard parses. Then, TensorBoard can use the display_name as the default name in tb, and fall back to the tag in case of conflicts.

I'm not sure how to implement this in a way that generalizes to the plugin system, though. I don't want to fix it just for individual plugins. We also need to continue to support the case where users manually generate the summary proto with an explicit tag.

@jart @wchargin @chihuahua thoughts?
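The proposed fallback rule can be sketched independently of the summary system (a hypothetical helper, not an existing API): prefer each summary's display_name, and fall back to the unique tag only when display names collide.

```python
from collections import Counter

def resolve_display_names(summaries):
    """summaries: list of (display_name, tag) pairs.
    Use display_name when it is unique; otherwise fall back to the tag."""
    counts = Counter(name for name, _ in summaries)
    return [name if counts[name] == 1 else tag for name, tag in summaries]

resolve_display_names([
    ("activation_histogram", "layer1/activation_histogram"),
    ("activation_histogram", "layer2/activation_histogram"),
    ("loss", "loss"),
])
# -> ['layer1/activation_histogram', 'layer2/activation_histogram', 'loss']
```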

Feature Request: Add an optional command-line argument for a prefix URL

(I've migrated this issue from tensorflow/tensorflow#10655. It was originally posted by @mebersole. It had four thumbs up emoji there.)

Would like to add a command-line argument that allows TensorBoard to run at a different URL location than the root domain. So for example:

  • command-line: --base_url runhere
  • URL location: http://<address>:6006/runhere/

Motivation for request: There are locations where only minimal ports are open and it would be great to use nginx (or similar) to route TensorBoard through port 80.
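The feature boils down to stripping a prefix before requests reach the app. A minimal WSGI-style sketch (hypothetical; not how TensorBoard implements or would implement it):

```python
def path_prefix_middleware(app, prefix):
    # Wrap a WSGI app so it appears to live under `prefix`:
    # strip the prefix from PATH_INFO and record it in SCRIPT_NAME.
    def wrapped(environ, start_response):
        path = environ.get("PATH_INFO", "")
        if path.startswith(prefix):
            environ["PATH_INFO"] = path[len(prefix):] or "/"
            environ["SCRIPT_NAME"] = prefix
        return app(environ, start_response)
    return wrapped

# Toy app that echoes the path it sees:
def echo_app(environ, start_response):
    return [environ["PATH_INFO"].encode()]

app = path_prefix_middleware(echo_app, "/runhere")
app({"PATH_INFO": "/runhere/data/runs"}, None)  # -> [b'/data/runs']
```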

Compress audio summaries with OGG or similar

Migrated from tensorflow/tensorflow#4207.

@carlthome says:

When logging an AudioSummary every couple of epochs, the logs grow quickly (as in several gigabytes), and checking TensorBoard remotely (as in not on the same LAN) also becomes very slow. Would it be possible to have builtin support for Ogg compression or an equivalent lossy format? There are multiple use cases where this would be beneficial.

Polarized gradient in tensorboard embeddings

This issue had been migrated from tensorflow/tensorflow#10105.

Currently, default tensorboard color gradient goes from light yellow to dark blue (see screenshot), which works great for highlighting positive values (yellow dots become nearly invisible).

But it doesn't work well for displaying, let's say, sentiment scales (going from -1 to +1). The ability to choose a gradient going from red to blue would be helpful for visualizing polarized scales.

Warn on the frontend when the backend is not finished loading summaries

Migrated from tensorflow/tensorflow#1405.

@bernardopires says:

Since I started using TensorBoard, I noticed some odd behavior: sometimes I'd open it and it seemed like a lot of the summary values were missing, i.e., even though it's already at 10k steps, the summary only shows 2k steps or so. Today I finally noticed that it simply meant that it was still being loaded. I refreshed the page after a minute or so and all data was there. My PC is not bad (i5-2500k), so this was quite surprising. Any reason why the summaries sometimes take so long to load? Also, can we get some sort of warning that not all summaries have been completely loaded yet?

load data from relative path with the projector plugin

Migrated from tensorflow/tensorflow#8342

This is a problem similar to issue #7382, but that issue has been closed.

@dandelionmane
Another place that still has a problem similar to #7382 is the request for projector plugin data in the Embeddings panel; e.g. /data/plugin/projector/runs is still requested via an absolute path.
I found the code here; there is a leading slash in the route-prefix property in vz-projector-dashboard.
If that is correct, I will submit a PR. Thanks!

Another question is when TensorBoard will be recompiled, as these features are not yet included in version 1.0.

Audio thumbnails/summaries in the embedding visualization

Migrated from tensorflow/tensorflow#7106

We (@PFCM and I) have been working with audio data (bird songs) and would like a way to inspect (via listening) embeddings of small chunks of audio files. The main use would be to help us to validate and explore our data. In our setting we have few labels, so we are especially interested in using it to inspect clusters and validate (with humans) their accuracy/reliability/clusteryness.

We would like to know:

  • Are you guys planning on adding this sort of feature to tensorflow? If so, can we help to speed it up?
  • If we were to do it, how would you recommend we implement it so that it is supported in the future?

We would like audio summaries to work with the embedding visualiser. Much like there are thumbnails of each mnist image (see link), we would like to be able to click on an embedded datapoint; to listen to it (currently we are just using the spectrograms as image thumbnails) and to figure out where it came from (which part of which audio recording).

Make TensorBoard aware of hyperparameters

Migrated from tensorflow/tensorflow#7708.

@mmuneebs says:

I am wondering if it's possible to store the arguments that we pass to the training script as part of the other summaries, and have a way to see those arguments/parameters as notes in a separate tab in TensorBoard.
Reason: it's often hard to track individual training runs and relate them to the training/network config if you change training and network parameters frequently, which always happens during hyperparameter search. We can do it manually too, but if included in TensorBoard, it would make it the one go-to visualizer and comparison tool for everything.
One method of doing this that comes to mind is using the tf.app.flags.FLAGS arguments. This would keep everything standardized. Or we could also support argparse directly.
Is this something in line with TensorBoard's philosophy, or is it too straightforward to be a special feature?
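One manual workaround along these lines (a sketch; it assumes a text-summary mechanism such as tf.summary.text is available to display Markdown): render the run's flags as a Markdown table and log it once at startup.

```python
def hparams_markdown(hparams):
    # Render a dict of hyperparameters as a Markdown table string,
    # suitable for logging once per run via a text summary.
    lines = ["| parameter | value |", "| --- | --- |"]
    for key in sorted(hparams):
        lines.append("| {} | {} |".format(key, hparams[key]))
    return "\n".join(lines)

print(hparams_markdown({"learning_rate": 0.01, "batch_size": 32, "optimizer": "adam"}))
```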

TensorBoard doesn't respect drive names on Windows

Migrated from tensorflow/tensorflow#7856

@enpasos

OS: Windows 10 (64 bit)
TensorFlow Version: 1.0.0
CUDA: 8.0
GPU: yes

Problem:
C:>tensorboard --logdir=E:\tmp\tensorflow\mnist\logs
=> tensorboard starts without loading data (not working and difficult to detect reason)

E:>tensorboard --logdir=E:\tmp\tensorflow\mnist\logs
=> tensorboard starts with loading data (works perfectly)

@mrry

Unfortunately TensorBoard uses a colon as the separator between the optional "run name" and the path in the --logdir flag. That means that --logdir=E:\tmp\tensorflow\mnist\logs is interpreted as a run named E with path \tmp\tensorflow\mnist\logs, which explains why it's sensitive to the current working directory.

As a quick workaround, you could avoid this problem by always specifying an explicit run name as part of the --logdir flag, e.g.:

tensorboard --logdir=training:E:\tmp\tensorflow\mnist\logs

(Reassigning to @dandelionmane, who shows up on the blame for the relevant code.)
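The failure mode can be illustrated with a simplified sketch of the colon-splitting rule @mrry describes (not the actual TensorBoard source):

```python
def parse_logdir_spec(spec):
    # TensorBoard-style parsing (simplified): text before the first
    # colon is treated as an optional run name.
    if ":" in spec:
        run_name, _, path = spec.partition(":")
        return run_name, path
    return None, spec

parse_logdir_spec(r"E:\tmp\tensorflow\mnist\logs")
# -> ('E', '\\tmp\\tensorflow\\mnist\\logs')   the drive letter is eaten

parse_logdir_spec(r"training:E:\tmp\tensorflow\mnist\logs")
# -> ('training', 'E:\\tmp\\tensorflow\\mnist\\logs')   the workaround keeps the drive
```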

Unexpected categorizer output regarding underscores

Migrated from tensorflow/tensorflow#9053.

The following script reproduces:

import tensorflow as tf

def main():
  k = tf.constant(0)
  tf.summary.scalar('loss', k)
  tf.summary.scalar('loss/regularization', k)
  tf.summary.scalar('loss_extra_stuff', k)
  summ = tf.summary.merge_all()

  sess = tf.Session()
  writer = tf.summary.FileWriter('/tmp/cat2')
  writer.add_graph(sess.graph)
  writer.add_summary(sess.run(summ))
  writer.close()

if __name__ == '__main__':
  main()

In TensorBoard, the categorizer emits two categories called "loss", one with tag "loss" and one with "loss/regularization". This should not happen. It does not occur if the third summary is removed.
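For reference, the expected grouping, sketched as categorization by the text before the first "/" (a simplified illustration, not the actual implementation), would put the three tags into exactly two categories:

```python
from collections import defaultdict

def categorize(tags):
    # Group tags by the path component before the first '/'.
    groups = defaultdict(list)
    for tag in sorted(tags):
        groups[tag.split("/")[0]].append(tag)
    return dict(groups)

categorize(["loss", "loss/regularization", "loss_extra_stuff"])
# -> {'loss': ['loss', 'loss/regularization'], 'loss_extra_stuff': ['loss_extra_stuff']}
```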

Embedding Projector: "Color by" drop-down not showing up on multi-column .tsv

Migrated from tensorflow/tensorflow#9688

Describe the problem

TensorBoard does not display the Color By dropdown menu on multi-columnar data. Label By and Search By display the columns normally.

tensorboard dropdown issue

Source code / logs

Sample of metadata.tsv file:

Name	Genre
(Sandy) Alex G	Alternative/Indie Rock	
*NSYNC	Pop/Rock	
Acollective	Pop/Rock	
Ahmet Özhan	International	
Ahu	Club/Dance	
Alex Ferreira	Pop/Rock	
Alex Winston	Pop/Rock	
Ali Azimi	Pop/Rock	
Alphamama	Pop/Rock	
Amaryllis	International	
...
Yomo Toro	Latin
Youssou N'Dour	International
Zafra Negra	Latin
Zany	Electronic	
Zeki Müren	International
iSHi	Electronic	

Code to generate embeddings and metadata:

def list_to_tsv(filenames, metadata_dir):
    with open(os.path.join(metadata_dir,'metadata.tsv'), 'w') as tsvfile:
        writer = csv.writer(tsvfile, delimiter = "\t")
        for record in filenames:
            writer.writerow(record)

def save_down_tensors(tensor_dir, metadata_dir, name_and_embedding):
    embeddings = [i[2] for i in name_and_embedding] 
    names = [[i[0],i[1]] for i in name_and_embedding]
    names.insert(0,['Name','Genre'])
    with tf.Session() as sess:
        embeddings_tf = tf.Variable(np.array(embeddings), name = "embeddings")
        tf.global_variables_initializer().run()
        # save the tensor down
        saver = tf.train.Saver(tf.global_variables())
        saver.save(sess, tensor_dir, 0)
        # associate metadata with the embedding
        summary_writer = tf.summary.FileWriter(tensor_dir)
        config = projector.ProjectorConfig()
        embedding = config.embeddings.add()
        embedding.tensor_name = embeddings_tf.name
        #save filenames to tsv
        list_to_tsv(names, metadata_dir)
        embedding.metadata_path = os.path.join(metadata_dir, "metadata.tsv")
        # save a configuration file that TensorBoard will read during startup.
        projector.visualize_embeddings(summary_writer, config)

Tensorboard Graph visualization crashes with Chrome/Safari

Migrating tensorflow/tensorflow#9681

Describe the problem

I'm running Tensorboard on a server and visualizing the graph on my local machine.

With the attached file that I generated using a custom script, Tensorboard crashes after a while of just moving the graph around or when I try to remove some modules from the main graph (100% crash after 4 modules removed).

More specifically:

  • In Safari only the browser freezes but I can still close it.
  • In Chrome the browser freezes and then the whole OS; I don't even have haptic feedback anymore. On the other hand, TensorBoard is still running on the server and the TensorBoard webpage can be accessed by other computers.

Source code / logs

Here is the log file containing just the graph that reproduces this issue 100% of the time on my end.
events.out.tfevents.1493971052.c941b31be93a.zip

Video summary support

Migrated from tensorflow/tensorflow#3936

Just wondering if there are any plans to add video summaries to TensorBoard?

@falcondai offered to implement it as a plugin. @falcondai, I think we are almost at the point where we're ready to accept a plugin like that. Give us a few more weeks to clean up the plugin API and get some good examples you can work off of.

[Windows] TensorBoard doesn't work

This issue is migrated from tensorflow/tensorflow#5983.

I installed the tensorflow for windows with
pip install --upgrade --ignore-installed https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-0.12.0rc0-cp35-cp35m-win_amd64.whl
(I added '--ignore-installed' because the official package didn't work on my PC.)

Then I tried some code and ran TensorBoard; however, the browser shows nothing.
When running with '--debug', some 404 errors appear.
I also tried TensorBoard on macOS, where it worked well, so the problem is not caused by the .tfevent file.
I've read through a bunch of related issues and found that maybe pip did not install TensorFlow completely. I searched the TensorFlow files and did not find certain files that appear in the macOS install, such as 'paper-toolkit'.

tensorboard server: base_url support

This issue had been migrated from tensorflow/tensorflow#2602.

What have you tried?

Running the TensorBoard service through a proxy server that maps a subdirectory to the TensorBoard server's root.

Desired outcome

By providing a way to specify a base_url, the server would understand how to treat incoming requests in such a situation, e.g. mapping http://server/subdir/tensorboard/ to the server's root.

Embedding projector forces square aspect ratio

Migrated from tensorflow/tensorflow#8610.

@jongchyisu says:

I tried to use tensorboard embedding visualization with sprite images.
It works well except for two things:

  1. Even though I set the image width and height as:
    embedding.sprite.single_image_dim.extend([my_width, my_height])
    and the projector_config.pbtxt file has:
    sprite {
    image_path: "sprite.png"
    single_image_dim: my_width
    single_image_dim: my_height
    }
    TensorBoard resizes the images to be square, though it can get the correct part from the sprite image, except that:
  2. Every last image of each row is incorrect. This happens when I have lots of images (like 200 images of size 70×100).

Seems like it only supports square images?

@appierys says:

I can confirm that indeed Tensorboard resizes the images to be square.
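For reference, locating a non-square cell in a row-major sprite sheet only needs the per-image width and height; a sketch with a hypothetical helper (this is not projector code, just the arithmetic):

```python
def sprite_cell(index, img_w, img_h, sheet_w):
    # Top-left pixel of image `index` in a row-major sprite sheet,
    # honoring non-square cell dimensions img_w x img_h.
    cols = sheet_w // img_w
    row, col = divmod(index, cols)
    return col * img_w, row * img_h

# 200 images of 70x100 packed 10 per row in a 700px-wide sheet:
sprite_cell(0, 70, 100, 700)   # -> (0, 0)
sprite_cell(9, 70, 100, 700)   # -> (630, 0)   last image of the first row
sprite_cell(10, 70, 100, 700)  # -> (0, 100)   first image of the second row
```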

Embedding Projector: some metadata unsearchable

Migrated from tensorflow/tensorflow#8177

There's a lot of additional conversation there.

Potential Bug

I'm using Tensorboard's embedding projector to perform PCA/t-SNE on a dataset of features I've extracted from ~4K MIDI files. I've created a metadata TSV and pointed my Tensorflow application to it as instructed in the documentation. The metadata parses correctly in Tensorboard, however, when I use the inspector on the right-most panel, many of the fields seem to be unsearchable, or at least yield zero results for any valid query. Searching the first 13 fields (columns) works correctly, however 3 of the last fields are unsearchable, specifically artist_terms, artist_mbtags, and artist_location in embedding_logdir/metadata.tsv.

I'm finding this to be the case independent of whether regex mode is enabled or disabled. I can also confirm that these fields are being parsed correctly as I can select a data point and view the values for these problematic fields. I've also tried creating a metadata file with only those three problematic fieldnames and they continue to misbehave in this test as well.

Reproducing

I've attached my logdir complete with a checkpoint and metadata.tsv. To reproduce my results, extract the folder and launch Tensorboard from inside embedding_logdir's parent directory like so:

tensorboard --logdir embedding_logdir

TensorBoard must be run from the parent directory of embedding_logdir so that the relative path I specified for the metadata.tsv file in my TF program resolves correctly.

embedding_logdir.tar.gz
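For reference, the projector reads metadata as a plain tab-separated file whose header row names the columns. A minimal sketch of writing one with the standard library (the row values are illustrative; only the column names are taken from the report):

```python
import csv
import io

# Column names from the report; the row values are made up for illustration.
fields = ["artist_terms", "artist_mbtags", "artist_location"]
rows = [
    {"artist_terms": "jazz", "artist_mbtags": "bebop", "artist_location": "New York"},
    {"artist_terms": "rock", "artist_mbtags": "garage", "artist_location": "Detroit"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fields, delimiter="\t")
writer.writeheader()
writer.writerows(rows)
metadata_tsv = buf.getvalue()  # write this out as embedding_logdir/metadata.tsv
```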

TensorBoard Streaming Mode

Migrated from tensorflow/tensorflow#2603

There's some discussion on the original request, here is the parent comment:

The new TensorBoard refresh button looks great, and will certainly be a bit more civilized than rapidly swapping the horizontal-axis view to update the data. :)

But what if TensorBoard could optionally be updated in real time, whenever a summary writer is flushed? I've written systems before where past data is loaded along with the page and new data is added continuously via a web socket. Would this work in TensorBoard's current architecture?

If my understanding is correct, TensorBoard currently updates itself every 120 seconds by reading the event files in the log directory. Could TensorBoard also open a local socket and bounce summary updates from a connected TensorFlow process to websocket clients?

I was thinking of hacking a streaming system together on my branch, but I figured I should reach out first to see if this is already on the roadmap. If it isn't, is this a feature you might like to merge if I can get it working robustly?
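Short of web sockets, the refresh cycle described here reduces to polling the event file for bytes past a saved offset and handing anything new to a parser or a socket bridge. A minimal sketch of that step (the function name is hypothetical):

```python
import os

def read_new_bytes(path, offset):
    """Return (new_bytes, new_offset) for everything appended past offset.

    A poll loop, or a web-socket bridge like the one proposed above, would
    call this on each tick and push the returned bytes to connected clients.
    """
    size = os.path.getsize(path)
    if size <= offset:
        return b"", offset
    with open(path, "rb") as f:
        f.seek(offset)
        data = f.read(size - offset)
    return data, offset + len(data)
```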

PIP: Don't lock markdown version to 2.2.0

tensorflow/tensorflow#10744

This markdown version pin can cause version conflicts downstream (e.g. if requirements are compiled and resolved by pip-compile). I'm wondering if there's a reason to lock this version. It's also already out of sync with the CI install script (and those --upgrade flags combined with a pinned version don't seem right to me either).
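The fix being requested is to replace the exact pin with a version range in the package's requirements; an illustrative setup.py fragment (the bounds here are examples, not the project's actual constraints):

```python
# Illustrative setup.py fragment: a version range instead of an exact pin
# leaves the resolver room when other packages need a newer markdown.
install_requires = [
    "markdown >= 2.6, < 4.0",  # was: "markdown == 2.2.0"
]
```

Requirement specifiers like this are resolved per PEP 440, so downstream tools such as pip-compile can pick any version inside the range instead of failing on a conflict.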

Document events file format

tensorflow/tensorflow#4181

@malmaud says:

Hello,
In the TensorBoard documentation, I can't find a description of the events file format beyond the fact that it contains Event protobufs. Is it recordio? Are there any tools for writing event files from third-party TensorFlow clients that aren't using the Python API?

Improve TensorBoard embeddings search

Migrated from tensorflow/tensorflow#10106

Currently, when you make a search query on a custom variable, say a relevance score, the results list displays the relevance scores themselves. This is not always helpful. In my application, I have word embeddings that I would like to filter by a custom variable (a relevance score), but I can't see which word corresponds to each relevance score (see screenshot).

It would be better to have a separate field so that users can define how they would like to see the results of their search query.
[Screenshot: "screen shot 2017-05-22 at 11 54 44"]
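What's being asked for amounts to decoupling the column a query filters on from the column the results display; in sketch form (all names here are hypothetical):

```python
def search_embeddings(rows, filter_col, predicate, display_col):
    """Filter metadata rows on one column but label each hit with another."""
    return [(row[display_col], row[filter_col])
            for row in rows if predicate(row[filter_col])]

# Example: filter word embeddings by relevance score, but display the word.
rows = [
    {"word": "cat", "relevance": 0.9},
    {"word": "dog", "relevance": 0.4},
    {"word": "axolotl", "relevance": 0.8},
]
hits = search_embeddings(rows, "relevance", lambda r: r > 0.5, "word")
```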

Can't load >5 Audio summaries, Broken pipe error

Migrated from tensorflow/tensorflow#8055.

There's a bunch more discussion on the original thread, plus some repro files from @lemonzi.

Possibly related: tensorflow/tensorflow#4207
Setup: OSX 10.10, TF nightly build of 2017-03-02 without GPU, Python 2.7.13. Affects both Chrome 56.0 and Safari 10.0. I think the official 1.0.0 release also had this issue.

My model logs relatively long audio summaries (~3 minutes at 16 kHz) and created around 60 of them overnight. The issue also appears with fewer summaries, at least on Chrome.

In Chrome, the summaries only load upon playback. Only the first five or so that I play will work; after that (even if I pause them) I can't get the others to play unless I restart TensorBoard. In Safari, all of them try to load at once and nothing works.

On the TensorBoard side, I see a bunch of error: [Errno 32] Broken pipe messages, which could mean the browser tries to manage the incoming streams and TensorBoard doesn't know how to defer the transfers (see this post).

Chrome has a limit of 6 concurrent audio downloads. I think in my case the audio keeps the connection open even after I pause it; instead of downloading the whole file it streams it, so the connection never closes and I can't play other files. I may need to restart TensorBoard to clear the browser cache, or maybe TensorBoard itself caps the concurrent streams and they don't close even after a page reload. In Safari, I think it doesn't wait for playback before downloading, so all 60 players start downloading concurrently and only the first few survive.

If it's not a bug in the TensorBoard server, and the summaries can't be prevented from keeping the connection open on pause or stop, a possible solution could be a single global player that loads audio files on demand when a summary is requested, rather than each summary having its own player.
