
fairness-indicators's Introduction


Documentation

TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML-powered applications.

TensorFlow was originally developed by researchers and engineers working within the Machine Intelligence team at Google Brain to conduct research in machine learning and neural networks. However, the framework is versatile enough to be used in other areas as well.

TensorFlow provides stable Python and C++ APIs, as well as a non-guaranteed backward compatible API for other languages.

Keep up-to-date with release announcements and security updates by subscribing to announce@tensorflow.org. See all the mailing lists.

Install

See the TensorFlow install guide for the pip package, to enable GPU support, to use a Docker container, and to build from source.

To install the current release, which includes support for CUDA-enabled GPU cards (Ubuntu and Windows):

$ pip install tensorflow

Other devices (DirectX and macOS Metal) are supported using Device plugins.
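
For example, two published device plugins (package names as they appear on PyPI; shown for illustration, not an exhaustive list):

$ pip install tensorflow-metal              # Metal plugin for macOS
$ pip install tensorflow-directml-plugin    # DirectML plugin for Windows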

A smaller CPU-only package is also available:

$ pip install tensorflow-cpu

To update TensorFlow to the latest version, add the --upgrade flag to the above commands.

Nightly binaries are available for testing using the tf-nightly and tf-nightly-cpu packages on PyPI.
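
For example:

$ pip install --upgrade tensorflow   # upgrade to the latest stable release
$ pip install tf-nightly             # nightly build
$ pip install tf-nightly-cpu         # nightly build, CPU-only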

Try your first TensorFlow program

$ python
>>> import tensorflow as tf
>>> tf.add(1, 2).numpy()
3
>>> hello = tf.constant('Hello, TensorFlow!')
>>> hello.numpy()
b'Hello, TensorFlow!'

For more examples, see the TensorFlow tutorials.

Contribution guidelines

If you want to contribute to TensorFlow, be sure to review the contribution guidelines. This project adheres to TensorFlow's code of conduct. By participating, you are expected to uphold this code.

We use GitHub issues for tracking requests and bugs. Please see the TensorFlow Forum for general questions and discussion, and direct specific questions to Stack Overflow.

The TensorFlow project strives to abide by generally accepted best practices in open-source software development.

Patching guidelines

Follow these steps to patch a specific version of TensorFlow, for example, to apply fixes to bugs or security vulnerabilities (a command-line sketch follows the list):

  • Clone the TensorFlow repo and switch to the corresponding branch for your desired TensorFlow version, for example, branch r2.8 for version 2.8.
  • Apply (that is, cherry-pick) the desired changes and resolve any code conflicts.
  • Run TensorFlow tests and ensure they pass.
  • Build the TensorFlow pip package from source.
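
A minimal command-line sketch of these steps, assuming the fix already exists as a commit on master (<commit-sha> is a placeholder) and that Bazel is configured per the build-from-source guide:

$ git clone https://github.com/tensorflow/tensorflow.git
$ cd tensorflow
$ git checkout r2.8                    # branch for TensorFlow 2.8
$ git cherry-pick <commit-sha>         # apply the fix; resolve any conflicts
$ bazel test //tensorflow/python/...   # run (a subset of) the TensorFlow tests
$ ./configure
$ bazel build //tensorflow/tools/pip_package:build_pip_package
$ ./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg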

Continuous build status

You can find more community-supported platforms and configurations in the TensorFlow SIG Build community builds table.

Official Builds

Build Type                | Status                         | Artifacts
Linux CPU                 | Status                         | PyPI
Linux GPU                 | Status                         | PyPI
Linux XLA                 | Status                         | TBA
macOS                     | Status                         | PyPI
Windows CPU               | Status                         | PyPI
Windows GPU               | Status                         | PyPI
Android                   | Status                         | Download
Raspberry Pi 0 and 1      | Status                         | Py3
Raspberry Pi 2 and 3      | Status                         | Py3
Libtensorflow MacOS CPU   | Status Temporarily Unavailable | Nightly Binary, Official GCS
Libtensorflow Linux CPU   | Status Temporarily Unavailable | Nightly Binary, Official GCS
Libtensorflow Linux GPU   | Status Temporarily Unavailable | Nightly Binary, Official GCS
Libtensorflow Windows CPU | Status Temporarily Unavailable | Nightly Binary, Official GCS
Libtensorflow Windows GPU | Status Temporarily Unavailable | Nightly Binary, Official GCS

Resources

Learn more about the TensorFlow community and how to contribute.


License

Apache License 2.0

fairness-indicators's People

Contributors

anirudh161, brills, catherinaxu, chongkong, christinagreer, dhruvesh09, embr, fhuanming, genehwung, jay90099, jindalshivam09, kevinrobinson, kumarpiyush, lamberta, markdaoust, mattdangerw, mdreves, paulgc, postmasters, rchen152, rtg0795, shuklak13, tgreensp, venkat2469, vkarampudi, yashk2810, yilei, zhouhao138


fairness-indicators's Issues

Need help with evaluating model!

Hi, I am new to this. I can successfully train and evaluate my model; now I am wondering how to recompute the same metrics and the performance gap using Fairness Indicators.

My model is something like this:

import tensorflow as tf

def model_func():
    # A small fully connected binary classifier.
    model = tf.keras.models.Sequential([
        tf.keras.layers.Dense(units=14, input_dim=14, activation='relu'),
        tf.keras.layers.Dense(units=28, activation='relu'),
        tf.keras.layers.Dense(units=1, activation='sigmoid'),
    ])

    model.compile(loss='binary_crossentropy', optimizer='adam',
                  metrics=['accuracy'])

    return model

Then I train the model and test it on the test dataset.

# Getting my trained model
model = model_func()

# Training my model
train = model.fit(X_train, y_train, epochs=50, batch_size=10, verbose=1)

Now how do I recompute the same metrics and performance gap using fairness indicators?
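
A minimal sketch of one way to do this with TFMA's analyze_raw_data (hedged: model, X_test, and y_test come from the code above, while group_test and the column names 'label', 'prediction', and 'group' are hypothetical stand-ins for your own data):

import pandas as pd
import tensorflow_model_analysis as tfma
from google.protobuf import text_format

# 1. Collect labels, model scores, and the slicing feature in one DataFrame.
df_eval = pd.DataFrame({
    'label': y_test,
    'prediction': model.predict(X_test).flatten(),  # sigmoid scores in [0, 1]
    'group': group_test,  # sensitive attribute to slice on
})

# 2. Request Fairness Indicators, computed overall and per 'group' slice.
eval_config = text_format.Parse("""
  model_specs { prediction_key: 'prediction' label_key: 'label' }
  metrics_specs {
    metrics { class_name: "FairnessIndicators"
              config: '{"thresholds": [0.5]}' }
  }
  slicing_specs {}
  slicing_specs { feature_keys: 'group' }
""", tfma.EvalConfig())

# 3. Evaluate; per-slice metrics land in eval_result.slicing_metrics.
eval_result = tfma.analyze_raw_data(
    data=df_eval, eval_config=eval_config, output_path='/tmp/fairness_eval')

The result can then be rendered with widget_view.render_fairness_indicator(eval_result), as in the tutorials, to compare the metrics and performance gap across slices.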

Image in Fairness_Indicators_Lineage_Case_Study.ipynb is not publicly accessible

URL with the issue:

https://github.com/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_Lineage_Case_Study.ipynb

Description of issue (what needs changing):

The link to the image on line 899 of the notebook is not publicly accessible; you need to sign in with a Google account to view it.

        "![Type I and Type II errors](http://services.google.com/fh/gumdrop/preview/blogs/type_i_type_ii.png)\n",

Correct links

Link to a publicly accessible type_i_type_ii.png.

AttributeError in Facessd Fairness Indicators Example Colab.ipynb

System information

  • Running "Facessd Fairness Indicators Example Colab.ipynb" on Colab
  • TensorFlow version: 2.2.0-rc3
  • Python version: 3.6.9

Describe the current behavior

Getting the following error when running cell 3 line 2:

AttributeError: module 'tfx_bsl.coders.example_coder' has no attribute 'ExamplesToRecordBatchDecoder' [while running 'DecodeData/BatchSerializedExamplesToArrowTables/BatchDecodeExamples']

Standalone code to reproduce the issue
The error is easily reproduced by running the "Facessd Fairness Indicators Example Colab.ipynb" on Colab.

Other info / logs

IndexError                                Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/sdk_worker.py in get(self, instruction_id, bundle_descriptor_id)
    311       # pop() is threadsafe
--> 312       processor = self.cached_bundle_processors[bundle_descriptor_id].pop()
    313     except IndexError:

IndexError: pop from empty list

During handling of the above exception, another exception occurred:

AttributeError                            Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner._invoke_lifecycle_method()

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnInvoker.invoke_setup()

/usr/local/lib/python3.6/dist-packages/tensorflow_data_validation/utils/batch_util.py in setup(self)
    106   def setup(self):
--> 107     self._decoder = example_coder.ExamplesToRecordBatchDecoder()
    108 

AttributeError: module 'tfx_bsl.coders.example_coder' has no attribute 'ExamplesToRecordBatchDecoder'

During handling of the above exception, another exception occurred:

AttributeError                            Traceback (most recent call last)
<ipython-input-3-31ccd38caa04> in <module>()
      1 data_location = tf.keras.utils.get_file('lfw_dataset.tf', 'https://storage.googleapis.com/facessd_dataset/lfw_dataset.tfrecord')
      2 
----> 3 stats = tfdv.generate_statistics_from_tfrecord(data_location=data_location)
      4 tfdv.visualize_statistics(stats)

/usr/local/lib/python3.6/dist-packages/tensorflow_data_validation/utils/stats_gen_lib.py in generate_statistics_from_tfrecord(data_location, output_path, stats_options, pipeline_options, compression_type)
    113             shard_name_template='',
    114             coder=beam.coders.ProtoCoder(
--> 115                 statistics_pb2.DatasetFeatureStatisticsList)))
    116   return load_statistics(output_path)
    117 

/usr/local/lib/python3.6/dist-packages/apache_beam/pipeline.py in __exit__(self, exc_type, exc_val, exc_tb)
    501   def __exit__(self, exc_type, exc_val, exc_tb):
    502     if not exc_type:
--> 503       self.run().wait_until_finish()
    504 
    505   def visit(self, visitor):

/usr/local/lib/python3.6/dist-packages/apache_beam/pipeline.py in run(self, test_runner_api)
    481       return Pipeline.from_runner_api(
    482           self.to_runner_api(use_fake_coders=True), self.runner,
--> 483           self._options).run(False)
    484 
    485     if self._options.view_as(TypeOptions).runtime_type_check:

/usr/local/lib/python3.6/dist-packages/apache_beam/pipeline.py in run(self, test_runner_api)
    494       finally:
    495         shutil.rmtree(tmpdir)
--> 496     return self.runner.run_pipeline(self, self._options)
    497 
    498   def __enter__(self):

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/direct/direct_runner.py in run_pipeline(self, pipeline, options)
    128       runner = BundleBasedDirectRunner()
    129 
--> 130     return runner.run_pipeline(pipeline, options)
    131 
    132 

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in run_pipeline(self, pipeline, options)
    553 
    554     self._latest_run_result = self.run_via_runner_api(
--> 555         pipeline.to_runner_api(default_environment=self._default_environment))
    556     return self._latest_run_result
    557 

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in run_via_runner_api(self, pipeline_proto)
    563     # TODO(pabloem, BEAM-7514): Create a watermark manager (that has access to
    564     #   the teststream (if any), and all the stages).
--> 565     return self.run_stages(stage_context, stages)
    566 
    567   @contextlib.contextmanager

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in run_stages(self, stage_context, stages)
    704               stage,
    705               pcoll_buffers,
--> 706               stage_context.safe_coders)
    707           metrics_by_stage[stage.name] = stage_results.process_bundle.metrics
    708           monitoring_infos_by_stage[stage.name] = (

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in _run_stage(self, worker_handler_factory, pipeline_components, stage, pcoll_buffers, safe_coders)
   1071         cache_token_generator=cache_token_generator)
   1072 
-> 1073     result, splits = bundle_manager.process_bundle(data_input, data_output)
   1074 
   1075     def input_for(transform_id, input_id):

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in process_bundle(self, inputs, expected_outputs)
   2332 
   2333     with UnboundedThreadPoolExecutor() as executor:
-> 2334       for result, split_result in executor.map(execute, part_inputs):
   2335 
   2336         split_result_list += split_result

/usr/lib/python3.6/concurrent/futures/_base.py in result_iterator()
    584                     # Careful not to keep a reference to the popped future
    585                     if timeout is None:
--> 586                         yield fs.pop().result()
    587                     else:
    588                         yield fs.pop().result(end_time - time.monotonic())

/usr/lib/python3.6/concurrent/futures/_base.py in result(self, timeout)
    430                 raise CancelledError()
    431             elif self._state == FINISHED:
--> 432                 return self.__get_result()
    433             else:
    434                 raise TimeoutError()

/usr/lib/python3.6/concurrent/futures/_base.py in __get_result(self)
    382     def __get_result(self):
    383         if self._exception:
--> 384             raise self._exception
    385         else:
    386             return self._result

/usr/local/lib/python3.6/dist-packages/apache_beam/utils/thread_pool_executor.py in run(self)
     42       # If the future wasn't cancelled, then attempt to execute it.
     43       try:
---> 44         self._future.set_result(self._fn(*self._fn_args, **self._fn_kwargs))
     45       except BaseException as exc:
     46         # Even though Python 2 futures library has #set_exection(),

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in execute(part_map)
   2329           self._registered,
   2330           cache_token_generator=self._cache_token_generator)
-> 2331       return bundle_manager.process_bundle(part_map, expected_outputs)
   2332 
   2333     with UnboundedThreadPoolExecutor() as executor:

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in process_bundle(self, inputs, expected_outputs)
   2243             process_bundle_descriptor_id=self._bundle_descriptor.id,
   2244             cache_tokens=[next(self._cache_token_generator)]))
-> 2245     result_future = self._worker_handler.control_conn.push(process_bundle_req)
   2246 
   2247     split_results = []  # type: List[beam_fn_api_pb2.ProcessBundleSplitResponse]

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in push(self, request)
   1557       self._uid_counter += 1
   1558       request.instruction_id = 'control_%s' % self._uid_counter
-> 1559     response = self.worker.do_instruction(request)
   1560     return ControlFuture(request.instruction_id, response)
   1561 

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/sdk_worker.py in do_instruction(self, request)
    413       # E.g. if register is set, this will call self.register(request.register))
    414       return getattr(self, request_type)(
--> 415           getattr(request, request_type), request.instruction_id)
    416     else:
    417       raise NotImplementedError

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/sdk_worker.py in process_bundle(self, request, instruction_id)
    442     # type: (...) -> beam_fn_api_pb2.InstructionResponse
    443     bundle_processor = self.bundle_processor_cache.get(
--> 444         instruction_id, request.process_bundle_descriptor_id)
    445     try:
    446       with bundle_processor.state_handler.process_instruction_id(

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/sdk_worker.py in get(self, instruction_id, bundle_descriptor_id)
    316           self.state_handler_factory.create_state_handler(
    317               self.fns[bundle_descriptor_id].state_api_service_descriptor),
--> 318           self.data_channel_factory)
    319     self.active_bundle_processors[
    320         instruction_id] = bundle_descriptor_id, processor

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/bundle_processor.py in __init__(self, process_bundle_descriptor, state_handler, data_channel_factory)
    741     self.ops = self.create_execution_tree(self.process_bundle_descriptor)
    742     for op in self.ops.values():
--> 743       op.setup()
    744     self.splitting_lock = threading.Lock()
    745 

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.DoOperation.setup()

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.DoOperation.setup()

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner.setup()

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner._invoke_lifecycle_method()

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner._reraise_augmented()

/usr/local/lib/python3.6/dist-packages/future/utils/__init__.py in raise_with_traceback(exc, traceback)
    417         if traceback == Ellipsis:
    418             _, _, traceback = sys.exc_info()
--> 419         raise exc.with_traceback(traceback)
    420 
    421 else:

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner._invoke_lifecycle_method()

/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnInvoker.invoke_setup()

/usr/local/lib/python3.6/dist-packages/tensorflow_data_validation/utils/batch_util.py in setup(self)
    105 
    106   def setup(self):
--> 107     self._decoder = example_coder.ExamplesToRecordBatchDecoder()
    108 
    109   def process(self, batch: List[bytes]) -> Iterable[pa.Table]:

AttributeError: module 'tfx_bsl.coders.example_coder' has no attribute 'ExamplesToRecordBatchDecoder' [while running 'DecodeData/BatchSerializedExamplesToArrowTables/BatchDecodeExamples']
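
This error usually points to a version mismatch between tfx-bsl and tensorflow_data_validation, which would explain the missing ExamplesToRecordBatchDecoder attribute. As a hedged suggestion (the exact compatible pair depends on the release matrix), reinstalling both at matching versions and restarting the runtime may resolve it:

$ pip install --upgrade tfx-bsl tensorflow-data-validation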

Performance of CelebA constrained model

  • Have I written custom code (as opposed to using stock example code provided): No

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Google Colab

  • Fairness Indicators version: 0.30.1

  • TensorFlow version: 2.5.0

  • Python version: 3.7.10

  • TFMA version: 0.30.0

Hey there,

I have noticed that the constrained model in the CelebA example notebook has a very poor positive rate: 0.1 at the 0.5 threshold.

I expected the constrained model to perform at least as well as the unconstrained one on this metric.
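
For reference, a minimal sketch of how one might inspect the per-slice values behind this observation (eval_result as produced by TFMA; the exact nesting and key names inside the metrics dict are assumptions):

# Iterate over (slice_key, metrics) pairs from the TFMA evaluation.
for slice_key, metrics in eval_result.slicing_metrics:
    print(slice_key)
    print(metrics)  # look for keys like 'fairness_indicators_metrics/positive_rate@0.5'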

WIT not rendering fairness indicators in Vertex Workbench (works in Colab)

I am trying to run Fairness_Indicators_Example_Colab.ipynb in Vertex Workbench (a user-managed notebook), but when I run this cell, it says "Loading widget..." forever and nothing is displayed. I restarted the kernel after the pip installation. TFDV is shown, but not WIT with the fairness indicators. Any suggestions are appreciated.

widget_view.render_fairness_indicator(eval_result=eval_result,
                                      slicing_column=slice_selection,
                                      event_handlers=event_handlers
                                      )
Loading widget...

This problem does NOT happen in Colab.

No fairness indicators widget shown in Jupyterlab

Hi, I am learning to use Fairness Indicators and the What-If Tool in my JupyterLab environment on a SageMaker notebook instance. The witwidget of the What-If Tool is shown successfully, but the Fairness Indicators widget is not visualized; only the text "Loading widget..." is printed. So I am wondering whether the Fairness Indicators visualization is available in my environment. I would appreciate any help.

My python and pip package versions
Python 3.7.12
pip 22.0.4

(Jupyter):
jupyter 1.0.0
jupyter-client 6.1.12
jupyter-console 6.4.3
jupyter-core 4.9.2
jupyterlab 1.2.21
jupyterlab-git 0.11.0
jupyterlab-server 1.2.0
jupyterlab-widgets 1.1.0

(TensorFlow):
TF: 2.6.2
TFDV: 1.3.0
TFMA: 0.34.1
Fairness Indicators version: 0.34.0

(jupyter labextension):
@jupyter-widgets/jupyterlab-manager v1.1.0 enabled OK
@jupyterlab/celltags v0.2.0 enabled OK
@jupyterlab/git v0.11.0 enabled OK
@jupyterlab/toc v2.0.0 enabled OK
nbdime-jupyterlab v1.0.1 enabled OK
sagemaker_examples v0.1.0 enabled OK
sagemaker_session_manager v0.1.0 enabled OK
wit-widget v1.8.1 enabled OK

And here is the sample code I am running.
https://colab.research.google.com/github/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_Example_Colab.ipynb#scrollTo=MfBg1C5NB3X0
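
One observation on the labextension list above: it shows wit-widget but no extension for tensorflow_model_analysis, which would be consistent with WIT rendering while Fairness Indicators does not. A safe first check (standard Jupyter commands, nothing assumed beyond that):

$ jupyter labextension list   # look for a tensorflow_model_analysis entry
$ jupyter nbextension list    # classic-notebook extensions, if applicable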

Cat template

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
  • Fairness Indicators version:
  • Python version:
  • Pip version:

Describe the problem

Provide the exact sequence of commands / steps that you executed before running into the problem

Any other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

Widget doesn't show up in Jupyter Notebook

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): RHEL 7
  • Fairness Indicators version:
fairness-indicators                    0.24.0
tensorboard-plugin-fairness-indicators 0.24.0
  • Python version: 3.7
  • Pip version: 20.2
  • NPM version: [email protected]
  • Jupyter Notebook version:
jupyter                                1.0.0
jupyter-client                         5.2.3
jupyter-console                        6.1.0
jupyter-contrib-core                   0.3.3
jupyter-contrib-nbextensions           0.5.1
jupyter-core                           4.4.0
jupyter-highlight-selected-word        0.2.0
jupyter-latex-envs                     1.4.6
jupyter-nbextensions-configurator      0.4.1
jupyterlab                             2.2.4
jupyterlab-launcher                    0.11.2
jupyterlab-pygments                    0.1.1
jupyterlab-server                      1.2.0

Describe the problem

tfma.addons.fairness.view.widget_view.render_fairness_indicator(eval_result)

does not display any widget in Jupyter Notebook. It shows these errors in the browser console:

manager-base.js:273 Could not instantiate widget
        (anonymous) @ manager-base.js:273
        (anonymous) @ manager-base.js:44
        (anonymous) @ manager-base.js:25
        a @ manager-base.js:17
        Promise.then (async)
        u @ manager-base.js:18
        (anonymous) @ manager-base.js:19
        A @ manager-base.js:15
        t._make_model @ manager-base.js:257
        (anonymous) @ manager-base.js:246
        (anonymous) @ manager-base.js:44
        (anonymous) @ manager-base.js:25
        (anonymous) @ manager-base.js:19
        A @ manager-base.js:15
        t.new_model @ manager-base.js:232
        t.handle_comm_open @ manager-base.js:144
        L @ underscore.js:762
        (anonymous) @ underscore.js:775
        (anonymous) @ underscore.js:122
        (anonymous) @ comm.js:89
        Promise.then (async)
        CommManager.comm_open @ comm.js:85
        i @ jquery.min.js:2
        Kernel._handle_iopub_message @ kernel.js:1223
        Kernel._finish_ws_message @ kernel.js:1015
        (anonymous) @ kernel.js:1006
        Promise.then (async)
        Kernel._handle_ws_message @ kernel.js:1006
        i @ jquery.min.js:2

utils.js:119 Error: Could not create a model.
        at utils.js:119
        (anonymous) @ utils.js:119
        Promise.catch (async)
        t.handle_comm_open @ manager-base.js:149
        L @ underscore.js:762
        (anonymous) @ underscore.js:775
        (anonymous) @ underscore.js:122
        (anonymous) @ comm.js:89
        Promise.then (async)
        CommManager.comm_open @ comm.js:85
        i @ jquery.min.js:2
        Kernel._handle_iopub_message @ kernel.js:1223
        Kernel._finish_ws_message @ kernel.js:1015
        (anonymous) @ kernel.js:1006
        Promise.then (async)
        Kernel._handle_ws_message @ kernel.js:1006
        i @ jquery.min.js:2

2kernel.js:1007 Couldn't process kernel message TypeError: Cannot read property 'FairnessIndicatorModel' of undefined
        at manager.js:153
        (anonymous) @ kernel.js:1007
        Promise.catch (async)
        Kernel._handle_ws_message @ kernel.js:1007
        i @ jquery.min.js:2

manager.js:153 Uncaught (in promise) TypeError: Cannot read property 'FairnessIndicatorModel' of undefined
    at manager.js:153
    (anonymous)	@	manager.js:153
    Promise.then (async)		
    t.register_model	@	manager-base.js:208
    (anonymous)	@	manager-base.js:248
    (anonymous)	@	manager-base.js:44
    (anonymous)	@	manager-base.js:25
    (anonymous)	@	manager-base.js:19
    A	@	manager-base.js:15
    t.new_model	@	manager-base.js:232
    t.handle_comm_open	@	manager-base.js:144
    L	@	underscore.js:762
    (anonymous)	@	underscore.js:775
    (anonymous)	@	underscore.js:122
    (anonymous)	@	comm.js:89
    Promise.then (async)		
    CommManager.comm_open	@	comm.js:85
    i	@	jquery.min.js:2
    Kernel._handle_iopub_message	@	kernel.js:1223
    Kernel._finish_ws_message	@	kernel.js:1015
    (anonymous)	@	kernel.js:1006
    Promise.then (async)		
    Kernel._handle_ws_message	@	kernel.js:1006
    i	@	jquery.min.js:2

Provide the exact sequence of commands / steps that you executed before running into the problem
I was just following an example from the TF docs which uses the bar_pass_prediction.csv dataset. The code below shows how the config is created from the sample data.

# Specify Fairness Indicators in eval_config.
import tensorflow_model_analysis as tfma
from google.protobuf import text_format

eval_config = text_format.Parse("""
  model_specs {
    prediction_key: 'dnn_bar_pass_prediction',
    label_key: 'pass_bar'
  }
  metrics_specs {
    metrics {class_name: "AUC"}
    metrics {
      class_name: "FairnessIndicators"
      config: '{"thresholds": [0.50, 0.90]}'
    }
  }
  slicing_specs {
    feature_keys: 'race1'
  }
  slicing_specs {}
  """, tfma.EvalConfig())

# Run TensorFlow Model Analysis.
eval_result = tfma.analyze_raw_data(
  data=_LSAT_DF,
  eval_config=eval_config,
  output_path=_DATA_ROOT)

Any other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
None
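
For what it's worth, the TFMA docs list notebook-extension steps for classic Jupyter Notebook that must be run before the widget can render (a hedged pointer; matching extension and package versions may also matter):

$ jupyter nbextension enable --py widgetsnbextension
$ jupyter nbextension install --py --symlink tensorflow_model_analysis
$ jupyter nbextension enable --py tensorflow_model_analysis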

Update example and tutorial links in blog posts

Hello! 👋

In looking at some blog posts that are ranked highly when typing "fairness indicators" in search engines (Google AI blog, TF blog), I found that many of the links are broken.

In the blog posts:

[Screenshots from 2020-12-08: the broken example and tutorial links as they appear in the two blog posts]

I submitted a fix for this in the README. This is the same problem as #196, but since the blog posts from December 2019 rank so highly in search engines, it might be good to backport the link fixes and republish those posts too. Thanks! 👍

Widget Not Working in Jupyter Notebook Environment

Hi Dev team,

I am studying Fairness Indicators and trying to use it locally instead of in Colab. When I try to render the widget from the following example, the results do not show up:

event_handlers = {'slice-selected':
                  wit.create_selection_callback(wit_data, DEFAULT_MAX_EXAMPLES)}
widget_view.render_fairness_indicator(eval_result=eval_result,
                                      slicing_column=slice_selection,
                                      event_handlers=event_handlers)
Later I checked the source code and found that when the environment is not Colab, it falls back to an empty class called FairnessIndicatorViewer. Is this intended, or will it be fixed later? Many thanks!
