openxaiproject / pnpxai
Plug and Play XAI: Explain Your AI Models with Ease
Home Page: https://openxaiproject.github.io/pnpxai/
License: Apache License 2.0
We are not providing the target layer as a kwarg; instead, we automatically locate a candidate layer. So I think we have two choices for the next step: `attribute_to_layer_input`, or `relu_attributions` together with `attr_dim_summation`. I think the second one is more desirable.
The robustness test for RAP fails with a memory issue on the GPU server. This is not a matter of robustness in computing time, but I think the tests should cover this issue as well. One idea is to write separate testing code that checks each explainer's robustness with respect to memory. What do you think about this, @enver1323?
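One way to sketch such a memory check, assuming nothing about the pnpxai API: measure the peak heap usage of an explainer call with the standard-library `tracemalloc` and fail if it exceeds a budget. The `fake_explainer` below is a stand-in; a real test would invoke RAP on a model instead.

```python
import tracemalloc


def peak_memory_mb(fn, *args, **kwargs):
    """Run fn and return its peak Python heap usage in MiB."""
    tracemalloc.start()
    try:
        fn(*args, **kwargs)
        _, peak = tracemalloc.get_traced_memory()
    finally:
        tracemalloc.stop()
    return peak / (1024 ** 2)


def test_explainer_memory_budget():
    # Stand-in workload: allocates roughly 8 MiB of list storage.
    # A real test would call the RAP explainer here instead.
    def fake_explainer():
        buf = [0.0] * 1_000_000
        return sum(buf)

    assert peak_memory_mb(fake_explainer) < 64
```

Note that `tracemalloc` only tracks Python-level allocations; for the GPU-side failure described above, the analogous budget check would use `torch.cuda.max_memory_allocated()` instead.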
Evaluation Specification
I've tried to implement an experiment on the quality of metrics over parameter settings. For example, I wanted to check whether `MuFidelity(n_perturbations=20)` returns a value similar to `MuFidelity(n_perturbations=200)`. To do this, I tried replacing the auto-evaluator with my own evaluator (this is not an elegant way; I think we should discuss it as a separate issue) and running it:
proj = Project("test proj")
expr = proj.create_auto_experiment(model=model, data=data)
# replace the auto-evaluator with my own
expr.evaluator = XaiEvaluator(metrics=[
MuFidelity(n_perturbations=20),
MuFidelity(n_perturbations=200),
])
expr.run()
As a result, `run.evaluations` returned only the last metric's values, even though the first metric's values were also calculated. The reason is that the current evaluator collects values by metric name, not by metric index. There are ways to implement such an experiment in the current version, e.g. creating multiple experiments over different parameter settings; however, that is inefficient because it runs the explainer twice.
Currently:
from pnpxai.evaluator.mu_fidelity import MuFidelity
from pnpxai.evaluator.sensitivity import Sensitivity
from pnpxai.evaluator.complexity import Complexity
I think the following is more convenient, flexible, and intuitive:
from pnpxai.evaluator.metrics import MuFidelity, Sensitivity, Complexity
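The flat import path could be provided without moving any code, by adding a re-export module; a sketch assuming the existing module layout (the file path `pnpxai/evaluator/metrics.py` is hypothetical):

```python
# pnpxai/evaluator/metrics.py (hypothetical): re-export the existing
# metric classes so all of them are importable from one module.
from pnpxai.evaluator.mu_fidelity import MuFidelity
from pnpxai.evaluator.sensitivity import Sensitivity
from pnpxai.evaluator.complexity import Complexity

__all__ = ["MuFidelity", "Sensitivity", "Complexity"]
```

This keeps the old per-module paths working while enabling the single-line import above.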
Currently, the experiment inputs request is issued on page load, which delays the initial render. Let's bind this request to the modal window's opening instead. This would allow a quick initial page load.
Draft
UI design in Figma, following the details below:
Main Page Structure
How can we address the Plotly Object?
Suggestion