getredash / redash-toolbelt
API client and utilities to manage a Redash instance
License: BSD 2-Clause "Simplified" License
I already published the initial version to PyPI to reserve the name. The steps to publish are:
poetry build
poetry publish
We need to automate this using GitHub Actions.
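The two commands above could run in a workflow like this minimal sketch (the file path, trigger, and secret name are assumptions, not existing project configuration):

```yaml
# .github/workflows/publish.yml -- hypothetical sketch
name: Publish to PyPI
on:
  release:
    types: [published]
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.x"
      - run: pip install poetry
      - run: poetry build
      # PYPI_TOKEN would be an API token stored as a repository secret
      - run: poetry publish --username __token__ --password ${{ secrets.PYPI_TOKEN }}
```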
I have a query where one of the parameters is of type "query based dropdown list". I do not know the JSON format for this parameter.
Please help me!
Here is the example I've followed: https://github.com/getredash/redash-toolbelt/blob/master/redash_toolbelt/examples/refresh_query.py
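For what it's worth (a hedged guess, not verified documentation): a "query based dropdown list" parameter is usually passed the selected option's value as a plain string, the same way a text parameter is. A sketch matching the p_-prefix style used in refresh_query.py (build_params is a made-up helper):

```python
def build_params(raw):
    """Prefix each parameter name with 'p_', as refresh_query.py does."""
    return {"p_{}".format(name): value for name, value in raw.items()}


# For a query-based dropdown, pass the chosen option's value string,
# not a JSON object ("us-east" here is an illustrative value).
params = build_params({
    "region": "us-east",
})
print(params)  # {'p_region': 'us-east'}
```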
I get the error below while migrating users from a Redash v10 source (running via docker-compose on a virtual machine) to a Redash v10 target (on k8s):
importing: 50
importing: 51
Could not create user: probably this user already exists but is not present in meta.json
importing: 52
Could not create user: probably this user already exists but is not present in meta.json
Only 50 users get migrated; after that, the error above appears. I tried again after increasing the environment variable below (which defaults to 50), but that did not work either:
throttleLoginPattern: 1000/hour
Can anyone help with this issue?
When I run redash-migrate data_sources, I get the following error message:
Script could not proceed because of a problem reading meta.json
Please run redash-migrate init before proceeding
And yes, I did run redash-migrate init and it produced a valid-looking meta.json.
I took a peek under the hood, and it turns out this is the exception in the Python script:
invalid literal for int() with base 10: '[email protected]'
This is what my meta.json looked like:
{
"users": {
"[email protected]": {
"id": "[email protected]",
"email": "[email protected]"
}
},
...
}
After changing it to look like this, redash-migrate data_sources worked:
{
"users": {
"1": {
"id": "[email protected]",
"email": "[email protected]"
}
},
...
}
But I'm sure something is not right. Why was my "user" not an integer?
clone dashboard page
Per this comment.
When moving the ORIGIN admin and default groups, we can just copy users into the existing groups on the DESTINATION.
It's time-consuming to scrub data from a large Redash instance (500+ queries). gdpr-scrub is an O(N) operation per email address. We can drop that to approximately O(1) per address by checking for multiple email addresses in the same pass.
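A sketch of the single-pass idea (the function name and the query-dict shape are assumptions based on what the Redash API returns): scan each query once against the whole set of addresses instead of re-scanning per address.

```python
def find_queries_mentioning(queries, emails):
    """Single pass over all queries, checking every address at once.

    `queries` is a list of dicts with 'id' and 'query' keys, as the
    Redash API returns them; `emails` is a list of address strings.
    """
    hits = {email: [] for email in emails}
    for q in queries:
        text = q.get("query") or ""
        for email in hits:
            if email in text:
                hits[email].append(q["id"])
    return hits


# One pass over the queries instead of len(emails) passes:
queries = [{"id": 1, "query": "WHERE email = 'alice@example.com'"}]
matches = find_queries_mentioning(queries, ["alice@example.com", "bob@example.com"])
```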
The function fails when given the slug from get_dashboard, but does work if given the id of the dashboard. This shows up in duplicate_dashboard(): supplying a known-good slug fails.
redash-toolbelt/redash_toolbelt/client.py
Line 83 in f6d2c40
Hi,
When running the script mentioned here: https://github.com/getredash/redash-toolbelt/blob/master/redash_toolbelt/examples/refresh_query.py
I get all the rows I need, but I still have to sort them to match what I see in the UI.
My question is: is there a way to get those results already sorted as I see them in the UI?
And in the Python script, this is what I'm getting (unsorted, unlike my initial query):
Thanks for helping me out.
I executed redash-migrate data_sources to migrate BigQuery data sources from the hosted Redash to self-hosted Redash. I can see the list of data sources on the self-hosted environment, but the connection tests don't work.
It seems that redash-migrate check_data_sources looks fine:
$ redash-migrate check_data_sources
You have entered 7 data sources into meta.json. Your source instance contains 7 data sources.
Check complete. OK
Saving meta...
Expected behavior: the migrated data sources work without re-registering the credentials.
The table names are supposed to be the query ids, but they are left untouched in the SQL string when migrating, so the queries are not in a working state after migration.
I was hoping someone might be able to help me understand what settings were in place on the hosted Redash version that we'd need to specify on our own version when we move from hosted to self-hosted AWS.
I ask because we're running our instance on an AWS EC2 instance, and any query that tries to return too much data crashes the whole instance due to out-of-memory. Apart from moving to Kubernetes or something similar, it would be good to know if there was a setting on hosted that stopped these queries from finishing.
Note we have already set REDASH_SCHEDULED_QUERY_TIME_LIMIT=120 and REDASH_ADHOC_QUERY_TIME_LIMIT=120, and are using instance type r5.4xlarge.
Thanks!
I already wrote this. Just need to clean it up and migrate it in.
I didn't see an example that allows exporting a dashboard (much like export_query), so I made one. I can submit a pull request for it if you want. It can export the full dump from the Redash Python handle, or a condensed version that shows just the key elements and queries while suppressing other details like position information.
dougstarfish is my GitHub account.
Hi, we have successfully migrated to a new server, but the new server is running really slowly. Queries take more time than on app.redash.io.
How can we decrease the run time of queries on the new server?
Thanks
Should give a command to see which DB tables are queried in a Redash instance, by scanning each query's FROM or JOIN statements (including vendor-specific joins like OUTER APPLY in TSQL).
When redash-migrate raises an exception, we catch it so that we can save the meta.json file, then print the exception text to the screen. But the exception text alone is not useful, because it doesn't reveal where the script failed.
I can't migrate queries with redash-migrate queries from the hosted Redash to OSS Redash v10.0.0, even though I was able to migrate data sources, groups, users, and destinations. When I tested migrating the hosted Redash to a local docker-compose of the redash repository, redash-migrate queries worked. However, it doesn't work against the destination instance in Google Kubernetes Engine.
$ redash-migrate queries
Import queries...
Query 172644 - OK - importing
Query 172644 - FAIL - Query creation at destination failed: 403 Client Error: FORBIDDEN for url: https://redash-qa.xxx.dev/api/queries
Query 173527 - OK - importing
Query 173527 - FAIL - Query creation at destination failed: 403 Client Error: FORBIDDEN for url: https://redash-qa.xxx.dev/api/queries
...
In our case, some of the settings were not migrated over.
To be more specific, checkboxes under "Feature Flags" and domains under "Allowed Google Apps Domains" were not migrated.
I don't think this is a big deal, but it might surprise some people.
Instructions here
Since we're planning to deprecate mssql anyway, it makes sense to handle this during migration.
I think the README.md could be more helpful on how to actually make use of the toolbelt.
I cloned the repo, installed poetry, and ran poetry install, but this failed, probably because the lock file is out of date.
Creating virtualenv redash-toolbelt-KQO_4Cgq-py3.7 in /Users/frankie/Library/Caches/pypoetry/virtualenvs
Installing dependencies from lock file
Warning: The lock file is not up to date with the latest changes in pyproject.toml. You may be getting outdated dependencies. Run update to update them.
[SolverProblemError]
Because redash-toolbelt depends on click (^7.0) which doesn't match any versions, version solving failed.
I ran poetry update and then poetry install worked, but it took me a while longer to figure out that I needed to run the script with poetry run python redash_toolbelt/examples/gdpr_scrub.py.
If this is the right path, I think the lock file and the README should be updated.
If I try to get data by date-range parameters like below:
I use the get_fresh_query_result(redash_url, query_id, api_key, params) API and build params like this:
params = {'p_train_id': 'xxx',
'p_depature_date.start': '2020-07-01',
'p_depature_date.end': '2020-07-02'}
but it does not work; the request detail shows this:
SELECT *
FROM train_time_table
WHERE train_id = 'xxx'
AND depature_date >= ''
AND depature_date <= ''
ORDER BY start_time DESC
How can I build the request params for date-range parameters?
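One thing worth trying (an assumption on my part, not verified against your instance): when parameters are posted as a JSON body, Redash usually represents a date-range parameter as a single object with start and end keys, rather than as two dotted '.start'/'.end' keys. A sketch of what that params dict might look like:

```python
# Hypothetical sketch: nest the range under the parameter name
# instead of using dotted 'p_depature_date.start' keys.
params = {
    "p_train_id": "xxx",
    "p_depature_date": {          # the parameter name as defined in the query
        "start": "2020-07-01",
        "end": "2020-07-02",
    },
}
# then: get_fresh_query_result(redash_url, query_id, api_key, params)
```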
The migration script does not carry over visualisation settings for the default table visualisation of a query. Only a stock / default table is displayed.
H/T @smaraf in this comment
Under the hood, this is the full error message including the exception:
Could not create user: probably this user already exists but is not present in meta.json 429 Client Error: TOO MANY REQUESTS for url: http://13.228.231.8//api/users?no_invite=1
importing: 365880
I can see this kind of error was expected and this comment shows how this can be fixed:
#23 (comment)
However, even after setting RATELIMIT_ENABLED=False in Redash (in /opt/redash/env for the AWS AMI), I am still getting rate limited with the same message. What did change, however, is how many calls succeed before getting rate limited: before disabling RATELIMIT_ENABLED, the rate limit would kick in after creating 50 users; after, it kicks in after creating 100 users.
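Independent of the server-side flag, the client could also back off on its own when it sees a 429. A sketch (a made-up helper, not part of redash-toolbelt; it works with any requests-style session object):

```python
import time


def post_with_backoff(session, url, max_retries=5, **kwargs):
    """Retry a POST while the server answers 429 TOO MANY REQUESTS.

    `session` is anything with a requests-style .post() method.
    Honours the Retry-After header when the server sends it,
    otherwise uses a doubling delay.
    """
    delay = 1.0
    response = None
    for _ in range(max_retries):
        response = session.post(url, **kwargs)
        if response.status_code != 429:
            break
        wait = float(response.headers.get("Retry-After", delay))
        time.sleep(wait)
        delay *= 2
    return response
```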
The export-queries script was added to redash-toolbelt from an existing script. It writes metadata to the top of each file in addition to the query text:
It would be nice to write this metadata in a machine-readable format (probably YAML), and to write all of the metadata available, including:
From this gist
The primary users of the forthcoming redash-migrate command are users of Hosted Redash, which includes a data source called csvurl that is not part of the open source version. redash-migrate ignores these data sources and their queries as a result. But we should add a mapper that creates a CSV-type data source. It should also convert the queries like so:
-- Plain text format used by csvurl
https://www.website.com/path/to/file.csv
-- YAML format used by CSV
url: "https://www.website.com/path/to/file.csv"
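Such a mapper could be a one-liner. A sketch (the helper name is assumed; the YAML shape is taken from the example above):

```python
def convert_csvurl_query(text):
    """Wrap a csvurl query body (a bare URL) in the YAML the CSV source reads.

    Hypothetical mapper for redash-migrate; the exact YAML schema of
    the CSV data source is assumed from the conversion example above.
    """
    url = text.strip()
    return 'url: "{}"'.format(url)


print(convert_csvurl_query("https://www.website.com/path/to/file.csv"))
# url: "https://www.website.com/path/to/file.csv"
```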
It's not clear from the CLI that 'no news is good news'.
Hi,
I am using redash-migrate to migrate the Redash settings from an existing instance to another. The users, data sources, and alert destinations migrated successfully, but it gives errors on query and dashboard migration:
Queries error:
Query 1201 - OK - importing
Query 1201 - FAIL - Query creation at destination failed: 403 Client Error: FORBIDDEN for url: https://redash/api/queries
Dashboard error
Importing dashboards...
Dashboard sales - SKIP - Already imported
importing: report
403 Client Error: FORBIDDEN for url: https://redash/api/dashboards
Traceback (most recent call last):
File "/home/ubuntu/.local/lib/python3.6/site-packages/redash_toolbelt/examples/migrate.py", line 1233, in wrapped
func(*args, **kwargs)
File "/home/ubuntu/.local/lib/python3.6/site-packages/redash_toolbelt/examples/migrate.py", line 767, in import_dashboards
new_dashboard = user_client.create_dashboard(d["name"])
File "/home/ubuntu/.local/lib/python3.6/site-packages/redash_toolbelt/client.py", line 91, in create_dashboard
return self._post("api/dashboards", json={"name": name}).json()
File "/home/ubuntu/.local/lib/python3.6/site-packages/redash_toolbelt/client.py", line 206, in _post
return self._request("POST", path, **kwargs)
File "/home/ubuntu/.local/lib/python3.6/site-packages/redash_toolbelt/client.py", line 214, in _request
response.raise_for_status()
File "/home/ubuntu/.local/lib/python3.6/site-packages/requests/models.py", line 960, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: FORBIDDEN for url: https://redash/api/dashboards
Saving meta...
The thing to note here is that only one dashboard and one query were migrated; the rest of the queries and dashboards give errors.
$ uname -a
Linux redash 4.15.0-158-generic #166-Ubuntu SMP Fri Sep 17 19:37:52 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
$ pip install --upgrade redash-toolbelt
Requirement already up-to-date: redash-toolbelt in /usr/local/lib/python3.6/dist-packages
Requirement already up-to-date: requests<3.0.0,>=2.22.0 in /usr/local/lib/python3.6/dist-packages (from redash-toolbelt)
Requirement already up-to-date: click<8.0,>=7.0 in /usr/local/lib/python3.6/dist-packages (from redash-toolbelt)
Requirement already up-to-date: charset-normalizer~=2.0.0; python_version >= "3" in /usr/local/lib/python3.6/dist-packages (from requests<3.0.0,>=2.22.0->redash-toolbelt)
Requirement already up-to-date: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests<3.0.0,>=2.22.0->redash-toolbelt)
Requirement already up-to-date: idna<4,>=2.5; python_version >= "3" in /usr/local/lib/python3.6/dist-packages (from requests<3.0.0,>=2.22.0->redash-toolbelt)
Requirement already up-to-date: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests<3.0.0,>=2.22.0->redash-toolbelt)
$ redash-migrate --version
redash-migrate: command not found
As seen here.
Hi,
I'm trying to migrate all settings of a Redash instance, but when I run the command redash-migrate dashboards, I get this error:
500 Server Error: INTERNAL SERVER ERROR for url: http://localhost:5000/api/dashboards/edifici
Traceback (most recent call last):
  File "C:\.......\examples\migrate.py", line 1233, in wrapped
    func(*args, **kwargs)
  File "C:\.......\examples\migrate.py", line 752, in import_dashboards
    d = orig_client.dashboard(dashboard["slug"])
  File "C:\.......\redash_toolbelt\client.py", line 85, in dashboard
    return self._get("api/dashboards/{}".format(slug)).json()
  File "C:\.......\redash_toolbelt\client.py", line 203, in _get
    return self._request("GET", path, **kwargs)
  File "C:\.......\redash_toolbelt\client.py", line 214, in _request
    response.raise_for_status()
  File "C:\.......\requests\models.py", line 960, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
I run the commands in the order they are shown in the output of the --help option, and I'm using redash-migrate version 0.1.9 with two instances of Redash version 10.1.0.
Can you help me?
Thanks
When I tried to migrate users with redash-migrate users, it doesn't work because a couple of accounts already exist in the destination instance. One is the first user created when setting up the instance. The other two exist in both instances, but they logged in using Google OAuth. The most important problem is that the first user doesn't exist in the source instance, so we can't prepare its account information in meta.json.
$ redash-migrate users
Importing users...
CAUTION: Some users are missing from the meta.json.
[email protected]
[email protected]
[email protected]
ERROR: not all users in meta are present in destination instance.
Saving meta...
This issue only applies after #23 merges. When the migration moves objects from an origin instance to a destination instance, the object IDs are reset. For example, <Query 45321> in the origin might be <Query 6> in the destination. But queries using the QRDS data source depend on the query IDs not changing.
I propose an additional step in the migration script that examines the text of QRDS queries and substitutes the correct query_id.
-- Before
SELECT * FROM query_45321
-- After
SELECT * FROM query_6
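A sketch of that substitution step (the regex and the id-map shape are assumptions; the migration already tracks origin-to-destination ids in meta.json):

```python
import re


def remap_qrds_ids(sql, id_map):
    """Rewrite query_<origin_id> table references to destination ids.

    `id_map` maps origin query ids to destination ids, e.g. {45321: 6}.
    References whose origin id is not in the map are left untouched.
    """
    def substitute(match):
        old_id = int(match.group(1))
        new_id = id_map.get(old_id, old_id)
        return "query_{}".format(new_id)

    return re.sub(r"\bquery_(\d+)\b", substitute, sql)


print(remap_qrds_ids("SELECT * FROM query_45321", {45321: 6}))
# SELECT * FROM query_6
```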
The current script refreshes a dashboard's queries using their respective default parameter values. But if the default value from one query is mapped to the others, then the dashboard will not appear to have updated.
When I executed redash-migrate favorites, these errors appeared:
$ redash-migrate favorites
...
Favorites - OK - Imported 0 queries and 0 dashboards for orig user 364928
Favorites - ERROR - list index out of range
Favorites - ERROR - list index out of range
Favorites - ERROR - list index out of range
Favorites - ERROR - list index out of range
Favorites - OK - Imported 0 queries and 0 dashboards for orig user 365433
...
When I run redash-migrate alerts against a test environment from the hosted Redash, the creator of the migrated alerts is the redash-migrate account, not the original creators. In the case below, the test account I created to run the migration is named redash-admin.
The creators should be the same as those in the original environment.