
dolphie's People

Contributors

charles-001, chenrui333, lefred, ottok, silverlee425, spacentropy


dolphie's Issues

IndexError: tuple index out of range

When I try to connect to an 11.2.2-MariaDB server, I get the following traceback:

IndexError: tuple index out of range

dolphie_traceback.txt

Connecting with the normal MySQL/MariaDB client works, and connecting to other MariaDB and MySQL servers works as well.
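The attached traceback isn't shown inline, but "tuple index out of range" errors on connect frequently come from indexing into a parsed server version string. A hedged sketch of tolerant version parsing — the function name and logic here are illustrative, not Dolphie's actual code:

```python
import re

def parse_version(version_string):
    # Hypothetical sketch: naive indexing into split() results can raise
    # "IndexError: tuple index out of range" on a string like
    # "11.2.2-MariaDB"; a regex that tolerates a missing patch component
    # (and non-numeric suffixes) avoids that.
    match = re.match(r"(\d+)\.(\d+)(?:\.(\d+))?", version_string)
    if not match:
        return (0, 0, 0)
    return tuple(int(part or 0) for part in match.groups())

parse_version("11.2.2-MariaDB")  # (11, 2, 2)
```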

View "sys.innodb_lock_waits" references invalid table(s) or column(s)

Hi!

I am seeing the error in the attachment when trying to connect to a MySQL server with the following version string:

mysql Ver 8.0.34-26 for Linux on x86_64 (Percona Server (GPL), Release '26', Revision '5dfee7bd')

Dolphie 3.1.2 worked, but after upgrading to 3.3.1 this error appeared. I am guessing this is related to the Locks panel added in 3.2.0.

[attachment: failed_to_execute_query]

Installation doesn't work

Hi, I wanted to try Dolphie, but I get an error out of the box on Ubuntu 20.04 / Python 3.8.10:

root@x:~$ dolphie
Traceback (most recent call last):
  File "/usr/local/bin/dolphie", line 5, in <module>
    from dolphie.app import main
  File "/usr/local/lib/python3.8/dist-packages/dolphie/__init__.py", line 16, in <module>
    from dolphie.Modules.MetricManager import MetricManager
  File "/usr/local/lib/python3.8/dist-packages/dolphie/Modules/MetricManager.py", line 192, in <module>
    class MetricData:
  File "/usr/local/lib/python3.8/dist-packages/dolphie/Modules/MetricManager.py", line 194, in MetricData
    color: tuple[int, int, int]
TypeError: 'type' object is not subscriptable

Freshly installed w/ pip

Regards,
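This is a Python version issue rather than a broken install: built-in generics like tuple[int, int, int] only became subscriptable in annotations at runtime with Python 3.9 (PEP 585). A minimal sketch of a 3.8-compatible fix, using typing.Tuple (the class here is a stripped-down stand-in for the real MetricData):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class MetricData:
    # tuple[int, int, int] raises "TypeError: 'type' object is not
    # subscriptable" on Python 3.8; typing.Tuple works on 3.8 and later.
    color: Tuple[int, int, int]

MetricData(color=(84, 239, 174)).color  # (84, 239, 174)
```

Alternatively, adding `from __future__ import annotations` (PEP 563) at the top of MetricManager.py defers annotation evaluation and keeps the `tuple[int, int, int]` spelling working on 3.8.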

Access denied; you need (at least one of) the PROCESS privilege(s) for this operation

#dolphie -V
3.1.3
#mysql -V
mysql Ver 8.0.33-25.1 for Linux on x86_64 (Percona XtraDB Cluster (GPL), Release rel25, Revision 0c56202, WSREP version 26.1.4.3)

#dolphie -u root -h localhost
Error
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Failed to execute query
/* dolphie */
SELECT
NAME,
COUNT
FROM
information_schema.INNODB_METRICS
WHERE
name IN ('adaptive_hash_searches', 'adaptive_hash_searches_btree', 'trx_rseg_history_len')
Access denied; you need (at least one of) the PROCESS privilege(s) for this operation

However, executing the same query with the mysql client works:

#mysql -u root -h localhost
mysql> SELECT
-> NAME,
-> COUNT
-> FROM
-> information_schema.INNODB_METRICS
-> WHERE
-> name IN ('adaptive_hash_searches', 'adaptive_hash_searches_btree', 'trx_rseg_history_len');
+------------------------------+-------+
| NAME                         | COUNT |
+------------------------------+-------+
| trx_rseg_history_len         |     0 |
| adaptive_hash_searches       | 35910 |
| adaptive_hash_searches_btree | 64100 |
+------------------------------+-------+
3 rows in set (0.01 sec)

Thanks,
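One way to debug the discrepancy: the mysql client and Dolphie may resolve to different user@host account entries, so the effective privileges can differ even with the same username. A hypothetical helper that inspects SHOW GRANTS output (INNODB_METRICS needs the PROCESS privilege; the function and its inputs are illustrative):

```python
def has_process_privilege(show_grants_rows):
    # Scan the rows returned by SHOW GRANTS for either an explicit PROCESS
    # grant or a global ALL PRIVILEGES grant, which implies it.
    return any(
        "PROCESS" in grant or "ALL PRIVILEGES ON *.*" in grant
        for grant in show_grants_rows
    )

has_process_privilege(["GRANT PROCESS, SELECT ON *.* TO `monitor`@`%`"])  # True
```

Comparing `SELECT CURRENT_USER();` from both clients would show whether they matched the same account.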

Command 'u' not working for Percona Server 5.6 (Unknown system variable 'default_password_lifetime')

MySQL version: Percona Server 5.6.15-rel63.0
Error: Unknown system variable 'default_password_lifetime'

Failed to execute query

/* dolphie */
SELECT
    u.user AS user,
    total_connections,
    current_connections,
    CONVERT(SUM(sum_rows_affected), UNSIGNED) AS sum_rows_affected,
    CONVERT(SUM(sum_rows_sent), UNSIGNED) AS sum_rows_sent,
    CONVERT(SUM(sum_rows_examined), UNSIGNED) AS sum_rows_examined,
    CONVERT(SUM(sum_created_tmp_disk_tables), UNSIGNED) AS sum_created_tmp_disk_tables,
    CONVERT(SUM(sum_created_tmp_tables), UNSIGNED) AS sum_created_tmp_tables,
    plugin,
    CASE
        WHEN (password_lifetime IS NULL OR password_lifetime = 0) AND @@default_password_lifetime = 0 THEN "N/A"
        ELSE CONCAT(
            CAST(IFNULL(password_lifetime, @@default_password_lifetime) as signed) +
            CAST(DATEDIFF(password_last_changed, NOW()) as signed),
            " days"
        )
    END AS password_expires_in
FROM
    performance_schema.users u
    JOIN performance_schema.events_statements_summary_by_user_by_event_name ess ON u.user = ess.user
    JOIN mysql.user mysql_user ON mysql_user.user = u.user
WHERE
    current_connections != 0
GROUP BY
    user
ORDER BY
    current_connections DESC

Unknown system variable 'default_password_lifetime'
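Since @@default_password_lifetime only exists from MySQL 5.7.4 onward, one fix would be gating the password-expiry column on the server version. A hedged sketch (the function name is illustrative, not Dolphie's actual code):

```python
def supports_default_password_lifetime(version):
    # Hypothetical guard: @@default_password_lifetime was added in MySQL
    # 5.7.4, so the users query should skip the password-expiry CASE
    # expression on 5.6-era servers such as Percona Server 5.6.
    major, minor = (int(part) for part in version.split(".")[:2])
    return (major, minor) >= (5, 7)

supports_default_password_lifetime("5.6.15-rel63.0")  # False
```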

DDL progression - FR

It would be nice to implement a panel showing the progress of any running ALTER statements.

Something inspired by:

select stmt.thread_id, stmt.sql_text, stage.event_name as state,
       stage.work_completed, stage.work_estimated,
       lpad(concat(round(100 * stage.work_completed / stage.work_estimated, 2), "%"), 10, " ") as completed_at,
       lpad(format_pico_time(stmt.timer_wait), 10, " ") as started_ago,
       lpad(format_pico_time(stmt.timer_wait / round(100 * stage.work_completed / stage.work_estimated, 2) * 100),
            10, " ") as estimated_full_time,
       lpad(format_pico_time((stmt.timer_wait / round(100 * stage.work_completed / stage.work_estimated, 2) * 100)
            - stmt.timer_wait), 10, " ") as estimated_remaining_time,
       current_allocated memory
from performance_schema.events_statements_current stmt
inner join sys.memory_by_thread_by_current_bytes mt
        on mt.thread_id = stmt.thread_id
inner join performance_schema.events_stages_current stage
        on stage.thread_id = stmt.thread_id\G

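The query above boils down to simple linear extrapolation; a sketch of the same arithmetic in Python (timer_wait is in picoseconds, and the function name is illustrative):

```python
def alter_progress(work_completed, work_estimated, elapsed_picos):
    # Percent complete from the stage counters, then estimated total and
    # remaining time by linearly extrapolating the statement's elapsed
    # timer_wait (picoseconds) over the completed fraction.
    pct = round(100 * work_completed / work_estimated, 2)
    estimated_total = elapsed_picos / pct * 100
    return pct, estimated_total, estimated_total - elapsed_picos

alter_progress(2500, 10000, 60_000_000_000_000)
# (25.0, 240000000000000.0, 180000000000000.0)
```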
FUNCTION GTID_SUBTRACT does not exist

I can't run it on MariaDB 11.1.2; I get this error:

  Dolphie🐬

  Error
 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  Failed to execute query

  /* dolphie */
          SELECT MAX(`lag`) AS Seconds_Behind_Master
              FROM (
                  SELECT MAX(TIMESTAMPDIFF(SECOND, APPLYING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP, NOW())) AS `lag`
                  FROM performance_schema.replication_applier_status_by_worker

                  UNION

                  SELECT MIN(
                      IF(
                          GTID_SUBTRACT(LAST_QUEUED_TRANSACTION, LAST_APPLIED_TRANSACTION) = '',
                          0,
                          TIMESTAMPDIFF(SECOND, LAST_APPLIED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP, NOW())
                      )
                  ) AS `lag`
                  FROM performance_schema.replication_applier_status_by_worker w
                  JOIN performance_schema.replication_connection_status s ON s.channel_name = w.channel_name
              ) required


  FUNCTION GTID_SUBTRACT does not exist
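GTID_SUBTRACT() is a MySQL function with no MariaDB equivalent (MariaDB's GTID implementation differs), so a fix would likely need a per-flavor code path. A hedged sketch of such a dispatch — the function and return values are illustrative, not Dolphie's actual design:

```python
def replication_lag_strategy(flavor):
    # Hypothetical dispatch: on MariaDB, fall back to Seconds_Behind_Master
    # from SHOW SLAVE STATUS; on MySQL/Percona, keep the
    # performance_schema applier-status query shown above.
    if flavor == "mariadb":
        return "show_slave_status"
    return "replication_applier_status_by_worker"

replication_lag_strategy("mariadb")  # 'show_slave_status'
```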

Add support for Percona XtraDB Cluster

Percona XtraDB Cluster's multi-source replication means that you can write to any node and be sure that the write will be consistent for all nodes in the cluster. https://docs.percona.com/percona-xtradb-cluster/5.7/features/highavailability.html
It doesn't use master-slave replication, so running

show slave status;

returns an empty set, which causes "No data to display! This host is not a replica and has no replicas connected". So I suggest adding a feature that gets the cluster status from wsrep_local_state_comment:

mysql> SHOW STATUS LIKE 'wsrep_local_state_comment';
+---------------------------+--------+
| Variable_name             | Value  |
+---------------------------+--------+
| wsrep_local_state_comment | Synced |
+---------------------------+--------+
1 row in set (0.00 sec)
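The suggestion above could be sketched as folding the wsrep status rows into a dict and surfacing the node state for a PXC/Galera panel (names here are illustrative, not Dolphie's code):

```python
def galera_state(status_rows):
    # Fold (Variable_name, Value) pairs from
    # SHOW STATUS LIKE 'wsrep_%' into a dict and report the node state,
    # instead of relying on SHOW SLAVE STATUS.
    status = dict(status_rows)
    return status.get("wsrep_local_state_comment", "wsrep not enabled")

galera_state([("wsrep_local_state_comment", "Synced")])  # 'Synced'
```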

Error after clicking on the AHI tab (Dolphie v3.1.2)

While doing a really basic test of Dolphie, clicking through the different tabs, I got some errors trying to access the AHI tab.
I tried again but couldn't reproduce the error.

  • Environment
Python version: 3.11

certifi==2023.7.22
cffi==1.15.1
charset-normalizer==3.2.0
cryptography==41.0.3
dolphie==3.1.2
idna==3.4
importlib-metadata==6.8.0
linkify-it-py==2.0.2
markdown-it-py==3.0.0
mdit-py-plugins==0.4.0
mdurl==0.1.2
myloginpath==0.0.4
packaging==23.1
plotext==5.2.8
pycparser==2.21
Pygments==2.16.1
PyMySQL==1.1.0
requests==2.31.0
rich==13.5.2
sqlparse==0.4.4
textual==0.36.0
textual-autocomplete==2.1.0b0
typing_extensions==4.7.1
uc-micro-py==1.0.2
urllib3==2.0.4
zipp==3.16.2
mysql> select version();
+-----------+
| version() |
+-----------+
| 8.0.20    |
+-----------+
1 row in set (0,02 sec)
  • Error
dolphie/venv/lib/python3.11/site-packages/dolphie/app.py:630 in tab_changed                                                      │
│                                                                                                                                                                       │
│   627 │   │   │   return                                                                                                                                              │
│   628 │   │                                                                                                                                                           │
│   629 │   │   metric_instance_name = event.tab.id.split("tab_")[1]                                                                                                    │
│ ❱ 630 │   │   self.update_graphs(metric_instance_name)                                                                                                                │
│   631 │                                                                                                                                                               │
│   632 │   @on(Switch.Changed)                                                                                                                                         │
│   633def switch_changed(self, event: Switch.Changed):                                                                                                            │
│                                                                                                                                                                       │
│ ╭───────────────────────────────────────────────────── locals ──────────────────────────────────────────────────────╮                                                 │
│ │                event = TabActivated(TabbedContent(id='tabbed_content'), ContentTab(id='tab_adaptive_hash_index')) │                                                 │
│ │ metric_instance_name = 'adaptive_hash_index'                                                                      │                                                 │
│ │                 self = DolphieApp(title='Dolphie', classes={'-dark-mode'})                                        │                                                 │
│ ╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯  

dolphie/venv/lib/python3.11/site-packages/dolphie/app.py:658 in update_graphs                                                    │
│                                                                                                                                                                       │
│   655 │   │   for metric_instance in self.dolphie.metric_manager.metrics.__dict__.values():                                                                           │
│   656 │   │   │   if tab_metric_instance_name == metric_instance.tab_name:                                                                                            │
│   657 │   │   │   │   for graph_name in metric_instance.graphs:                                                                                                       │
│ ❱ 658 │   │   │   │   │   self.query_one(f"#{graph_name}").render_graph(metric_instance)                                                                              │
│   659 │   │                                                                                                                                                           │
│   660 │   │   self.update_stats_label(tab_metric_instance_name)                                                                                                       │
│   661                                                                                                                                                                 │
│                                                                                                                                                                       │
│ ╭──────────────────────────────────── locals ────────────────────────────────────╮                                                                                    │
│ │               graph_name = 'graph_adaptive_hash_index_hit_ratio'               │                                                                                    │
│ │          metric_instance = AdaptiveHashIndexHitRatioMetrics(                   │                                                                                    │
│ │                            │   hit_ratio=MetricData(                           │                                                                                    │
│ │                            │   │   label='Hit Ratio',                          │                                                                                    │
│ │                            │   │   color=(84, 239, 174),                       │                                                                                    │
│ │                            │   │   visible=True,                               │                                                                                    │
│ │                            │   │   save_history=True,                          │                                                                                    │
│ │                            │   │   per_second_calculation=False,               │                                                                                    │
│ │                            │   │   last_value=None,                            │                                                                                    │
│ │                            │   │   graphable=True,                             │                                                                                    │
│ │                            │   │   values=[1.9230769230769231]                 │                                                                                    │
│ │                            │   ),                                              │                                                                                    │
│ │                            │   graphs=[                                        │                                                                                    │
│ │                            │   │   'graph_adaptive_hash_index_hit_ratio'       │                                                                                    │
│ │                            │   ],                                              │                                                                                    │
│ │                            │   smoothed_hit_ratio=1.9230769230769231,          │                                                                                    │
│ │                            │   tab_name='adaptive_hash_index',                 │                                                                                    │
│ │                            │   metric_source='none',                           │                                                                                    │
│ │                            │   datetimes=['08/09/23 11:59:52']                 │                                                                                    │
│ │                            )                                                   │                                                                                    │
│ │                     self = DolphieApp(title='Dolphie', classes={'-dark-mode'}) │                                                                                    │
│ │ tab_metric_instance_name = 'adaptive_hash_index'                               │                                                                                    │
│ ╰────────────────────────────────────────────────────────────────────────────────╯ 

dolphie/venv/lib/python3.11/site-packages/dolphie/Modules/MetricManager.py:154 in render_graph                                   │
│                                                                                                                                                                       │
│   151 │   │   if y_tick_interval >= 1:                                                                                                                                │
│   152 │   │   │   y_ticks = [i * y_tick_interval for i in range(max_y_ticks + 1)]                                                                                     │
│   153 │   │   else:                                                                                                                                                   │
│ ❱ 154 │   │   │   y_ticks = [i for i in range(max_y_value + 1)]                                                                                                       │
│   155 │   │                                                                                                                                                           │
│   156 │   │   format_function = get_number_format_function(self.metric_instance)                                                                                      │
│   157 │   │   y_labels = [format_function(val) for val in y_ticks]                                                                                                    │
│                                                                                                                                                                       │
│ ╭─────────────────────────────── locals ────────────────────────────────╮                                                                                             │
│ │     max_y_ticks = 5                                                   │                                                                                             │
│ │     max_y_value = 1.9230769230769231                                  │                                                                                             │
│ │     metric_data = ['08/09/23 11:59:52']                               │                                                                                             │
│ │ metric_instance = AdaptiveHashIndexHitRatioMetrics(                   │                                                                                             │
│ │                   │   hit_ratio=MetricData(                           │                                                                                             │
│ │                   │   │   label='Hit Ratio',                          │                                                                                             │
│ │                   │   │   color=(84, 239, 174),                       │                                                                                             │
│ │                   │   │   visible=True,                               │                                                                                             │
│ │                   │   │   save_history=True,                          │                                                                                             │
│ │                   │   │   per_second_calculation=False,               │                                                                                             │
│ │                   │   │   last_value=None,                            │                                                                                             │
│ │                   │   │   graphable=True,                             │                                                                                             │
│ │                   │   │   values=[1.9230769230769231]                 │                                                                                             │
│ │                   │   ),                                              │                                                                                             │
│ │                   │   graphs=[                                        │                                                                                             │
│ │                   │   │   'graph_adaptive_hash_index_hit_ratio'       │                                                                                             │
│ │                   │   ],                                              │                                                                                             │
│ │                   │   smoothed_hit_ratio=1.9230769230769231,          │                                                                                             │
│ │                   │   tab_name='adaptive_hash_index',                 │                                                                                             │
│ │                   │   metric_source='none',                           │                                                                                             │
│ │                   │   datetimes=['08/09/23 11:59:52']                 │                                                                                             │
│ │                   )                                                   │                                                                                             │
│ │            self = Graph(id='graph_adaptive_hash_index_hit_ratio')     │                                                                                             │
│ │               x = ['08/09/23 11:59:52']                               │                                                                                             │
│ │               y = [1.9230769230769231]                                │                                                                                             │
│ │ y_tick_interval = 0.38461538461538464                                 │                                                                                             │
│ ╰───────────────────────────────────────────────────────────────────────╯                                                                                             │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
TypeError: 'float' object cannot be interpreted as an integer

NOTE: 1 of 2 errors shown. Run with --dev to see all errors.
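The locals in the traceback show the problem: max_y_value is a float (1.923...) and range() only accepts ints. A hedged sketch of the tick logic with a fix, reconstructed from the traceback rather than the actual source:

```python
import math

def make_y_ticks(max_y_value, max_y_ticks=5):
    # When the tick interval is >= 1, spread max_y_ticks evenly; when it
    # drops below 1, range() needs an int bound, so coerce with math.ceil
    # so the top value is still covered (the original code passed the raw
    # float and crashed).
    y_tick_interval = max_y_value / max_y_ticks
    if y_tick_interval >= 1:
        return [i * y_tick_interval for i in range(max_y_ticks + 1)]
    return list(range(math.ceil(max_y_value) + 1))

make_y_ticks(1.9230769230769231)  # [0, 1, 2]
```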

Default config location

Hi,

firstly thanks for dolphie, love it!

Our use case for Dolphie is to deploy it on every MySQL instance in our company. It's better than connecting to MySQL locally on the server: just ssh in and type dolphie for super fast debugging.

However, when I want to deploy Dolphie on every instance via Ansible, there is a problem with the config: I have to copy the Dolphie config into every user's home directory for easy use. I understand why you set the default to ~/.dolphie, but for easy deployment via Ansible, maybe supporting /etc/dolphie.conf (or something like that) would be better?

What do you think? I know it's a slightly different use case than connecting to a MySQL server from a desktop, but in a big environment it is easier.

Thanks.
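A common pattern for this request is a fixed search order: a system-wide file that Ansible drops once per host, then the per-user file as an override. A hedged sketch (the /etc/dolphie.conf path is the proposal above, not an existing Dolphie feature):

```python
import os

def config_search_order(home):
    # Hypothetical search order: system-wide config first, per-user
    # config second; a loader would use the first path that exists,
    # letting later files override earlier ones if both are present.
    return ["/etc/dolphie.conf", os.path.join(home, ".dolphie")]

config_search_order("/root")  # ['/etc/dolphie.conf', '/root/.dolphie']
```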

Feature request: Metadata locks panel

I'm looking to add a metadata locks panel and think this query works well. Does anyone have thoughts on whether this is the right approach?

SELECT
    OBJECT_INSTANCE_BEGIN AS id,
    OBJECT_TYPE,
    OBJECT_SCHEMA,
    OBJECT_NAME,
    LOCK_TYPE,
    LOCK_STATUS,
    SOURCE,
    PROCESSLIST_ID,
    PROCESSLIST_USER,
    PROCESSLIST_TIME,
    PROCESSLIST_INFO
FROM
    `performance_schema`.`metadata_locks` mlb JOIN
    `performance_schema`.`threads` t ON mlb.OWNER_THREAD_ID = t.THREAD_ID
WHERE
    NOT (OBJECT_TYPE = 'TABLE' AND LOCK_STATUS = 'GRANTED') AND
    OBJECT_TYPE != 'COLUMN STATISTICS'
ORDER BY
    PROCESSLIST_TIME DESC

This is what the result looks like:
[screenshot attached: Screenshot 2024-02-22 at 8 14 09 PM]

MariaDB - Unknown system variable 'server_uuid'

Hi. Seems like this should support MariaDB, but on 10.9.4 I get

Error: Failed to connect to database host host.docker.internal - Reason: Unknown system variable 'server_uuid'

It seems MariaDB has no server_uuid variable.

SELECT @@server_uuid;

https://mariadb.com/kb/en/system-variable-differences-between-mariadb-100-and-mysql-56/

I'm unsure if replacing that with server_id would be okay.

I got it to start with:

        # MariaDB workaround: read @@server_id but keep the '@@server_uuid'
        # column name the rest of the code expects
        db_cursor.execute("SELECT @@server_id as '@@server_uuid'")
        data = db_cursor.fetchone()
        c_data["server_uuid"] = str(data["@@server_uuid"])
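Generalizing the workaround above into a flavor-aware query builder might look like this sketch (function name and return strings are illustrative, not Dolphie's API):

```python
def server_uuid_query(is_mariadb):
    # MariaDB has no @@server_uuid, so substitute @@server_id while keeping
    # the same result column name, as in the workaround above.
    if is_mariadb:
        return "SELECT @@server_id AS '@@server_uuid'"
    return "SELECT @@server_uuid"

server_uuid_query(True)  # "SELECT @@server_id AS '@@server_uuid'"
```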

MariaDB issue

Hi Charles,

Should it work on MariaDB? Trying to connect with Dolphie 5.0.0 to MariaDB 10.6, or with 5.0.1 (Docker) to MariaDB 11.2, I get the same error:
KeyError: 'Slave_UUID'

...
│ │                       │   group_replication_container=Container(id='group_replication_container_1'),                                                                         │                                │
│ │                       │   group_replication_grid=Container(id='group_replication_grid_1'),                                                                                   │                                │
│ │                       │   group_replication_title=Label(id='group_replication_title_1'),                                                                                     │                                │
│ │                       │   group_replication_data=Label(id='group_replication_data_1'),                                                                                       │                                │
│ │                       │   replicas_container=Container(id='replicas_container_1'),                                                                                           │                                │
│ │                       │   replicas_grid=Container(id='replicas_grid_1'),                                                                                                     │                                │
│ │                       │   replicas_loading_indicator=LoadingIndicator(id='replicas_loading_indicator_1'),                                                                    │                                │
│ │                       │   replicas_title=Label(id='replicas_title_1'),                                                                                                       │                                │
│ │                       │   proxysql_hostgroup_summary_title=Label(id='proxysql_hostgroup_summary_title_1'),                                                                   │                                │
│ │                       │   proxysql_hostgroup_summary_datatable=DataTable(id='proxysql_hostgroup_summary_datatable_1'),                                                       │                                │
│ │                       │   proxysql_mysql_query_rules_title=Label(id='proxysql_mysql_query_rules_title_1'),                                                                   │                                │
│ │                       │   proxysql_mysql_query_rules_datatable=DataTable(id='proxysql_mysql_query_rules_datatable_1'),                                                       │                                │
│ │                       │   proxysql_command_stats_title=Label(id='proxysql_command_stats_title_1'),                                                                           │                                │
│ │                       │   proxysql_command_stats_datatable=DataTable(id='proxysql_command_stats_datatable_1'),                                                               │                                │
│ │                       │   cluster_data=Static(id='cluster_data_1')                                                                                                           │                                │
│ │                       )                                                                                                                                                      │                                │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯                                │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
KeyError: 'Slave_UUID'

[Improvement] m command and MySQL 5.6

I tried the m command on MySQL 5.6 and Dolphie crashes because the sys.memory_by_user_by_current_bytes view is absent. I don't think the memory views exist in MySQL 5.6, so, as MySQL 5.6 will soon no longer be maintained, maybe it would be worth not displaying this information for that version of MySQL.
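A hedged way to guard this (a sketch, not necessarily how Dolphie should implement it) is to probe for the view before building the memory panel, since the sys schema only ships with MySQL 5.7+:

```sql
-- Sketch: returns 0 on servers (like 5.6) that lack the sys view,
-- in which case the memory panel could simply be skipped.
SELECT COUNT(*) AS view_exists
FROM information_schema.VIEWS
WHERE TABLE_SCHEMA = 'sys'
  AND TABLE_NAME = 'memory_by_user_by_current_bytes';
```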

Dolphie Issue with MySQL Aurora on 3.04.0 with Backup_Admin Privilege Needed

On Aurora-MySQL version: 8.0.mysql_aurora.3.04.0, I get this error.

  Failed to execute query

  /* dolphie */
          SELECT
              STORAGE_ENGINES ->> '$."InnoDB"."LSN"' - STORAGE_ENGINES ->> '$."InnoDB"."LSN_checkpoint"' AS checkpoint_age
          FROM
              performance_schema.log_status


  Access denied; you need (at least one of) the BACKUP_ADMIN privilege(s) for this operation
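If the privilege can be granted on Aurora the way it is on stock MySQL 8.0 (an untested assumption; RDS/Aurora may restrict this grant), something like the following for the monitoring user should clear the error. `dolphie_user` is a placeholder, not a real account:

```sql
-- Placeholder account name; adjust host pattern to your setup.
GRANT BACKUP_ADMIN ON *.* TO 'dolphie_user'@'%';
```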

add hostname and port in the TopBar

Currently the Top Bar shows the host used to connect.

But maybe you connect via port forwarding, an SSH tunnel, or sandboxes... so it would be nice to see not only the details used to connect (host and port) but also the values of @@hostname and @@PORT.

It would be great to see:

127.0.0.1:3307 - dell:3306
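The extra values could come from a trivial query, e.g.:

```sql
-- What the Top Bar could display alongside the connect string
SELECT @@hostname, @@port;
```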

is sys.innodb_lock_waits safe to run in production on a busy system?

We noticed that the dolphie query was taking an increasingly long time. Maybe we should time it or run it on demand only?

information_schema tables are unsafe to run on a busy system, unlike performance_schema ones.

# Time: 2024-02-05T15:25:35.470941Z
# User@Host:me[db] @  [9.99.99.999]  Id: 3963342006
# Schema:   Last_errno: 0  Killed: 0
# Query_time: 232.047229  Lock_time: 0.000005  Rows_sent: 15  Rows_examined: 15  Rows_affected: 0  Bytes_sent: 4207
SET timestamp=1707146503;
/* dolphie */ 
        SELECT
            wait_age,
            locked_type,
            waiting_pid,
            waiting_trx_age,
            waiting_trx_rows_modified,
            waiting_trx_rows_locked,
            waiting_lock_mode,
            IFNULL(waiting_query, "")  AS waiting_query,
            blocking_pid,
            blocking_trx_age,
            blocking_trx_rows_modified,
            blocking_trx_rows_locked,
            blocking_lock_mode,
            IFNULL(blocking_query, "") AS blocking_query
        FROM
            sys.innodb_lock_waits;
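If statement digests are enabled in performance_schema, one way to see how expensive this query has been over time (a sketch assuming MySQL 8.0 column names; timer columns are in picoseconds, hence the division) is:

```sql
-- Cumulative and worst-case runtime of the lock-waits query so far
SELECT COUNT_STAR,
       SUM_TIMER_WAIT / 1e12 AS total_secs,
       MAX_TIMER_WAIT / 1e12 AS max_secs
FROM performance_schema.events_statements_summary_by_digest
WHERE DIGEST_TEXT LIKE '%innodb_lock_waits%';
```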

Search in error log - FR

It would be great to have the possibility to search for some terms in the error log (the table in performance_schema).
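The filter could be as simple as a LIKE over the performance_schema table (a sketch; `performance_schema.error_log` only exists in MySQL 8.0.22+):

```sql
-- Search the in-server error log for a term, newest entries first
SELECT logged, prio, subsystem, data
FROM performance_schema.error_log
WHERE data LIKE '%InnoDB%'
ORDER BY logged DESC
LIMIT 50;
```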

[BUG] Can't load replica panel

Hi !

Thanks for fixing the replica panel on the source of the RS. I have another problem with another ReplicaSet that is under a lot more load:

screen_4

Edit: It doesn't seem related; the panel doesn't load with this load either

screen_6

Replica panel never loads

screen_5

I don't see any queries coming from Dolphie on my instance. Are there any logs somewhere?

I'm connecting on a remote Percona Server 8.0.29 instance from my computer.

Do not hesitate to ask me for more information

Allow paging for the error log screen - FR

When entering the error log panel, loading can take a while if there are a lot of entries, and if we press some keys to exit it, we get a stack trace.
It would be nice to have some paging mechanism and the possibility to jump to the end immediately.
Screenshot from 2023-08-16 19-04-57

AttributeError: 'Dolphie' object has no attribute 'notify'

Heya,

Installed via pip on Ubuntu 22.04 (Python 3.10). I got past it by commenting out the dolphie.notify call, but I'm sure it'll be more obvious to you how to fix it properly. Sorry about the formatting below!

│ /home/felim/VENV/dolphie/lib/python3.10/site-packages/dolphie/Panels/processlist_panel.py:15 in create_panel │ │ │
│ 12 │ dolphie = tab.dolphie
│ 13 │ │
│ 14 │ if not dolphie.performance_schema_enabled and dolphie.use_performance_schema: │ ❱ 15 │ │ dolphie.notify("Performance Schema is not enabled on this host, using Informatio
│ 16 │ │ dolphie.use_performance_schema = False
│ 17 │ │
│ 18 │ columns = [

Improve replication dashboard

Hello,
For replication, I think this information could be valuable to have:

select * from performance_schema.global_variables where variable_name in ('binlog_transaction_dependency_tracking','replica_preserve_commit_order', 'replica_parallel_type','transaction_write_set_extraction', 'replica_parallel_workers');
+----------------------------------------+----------------+
| VARIABLE_NAME                          | VARIABLE_VALUE |
+----------------------------------------+----------------+
| binlog_transaction_dependency_tracking | WRITESET       |
| replica_parallel_type                  | LOGICAL_CLOCK  |
| replica_parallel_workers               | 16             |
| replica_preserve_commit_order          | ON             |
| transaction_write_set_extraction       | XXHASH64       |
+----------------------------------------+----------------+

And take a look at https://gist.github.com/lefred/5286cc89b4837ca92d4770a28a1aecda

You could use similar query to also add trending to replication:

select * from sys.replication_status;
+---------+----------+-----------+---------+----------------+-------------------+------------+--------------------------------------------+--------------------------------------------+
| channel | io_state | sql_state | latency | transport_time | time_to_relay_log | apply_time | last_queued_transaction                    | last_applied_transaction                   |
+---------+----------+-----------+---------+----------------+-------------------+------------+--------------------------------------------+--------------------------------------------+
|  (1)    | ON       | ON        |   0 ps  | 98.80 us       | 15.00 us          | 2.91 ms    | 00022233-1111-1111-1111-111111111111:10821 | 00022233-1111-1111-1111-111111111111:10821 |
|  (2)    | ON       | ON        |   0 ps  | 98.80 us       | 15.00 us          |   0 ps     | 00022233-1111-1111-1111-111111111111:10821 |                                            |
|  (3)    | ON       | ON        |   0 ps  | 98.80 us       | 15.00 us          |   0 ps     | 00022233-1111-1111-1111-111111111111:10821 |                                            |
|  (4)    | ON       | ON        |   0 ps  | 98.80 us       | 15.00 us          |   0 ps     | 00022233-1111-1111-1111-111111111111:10821 |                                            |
|  (5)    | ON       | ON        |   0 ps  | 98.80 us       | 15.00 us          |   0 ps     | 00022233-1111-1111-1111-111111111111:10821 |                                            |
|  (6)    | ON       | ON        |   0 ps  | 98.80 us       | 15.00 us          |   0 ps     | 00022233-1111-1111-1111-111111111111:10821 |                                            |
|  (7)    | ON       | ON        |   0 ps  | 98.80 us       | 15.00 us          |   0 ps     | 00022233-1111-1111-1111-111111111111:10821 |                                            |
|  (8)    | ON       | ON        |   0 ps  | 98.80 us       | 15.00 us          |   0 ps     | 00022233-1111-1111-1111-111111111111:10821 |                                            |
|  (9)    | ON       | ON        |   0 ps  | 98.80 us       | 15.00 us          |   0 ps     | 00022233-1111-1111-1111-111111111111:10821 |                                            |
|  (10)   | ON       | ON        |   0 ps  | 98.80 us       | 15.00 us          |   0 ps     | 00022233-1111-1111-1111-111111111111:10821 |                                            |
|  (11)   | ON       | ON        |   0 ps  | 98.80 us       | 15.00 us          |   0 ps     | 00022233-1111-1111-1111-111111111111:10821 |                                            |
|  (12)   | ON       | ON        |   0 ps  | 98.80 us       | 15.00 us          |   0 ps     | 00022233-1111-1111-1111-111111111111:10821 |                                            |
|  (13)   | ON       | ON        |   0 ps  | 98.80 us       | 15.00 us          |   0 ps     | 00022233-1111-1111-1111-111111111111:10821 |                                            |
|  (14)   | ON       | ON        |   0 ps  | 98.80 us       | 15.00 us          |   0 ps     | 00022233-1111-1111-1111-111111111111:10821 |                                            |
|  (15)   | ON       | ON        |   0 ps  | 98.80 us       | 15.00 us          |   0 ps     | 00022233-1111-1111-1111-111111111111:10821 |                                            |
|  (16)   | ON       | ON        |   0 ps  | 98.80 us       | 15.00 us          |   0 ps     | 00022233-1111-1111-1111-111111111111:10821 |                                            |
+---------+----------+-----------+---------+----------------+-------------------+------------+--------------------------------------------+--------------------------------------------+
16 rows in set (0.00 sec)

TypeError: 'type' object is not subscriptable

$ python --version
Python 3.8.18

$ dolphie
Traceback (most recent call last):
  File "/home/aadant/gitLocal/jump-splunk-ansible/.venv/bin/dolphie", line 5, in <module>
    from dolphie.app import main
  File "/home/aadant/gitLocal/jump-splunk-ansible/.venv/lib/python3.8/site-packages/dolphie/app.py", line 27, in <module>
    from dolphie.Modules.TabManager import Tab, TabManager
  File "/home/aadant/gitLocal/jump-splunk-ansible/.venv/lib/python3.8/site-packages/dolphie/Modules/TabManager.py", line 105, in <module>
    class TabManager:
  File "/home/aadant/gitLocal/jump-splunk-ansible/.venv/lib/python3.8/site-packages/dolphie/Modules/TabManager.py", line 460, in TabManager
    def get_all_tabs(self) -> list[Tab]:
TypeError: 'type' object is not subscriptable

$ pip list | grep dolphie
dolphie                   4.3.1
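The crash comes from `list[Tab]` being evaluated at class-definition time: subscripting the builtin `list` only became legal in Python 3.9 (PEP 585). A minimal sketch of the two usual workarounds, not necessarily the fix the project chose (`Tab` here is a stand-in for dolphie's class):

```python
from __future__ import annotations  # PEP 563: annotations stay as strings

from typing import List


class Tab:
    """Stand-in for dolphie's Tab class."""


class TabManager:
    # With the future import above, `list[Tab]` is never evaluated,
    # so this parses fine on Python 3.8.
    def get_all_tabs(self) -> list[Tab]:
        return []


class TabManagerPortable:
    # Without the future import, typing.List works on any supported 3.x.
    def get_all_tabs(self) -> List[Tab]:
        return []
```

Alternatively, the project could simply declare `python_requires=">=3.9"` in its packaging metadata so pip refuses to install it on 3.8 in the first place.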

Provide docker image

Hey @charles-001 👋

This looks like a (visually) cool tool, and I was wondering whether providing a Docker image might simplify setup/usage for those who do not want to install Python (and the corresponding packages) on their host system.

Let me know what you think.

Issue when monitoring mariadb

Hello Guys,
I am trying Dolphie with MariaDB 10.4 and it is unable to connect, throwing the exception below.
That's probably due to the column alias being prefixed with @@.

root@522086de7808:~# dolphie --host=rds-host-name --user=root --password=xxxxxxx --use-processlist
  Dolphie🐬                                                                                                                                                 
                                                                                                                                                            
  Error                                                                                                                                                     
 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 
  Failed to execute query                                                                                                                                   
                                                                                                                                                            
  /* dolphie */ SELECT @@server_id AS @@server_uuid                                                                                                         
                                                                                                                                                            
  You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '@@server_uuid'   
  at line 1                                                                                                                                                 

I tried running the same query without the @@ prefix on the alias, and it runs fine.

mysql> SELECT @@server_id AS @@server_uuid;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '@@server_uuid' at line 1
mysql> SELECT @@server_id AS server_uuid;
+-------------+
| server_uuid |
+-------------+
|  1573754117 |
+-------------+
1 row in set (0.22 sec)
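A hedged sketch of how the query could be made dialect-aware (`server_uuid_query` is a hypothetical helper, not Dolphie's actual code):

```python
def server_uuid_query(is_mariadb: bool) -> str:
    # MariaDB has no @@server_uuid and rejects @@-prefixed aliases,
    # so fall back to @@server_id under a plain alias there.
    if is_mariadb:
        return "SELECT @@server_id AS server_uuid"
    return "SELECT @@server_uuid AS server_uuid"
```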

3.1.4 regression test failure

👋 Trying to build the latest release, but I ran into a regression test failure on the Linux build. The error log is below:

error build log
    Minitest::Assertion: Expected /Failed\ to\ connect\ to\ database\ host/ to match "\e[?1049h\e[?1000h\e[?1003h\e[?1015h\e[?1006h\e[?25l\e[?1003h
    \e[?2026$p\e[?2004h\e[?2004l\e[?1000l\e[?1003l\e[?1015l\e[?1006l\e[?1000l\e[?1003l\e[?1015l\e[?1006l╭───────────────────── Traceback (most recent call last) ──────────────────────╮
    │ /home/linuxbrew/.linuxbrew/Cellar/dolphie/3.1.4/libexec/lib/python3.12/site- │
    │ packages/textual/drivers/linux_driver.py:243 in _run_input_thread            │
    │                                                                              │
    │   240 │   │   an exception                                                   │
    │   241 │   │   \"\"\"                                                            │
    │   242 │   │   try:                                                           │
    │ ❱ 243 │   │   │   self.run_input_thread()                                    │
    │   244 │   │   except BaseException as error:                                 │
    │   245 │   │   │   self._app.call_later(                                      │
    │   246 │   │   │   │   self._app.panic,                                       │
    │                                                                              │
    │ /home/linuxbrew/.linuxbrew/Cellar/dolphie/3.1.4/libexec/lib/python3.12/site- │
    │ packages/textual/drivers/linux_driver.py:279 in run_input_thread             │
    │                                                                              │
    │   276 │   │   │   │   for _selector_key, mask in selector_events:            │
    │   277 │   │   │   │   │   if mask & EVENT_READ:                              │
    │   278 │   │   │   │   │   │   unicode_data = decode(read(fileno, 1024))      │
    │ ❱ 279 │   │   │   │   │   │   for event in feed(unicode_data):               │
    │   280 │   │   │   │   │   │   │   self.process_event(event)                  │
    │   281 │   │   finally:                                                       │
    │   282 │   │   │   selector.close()                                           │
    │                                                                              │
    │ /home/linuxbrew/.linuxbrew/Cellar/dolphie/3.1.4/libexec/lib/python3.12/site- │
    │ packages/textual/_parser.py:81 in feed                                       │
    │                                                                              │
    │    78 │   │   │   try:                                                       │
    │    79 │   │   │   │   self._gen.send(self._buffer.getvalue())                │
    │    80 │   │   │   except StopIteration:                                      │
    │ ❱  81 │   │   │   │   raise ParseError(\"end of file reached\") from None      │
    │    82 │   │   │   while self._tokens:                                        │
    │    83 │   │   │   │   yield self._tokens.popleft()                           │
    │    84                                                                        │
    ╰──────────────────────────────────────────────────────────────────────────────╯
    ParseError: end of file reached
    ".

full build log, https://github.com/Homebrew/homebrew-core/actions/runs/6705950469/job/18226452402?pr=152912
relates to Homebrew/homebrew-core#152912

Display details of thread crashes

On large queries and/or large windows, dolphie crashes with:
AttributeError: 'NoneType' object has no attribute 'splitlines'

I have attached a traceback

And thanks for the great tool!

Query PyPI API to get the latest version - enable config bypass

Hi,

We are behind a strict company environment where direct connections outside the intranet are not allowed. Instead, we mirror repositories like Docker/Debian/PyPI on the intranet and download packages through the mirror...

Is there any chance you could modify this part to allow bypassing it through the config file? Something like a pypi_repository option that lets users specify their own PyPI repository address via the config file.

Thank you for your response and time.

url = f"https://pypi.org/pypi/{__package_name__}/json"
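A minimal sketch of the requested option: `pypi_repository` is an assumed config key and `latest_version_url` a hypothetical helper, not existing Dolphie code.

```python
__package_name__ = "dolphie"


def latest_version_url(config: dict) -> str:
    # Fall back to the public index when no mirror is configured.
    base = config.get("pypi_repository", "https://pypi.org")
    return f"{base.rstrip('/')}/pypi/{__package_name__}/json"
```

With this, `latest_version_url({"pypi_repository": "https://mirror.local"})` would point the version check at the intranet mirror while keeping the default behavior unchanged.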

Indexes panel - FR

It would be great to have a panel to see/manage duplicate indexes, unused indexes, invisible indexes, ...
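The sys schema already exposes most of this, so the panel could likely be built on views like the following (a sketch; availability depends on the sys schema and server version):

```sql
-- Indexes with no recorded use since the last restart
SELECT * FROM sys.schema_unused_indexes;

-- Redundant (duplicate) indexes and the index that subsumes them
SELECT table_schema, table_name,
       redundant_index_name, dominant_index_name
FROM sys.schema_redundant_indexes;

-- Invisible indexes (MySQL 8.0+)
SELECT DISTINCT table_schema, table_name, index_name
FROM information_schema.statistics
WHERE is_visible = 'NO';
```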

Aurora abort trace

Superb program, fulfilling such a need outside of Enterprise Monitor.
Thank you, Charles, for the effort you have put into this.

Using Dolphie from a Mac (pip-installed version), everything connects and works well with MySQL 8.0 instances. However, when connecting to Aurora 2 instances from the same Mac, Dolphie aborts. I've attached one such trace output.

dolphie_pip_trace_aurora2.txt

Unable to connect

I am having trouble connecting to a Server version: 8.0.33-25 Percona Server.

I have tried everything: using a .my.cnf file (which it reads), being prompted for the password, and connecting via socket.

Nothing works; I am getting access denied, though the password of course works.
Since it's localhost, the user is root, but still.

Thank you.
