
puppetlabs-pe_status_check

Description

puppetlabs-pe_status_check provides a way to alert the end user when Puppet Enterprise is not in an ideal state. It uses preset indicators and simplified output that directs the end user to the next steps for resolution.

Because higher quality information is available to our team, users of the tool can more easily apply their own self-service resolutions and see shorter incident resolution times with Puppet Support.

Setup

What pe_status_check affects

This module installs two structured facts named pe_status_check and agent_status_check. Each fact contains an array of key/value pairs that output an indicator ID and a Boolean value. The pe_status_check fact is confined to Puppet Enterprise infrastructure agents, and the agent_status_check fact is confined to non-infrastructure agent nodes.

Setup requirements

Install the module. Plug-in sync then delivers the required facts to each agent node in every environment where the module is installed.

Beginning with pe_status_check

This module primarily provides indicators using facts, so installing the module and allowing plug-in sync to occur lets the module start functioning. Collection of the agent_status_check fact is disabled by default so as not to affect all Puppet agents indiscriminately.

Usage

The facts in this module can be consumed directly by monitoring tools such as Splunk. Any element in the structured facts pe_status_check or agent_status_check reporting false indicates a fault state in Puppet Enterprise. When any element reports false, look up the indicator ID in the reference section for next steps.

Alternatively, assigning the class pe_status_check to the infrastructure produces a notification on each Puppet run if any indicator reports false; this can be viewed in the Puppet report for each node.

Enabling agent_status_check

By default, your normal agent population does not collect the agent_status_check fact. Collection can be enabled for all agents, or a subset of agents, by classifying your nodes with pe_status_check::agent_status_enable.
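For example, using site.pp in the same way as the classification example later in this README (the node name is a placeholder):

node 'agent.example.com' {
  include pe_status_check::agent_status_enable
}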

Disabling agent_status_check

If you previously applied the class pe_status_check::agent_status_enable to an agent node, you can disable collection of the agent_status_check fact by setting the following parameter:

pe_status_check::agent_status_enable::agent_status_enabled = false
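For example, as a minimal Hiera sketch (placed in whichever data file your hierarchy consults for the node, such as common.yaml):

pe_status_check::agent_status_enable::agent_status_enabled: false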

Reporting Options

Class declaration pe_status_check (optional)

To activate the notification functions of this module, classify your Puppet Infrastructure with the pe_status_check class using your preferred classification method. Below is an example using site.pp.

node 'node.example.com' {
  include pe_status_check
}

For maximum coverage, report on all default indicators. However, if you need to make exceptions for your environment, set the array parameter indicator_exclusions to a list of all the indicators you do not want to report on.

This workflow is not available for the agent_status_check fact.

class { 'pe_status_check':
  indicator_exclusions             => ['S0001','S0002','S0003','S0004'],
}

Ad-hoc Report (Plans)

The plans pe_status_check::infra_summary and pe_status_check::agent_summary summarize the status of each of the checks on target nodes that have the pe_status_check or agent_status_check fact, respectively. Sample output is shown below:

{
    "nodes": {
        "details": {
            "pe-psql-70aefa-0.region-a.domain.com": {
                "failed_tests_count": 0,
                "passing_tests_count": 13,
                "failed_tests_details": []
            },
            "pe-server-70aefa-0.region-a.domain.com": {
                "failed_tests_count": 1,
                "passing_tests_count": 30,
                "failed_tests_details": [
                    "S0022 Determines if there is a valid Puppet Enterprise license in place at /etc/puppetlabs/license.key on your primary which is not going to expire in the next 90 days"
                ]
            },
            "pe-compiler-70aefa-0.region-a.domain.com": {
                "failed_tests_count": 0,
                "passing_tests_count": 23,
                "failed_tests_details": []
            },
            "pe-compiler-70aefa-1.region-b.domain.com": {
                "failed_tests_count": 0,
                "passing_tests_count": 23,
                "failed_tests_details": []
            }
        },
        "failing": [
            "pe-server-70aefa-0.region-a.domain.com"
        ],
        "passing": [
            "pe-compiler-70aefa-1.region-b.domain.com",
            "pe-compiler-70aefa-0.region-a.domain.com",
            "pe-psql-70aefa-0.region-a.domain.com"
        ]
    },
    "errors": {},
    "status": "failing",
    "failing_node_count": 1,
    "passing_node_count": 3
}

The plan pe_status_check::infra_role_summary provides a hash of all PE infrastructure nodes, grouped by their role:

{
    "primary": [
        "primary.bastelfreak.local"
    ],
    "replica": [
        "replica.bastelfreak.local"
    ],
    "compiler": [
        "compiler01.bastelfreak.local",
        "compiler02.bastelfreak.local"
    ],
    "postgres": [],
    "legacy_primary": [],
    "legacy_compiler": []
}

The data is obtained from PuppetDB by checking the classes in the last catalog of every node. You can reuse the data in other plans or use it to inspect your environment. You can plot it in a more human-readable way with the puppet/format module.

Using a Puppet Query to report status

Because the pe_status_check module uses Puppet's existing fact behavior to gather status data from each of the agents, it is possible to use PQL (Puppet Query Language) to gather this information.

Consult with your local Puppet administrator to construct a query suited to your organizational needs. Below are some examples of using pe_client_tools to query the status check facts:

  1. To find the complete output of pe_status_check from all nodes listed by certname:

    puppet query 'facts[certname,value] { name = "pe_status_check" }'
  2. To find the complete output of agent_status_check from all nodes listed by certname (this could be a large query depending on the number of agent nodes; further filtering is advised):

    puppet query 'facts[certname,value] { name = "agent_status_check" }'
  3. To find those nodes with a specific status check set to false:

    puppet query 'inventory[certname] { facts.pe_status_check.S0001 = false }'

Setup Requirements

pe_status_check::infra_summary and pe_status_check::agent_summary use Hiera to look up test definitions. This requires placing a static hierarchy in your environment-level hiera.yaml:

plan_hierarchy:
  - name: "Static data"
    path: "static.yaml"
    data_hash: yaml_data

See the following documentation for further explanation.

Using Static Hiera data to populate indicator_exclusions when executing plans

Place the plan_hierarchy listed in the step above in the environment layer (https://www.puppet.com/docs/pe/latest/writing_plans_in_puppet_language_pe.html#using_hiera_with_plans).

Create a static.yaml file in the environment-layer Hiera data directory:

pe_status_check::indicator_exclusions:                                             
  - '<TEST ID>'                                                                

Indicator IDs in the array will be excluded when running pe_status_check::infra_summary and pe_status_check::agent_summary.

Running the plans

The pe_status_check::infra_summary and pe_status_check::agent_summary plans can be run from the PE console or from the command line. Below are some examples of running the plans from the command line. More information on the plan parameters can be found in REFERENCE.md.

Example call from the command line to run pe_status_check::infra_summary against all infrastructure nodes:

puppet plan run pe_status_check::infra_summary

Example call from the command line to run pe_status_check::agent_summary against all regular agent nodes:

puppet plan run pe_status_check::agent_summary

Example call from the command line to run against a set of infrastructure nodes:

puppet plan run pe_status_check::infra_summary targets=pe-server-70aefa-0.region-a.domain.com,pe-psql-70aefa-0.region-a.domain.com

Example call from the command line to exclude indicators for pe_status_check::infra_summary:

puppet plan run pe_status_check::infra_summary -p '{"indicator_exclusions": ["S0001","S0021"]}'

Example call from the command line to exclude indicators for pe_status_check::agent_summary:

puppet plan run pe_status_check::agent_summary -p '{"indicator_exclusions": ["AS001","AS002"]}'

Reference

Fact: pe_status_check_role

This fact is used to determine which individual status checks should be run on each individual infrastructure node. The fact queries which Puppet Enterprise Roles have been classified to each node and uses this to make the determination.

Role            Description
primary         The node is both a certificate authority and a postgres host
replica         The node has the primary_master_replica role
pe_compiler     The node has both the master and puppetdb roles
postgres        The node has just the database role
legacy_compiler The node has the master role but not the puppetdb role
legacy_primary  The node is a certificate authority but not a postgres host
unknown         The node type could not be determined

A failure to determine node type will result in a safe subset of checks being run that will work on all infrastructure node types.
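Following the PQL pattern shown earlier, you can list the role detected for every infrastructure node, for example:

    puppet query 'facts[certname,value] { name = "pe_status_check_role" }'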

Fact: pe_status_check

This fact is confined to run on infrastructure nodes only.

Refer below for next steps when any indicator reports false.

Each entry below gives the indicator ID, its description, self-service steps, and what to include in a Support ticket.
S0001 Determines if the puppet service is running on agents. See documentation. If the service fails to start, open a Support ticket referencing S0001, and provide syslog and any errors output when attempting to restart the service.
S0002 Determines if the pxp-agent service is running. Start the pxp-agent service with puppet resource service pxp-agent ensure=running. If the service has failed, check the logs located in /var/log/puppetlabs/pxp-agent; for help debugging and understanding what the logs mean, see Connection Type Issues, and if the service is up and running but issues still occur, see Debug Logging. If the service fails to start, open a Support ticket referencing S0002 and provide syslog, any errors output when attempting to restart the service, and /var/log/puppetlabs/pxp-agent/pxp-agent.log
S0003 Determines if infrastructure components are running in noop. Do not routinely configure noop on PE infrastructure nodes, as it prevents the management of key infrastructure settings. Disable this setting on infrastructure components. If you are unable to disable noop or encounter an error when disabling noop, open a Support ticket referencing S0003, and provide any errors output when attempting to change the setting.
S0004 Determines if the Puppet Server status endpoint is returning any errors. Execute puppet infrastructure status. For whichever service returns in a state that is not running, examine that service's logging to identify the fault. Open a Support ticket referencing S0004, and provide the output of puppet infrastructure status and any service logs associated with the errors.
S0005 Determines if certificate authority (CA) cert expires in the next 90 days. Install the puppetlabs-ca_extend module and follow steps to extend the CA cert. Open a Support ticket referencing S0005 and provide support script output from the primary server, and any errors encountered when using the ca_extend module.
S0006 Determines if Puppet metrics collector is enabled and collecting metrics. Metrics collector is a tool that lets you monitor a PE installation. If it is not enabled, enable it. If you have issues enabling metrics, open a ticket referencing S0006 and provide the output of the support script.
S0007 Determines if there is at least 20% disk free on the PostgreSQL data partition. Determine whether growth is slow and expected within the TTL of your data. If there's an unexpected increase, use this article to troubleshoot PuppetDB. If your Puppet Practitioner is unable to find a cause for the growth and the suggested KB does not help, open a Support ticket referencing S0007 and provide details about large files and folders, the rate of growth, and a full support script from the affected node.
S0008 Determines if there is at least 20% disk free on the codedir data partition. See documentation
S0009 Determines if pe-puppetserver service is running and enabled on relevant components. See documentation If you are unable to explain the service outage from the logging, or are unable to start the service, open a Support ticket referencing S0009 and provide output of the support script
S0010 Determines if pe-puppetdb service is running and enabled on relevant components. See documentation If you are unable to explain the service outage from the logging, or are unable to start the service, open a Support ticket referencing S0010 and provide output of the support script
S0011 Determines if pe-postgres service is running and enabled on relevant components. See documentation If you are unable to explain the service outage from the logging, or are unable to start the service, open a Support ticket referencing S0011 and provide output of the support script
S0012 Determines if Puppet produced a report during the last run interval. Troubleshoot Puppet run failures. Open a Support ticket referencing S0012 and provide the output of puppet agent -td > debug.log 2>&1
S0013 Determines if the catalog was successfully applied during the last Puppet run. Troubleshoot Puppet run failures. Open a Support ticket referencing S0013 and provide the output of puppet agent -td > debug.log 2>&1
S0014 Determines if anything in the command queue is older than a Puppet run interval. This can indicate that PuppetDB performance is inadequate for incoming requests. Review PuppetDB performance; use metrics to pinpoint the issue. If you are unable to determine the reason from the metrics, open a Support ticket referencing S0014 and provide the output of the support script and the findings from your analysis of the metrics
S0015 Determines if the infrastructure agent host certificate is expiring in the next 90 days. Puppet Enterprise has built-in functionality to regenerate infrastructure certificates; see the following documentation. If the documented steps fail to resolve your issue, open a Support ticket referencing S0015 and provide the error message received when running the steps.
S0016 Determines if there are any OutOfMemory errors in the puppetserver log. Increase the Java heap size for that service. Open a Support ticket referencing S0016 and provide puppet metrics, /var/log/puppetlabs/puppetserver/puppetserver.log, and the output of puppet infra tune.
S0017 Determines if there are any OutOfMemory errors in the puppetdb log. Increase the Java heap size for that service. Open a Support ticket referencing S0017 and provide puppet metrics, /var/log/puppetlabs/puppetdb/puppetdb.log, and the output of puppet infra tune.
S0018 Determines if there are any OutOfMemory errors in the orchestrator log. Increase the Java heap size for that service. Open a Support ticket referencing S0018 and provide puppet metrics, /var/log/puppetlabs/orchestration-services/orchestration-services.log, and output of puppet infra tune.
S0019 Determines if there are sufficient JRubies available to serve agents. Insufficient JRuby availability results in queued puppet agents and overall poor system performance. There can be many causes: insufficient server tuning for load, a thundering herd, or insufficient system resources for scale. If self-service fails to resolve the issue, open a ticket referencing S0019 and provide a description of actions taken so far and the output of the support script.
S0020 Determines if the Console status API reports all services as running. Determine which service caused the failure (Service Request Format), go to the logging of that service (https://www.puppet.com/docs/pe/2023.4/what_gets_installed_and_where.html#log_files_installed), and look for related error messages. Open a Support ticket referencing S0020; provide the name of the service that failed, the time of failure, and the error messages, along with a copy of the support script output from your primary.
S0021 Determines if free memory is less than 10%. Ensure your system hardware availability matches the recommended configuration; note this assumes no third-party software is using significant resources, so adapt requirements accordingly. Examine metrics from the server and determine if the memory issue is persistent. If you have issues with memory utilization in Puppet Enterprise that cannot be explained, open a Support ticket referencing S0021 and provide the output of the support script
S0022 Determines the validity of both older and newer types of Puppet Enterprise licenses. Get help with Puppet Enterprise license issues. Open a Support ticket referencing S0022 and provide the output of the following commands: ls -la /etc/puppetlabs/license.key and cat /etc/puppetlabs/license.key.
S0023 Determines if the certificate authority CRL expires in the next 90 days. The solution is to reissue a new CRL from the Puppet CA; note this will also remove any revoked certificates. To do this, follow the instructions in this module. Open a Support ticket referencing S0023 and provide support script output from the primary server, along with errors or output collected from the resolution steps
S0024 Determines if there are files in the PuppetDB discard directory newer than 1 week old. See documentation. If you are unable to determine a reason for the rejections from logging, open a Support ticket referencing S0024 and provide a copy of the PuppetDB log for the time in question, along with a sample of the most recent file in the following directory: /opt/puppetlabs/server/data/puppetdb/stockpile/discard/
S0025 Determines if the host copy of the CRL expires in the next 90 days. If the output of S0023 on the primary server is also false, use the resolution steps in S0023. If S0023 on the primary is true, follow this article. Open a Support ticket referencing S0025 and provide any errors you received in following the resolution steps
S0026 Determines if pe-puppetserver JVM heap memory is set to an inefficient value. Due to an oddity in how JVM memory is utilised, most applications are unable to consume heap memory between ~31GB and ~48GB; as such, if you have heap memory set within this range, you should reduce it to allocate server resources more efficiently. To set heap, refer to Increase the Java heap size for this service.
S0027 Determines if pe-puppetdb JVM heap memory is set to an inefficient value. Due to an oddity in how JVM memory is utilised, most applications are unable to consume heap memory between ~31GB and ~48GB; as such, if you have heap memory set within this range, you should reduce it to allocate server resources more efficiently. To set heap, refer to Increase the Java heap size for this service.
S0029 Determines if the number of current connections to the PostgreSQL DB is approaching 90% of the max_connections defined. See documentation. Should you be unable to determine the reason for a recent increase in connection use, or have issues increasing the number of connections available, open a Support ticket referencing S0029 and provide the current and intended future value for puppet_enterprise::profile::database::max_connections, and we will assist.
S0030 Determines when infrastructure components have the setting use_cached_catalog set to true. Don't configure use_cached_catalog on PE infrastructure nodes; it prevents the management of key infrastructure settings. Disable this setting on all infrastructure components. See our documentation for more information. If you encounter errors after disabling use_cached_catalog, open a Support ticket referencing S0030 and provide the errors.
S0031 Determines if old PE agent packages exist on the primary server. Remove the old PE agent packages.
S0033 Determines if Hiera 5 is in use. Upgrading to Hiera 5 offers some major advantages. If you're having issues upgrading to Hiera 5, or if your global Hiera configuration file was erroneously modified, open a Support ticket referencing S0033. Provide your global Hiera configuration file (puppet config print hiera_config); the default location is /etc/puppetlabs/puppet/hiera.yaml.
S0034 Determines if your PE deployment has not been upgraded in the last year. Upgrade your PE instance. If you have issues during a PE upgrade, open a ticket providing your current version, the version you would like to upgrade to, a statement of any problems, and any logging that is helpful.
S0035 Determines if puppet module list is returning any warnings. If S0035 returns false, i.e., warnings are present, run puppet module list --debug and resolve the issues shown. The Puppetfile does NOT include Forge module dependency resolution; you must make sure that you have every module needed for all of your specified modules to run. Refer to Managing environment content with a Puppetfile for more information on the Puppetfile, and refer to the specific module page on the Forge for further information on specific dependencies. If you are unable to remove all the warnings, refer to Get help for supported modules and raise a support request
S0036 Determines if max-queued-requests is set above 150. The maximum value for jruby_puppet_max_queued_requests is 150 If you are unable to change the value of jruby_puppet_max_queued_requests or encounter an error when changing it, open a Support ticket referencing S0036 and provide any errors output when attempting to change the setting.
S0038 Determines whether the number of environments within $codedir/environments is less than 100. Having a large number of code environments can negatively affect Puppet Server performance; see the Configuring Puppet Server documentation for more information. Examine whether you need them all; any unused environments should be removed. If all are required, you can ignore this warning.
S0039 Determines if Puppet Server has reached its queue-limit-hit-rate and is sending messages to agents. Check the max-queued-requests article for more information. If the article does not solve your issue, open a Support ticket referencing S0039, indicating the investigation so far and any issues you encountered, and provide the support script output from the primary server.
S0040 Determines if PE is collecting system metrics. If system metrics are not being collected, the sysstat package is not installed on the affected PE infrastructure component. Install the package and set the parameter puppet_enterprise::enable_system_metrics_collection to true. See the documentation. If, after system metrics are configured, you do not see any files in /var/log/sa, or the /var/log/sa directory does not exist, open a Support ticket.
S0041 Determines if the PXP broker on a compiler has an established connection to another PXP broker. See documentation. If unable to make a connection to a broker, raise a ticket with the support team quoting S0041, attaching the file /var/log/puppetlabs/puppetserver/pcp-broker.log along with the conclusions of your investigation so far
S0042 Determines if the pxp-agent has an established connection to a PXP broker. See documentation. If unable to make a connection to a broker, raise a ticket with the support team quoting S0042, attaching the file /var/log/puppetlabs/pxp-agent/pxp-agent.log (on *nix) or C:/ProgramData/PuppetLabs/pxp-agent/var/log/pxp-agent.log (on Windows), along with the conclusions of your investigation so far
S0043 Determines if there are nodes with Puppet agent versions ahead of the primary server Agent nodes should not be running Puppet agent versions ahead of infrastructure nodes. Instead consider upgrading PE so that PE package management contains the desired Puppet agent version. See the upgrading PE and upgrading agents documentation for more information. If you are unable to determine why the indicator is evaluating to false or have questions about Puppet agent versions, open a support ticket and reference S0043.
S0044 Determines if Puppet Servers are using the PE classifier for the node data plugin (node terminus). Due to performance optimizations, it is recommended to use the PE classifier plugin instead of external node classifier (ENC) scripts or applications. See the node_terminus configuration setting documentation for more information. If you have additional questions about the node_terminus configuration setting, open a support ticket and reference S0044.
S0045 Determines if Puppet Servers are configured with an excessive number of JRubies. Because each JRuby instance consumes additional memory, having too many can reduce the amount of heap space available to Puppet Server and cause excessive garbage collection. While it is possible to increase the heap along with the number of JRubies, we have observed diminishing returns with more than 12 JRubies and therefore recommend an upper limit of 12. We also recommend allocating between 1 and 2 GB of heap memory for each JRuby. If you would like to measure the effects of changing JRubies and heap settings, use the Puppet Operational Dashboards module to configure a metrics stack and Grafana dashboards for viewing the metrics. If you still have performance issues or further questions, open a support ticket and reference S0045.
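To inspect a single indicator locally on an infrastructure node, query the structured fact directly; the indicator ID here is just an example:

    facter -p pe_status_check.S0001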

Fact: agent_status_check

This fact is confined to run on only agent nodes that are NOT infrastructure nodes.

Refer below for next steps when any indicator reports false.

Each entry below gives the indicator ID, its description, self-service steps, and what to include in a Support ticket.
AS001 Determines if the agent host certificate is expiring in the next 90 days. Puppet Enterprise has a built-in plan to extend agent certificates. Use a Puppet query to find expiring host certificates and pass the node IDs to this plan: puppet plan run enterprise_tasks::agent_cert_regen agent=$(puppet query 'inventory[certname] { facts.agent_status_check.AS001 = false }' | jq -r '.[].certname' | paste -sd, -) master=$(puppet config print certname) If the plan fails to run, open a support ticket referencing AS001 and provide the error message received when running the plan.
AS002 Determines if the pxp-agent has an established connection to a PXP broker. Ensure the pxp-agent service is running; if it is running, check /var/log/puppetlabs/pxp-agent/pxp-agent.log (on *nix) or C:/ProgramData/PuppetLabs/pxp-agent/var/log/pxp-agent.log (on Windows) for connection issues, first ensuring the agent is connecting to the proper endpoint, for example, a compiler and not the primary. This fact can also be used as a target filter for running tasks, ensuring time is not wasted sending instructions to agents not connected to a broker. If unable to make a connection to a broker, raise a ticket with the support team quoting AS002 and attaching the file /var/log/puppetlabs/pxp-agent/pxp-agent.log (on *nix) or C:/ProgramData/PuppetLabs/pxp-agent/var/log/pxp-agent.log (on Windows), along with the conclusions of your investigation so far
AS003 Determines if the certname configuration parameter is incorrectly set outside of the [main] section of the puppet.conf file. The Puppet documentation clearly states that certname should always be placed solely in the [main] section to prevent unforeseen issues with the operation of the puppet agent: https://puppet.com/docs/puppet/7/configuration.html#certname If unable to determine why the indicator is being raised, open a ticket with the support team quoting AS003 and attaching the puppet.conf file along with the conclusions of your investigation so far.
AS004 Determines if the host copy of the CRL expires in the next 90 days. If the output of S0023 on the primary server is also false, use the resolution steps in S0023. If S0023 on the primary is true, follow this article. Open a Support ticket referencing AS004 and provide any errors you received in following the resolution steps

How to report an issue or contribute to the module

If you are a PE user and need support using this module or encounter issues, our Support team is happy to help you. Open a ticket at the Support Portal. If you have a reproducible bug or are a community user, you can open an issue directly in the GitHub issues page of the module. We also welcome PR contributions to improve the module. Please see further details about contributing.


Supporting Content

Articles

The Support Knowledge base is a searchable repository for technical information and how-to guides for all Puppet products.

This module has the following specific articles available:

  1. Find and fix common issues in Puppet Enterprise using the puppetlabs-pe_status_check module

Videos

The Support Video Playlist is a resource of content generated by the Support team.

This module has the following specific video content available:

  1. Preventative Maintenance With PE Status Check



puppetlabs-pe_status_check's Issues

Refactor S0035 to use Ruby rather than shelling out

See Adrian's comment here #43 (comment):

I wonder if it would be better to use the puppet module face for this rather than shelling out. These are callable from Ruby code like this:

require 'puppet'
require 'puppet/face'

face = Puppet::Face[:module, :current]
modules = face.list(environment: 'production')

# Each key is the modulepath, and its value is an array of the modules
has_warnings = modules[:modules_by_path].any? do |_path, mods|
  mods.any? { |mod| !mod.unmet_dependencies.empty? }
end

I tested this in a couple of scenarios and it seems to work. It's also more accurate, since it's possible for a user to have a module with the word "Warning" in it, which I know is unlikely, but still possible.

Check for console-cert expire date

Use Case

PE manages a certificate for the console. This can be the same as the agent cert, but it doesn't have to be. I think it would be really helpful to check if the cert in use expires in < 90 days.

Describe the Solution You Would Like

As done in S0015, we could read the cert:

chunk(:S0015) do
  # Is the hostcert expiring within 90 days?
  next unless File.exist?(Puppet.settings['hostcert'])
  raw_hostcert = File.read(Puppet.settings['hostcert'])
  certificate = OpenSSL::X509::Certificate.new raw_hostcert
  result = certificate.not_after - Time.now
  { S0015: result > 7_776_000 }
end

The default path is /opt/puppetlabs/server/data/console-services/certs/console-cert.cert.pem or /opt/puppetlabs/server/data/console-services/certs/${certname}.cert.pem. But the path is configurable in puppet_enterprise::profile::console::browser_ssl_cert. I don't think we can easily access the parameter from a fact. We could read it from /etc/puppetlabs/console-services/conf.d/console.conf.

A different approach would be to make an HTTP request to the console and get the cert.
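A minimal sketch of that HTTP/TLS approach, reusing the 90-day threshold (7,776,000 seconds) from S0015; the console hostname and port here are placeholders, not values read from PE configuration:

require 'socket'
require 'openssl'

host = 'console.example.com' # placeholder console endpoint
tcp  = TCPSocket.new(host, 443)
ssl  = OpenSSL::SSL::SSLSocket.new(tcp)
ssl.hostname = host # SNI, so we get the certificate the console actually serves
ssl.connect
cert = ssl.peer_cert
ssl.close
tcp.close

# Same 90-day window as S0015
expiring_soon = (cert.not_after - Time.now) <= 7_776_000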

Describe Alternatives You've Considered

A customer could use their internal monitoring tool for this, but since pe_status_check already validates certificates and people might get the impression that it covers every important part, I think it makes sense to integrate this into pe_status_check.

S0026 is too slow and times out on large PE environments

Describe the Bug

The query used to retrieve the status for S0026 is too slow in PE environments with a large number of compilers and/or environments, which are queried as part of the full-scope check (to retrieve the file sync status information, I believe): https://github.com/puppetlabs/puppetlabs-pe_status_check/blob/main/lib/facter/pe_status_check.rb#L300

This query takes a little under 5 seconds in my environment, which significantly exceeds the hard-coded 2-second request timeout in the fact.

Expected Behavior

S0026 should be able to retrieve the JVM value without timing out.

Using a narrower scope for the service check looks to be possible and greatly decreases the query time (from just under 5 seconds to around 0.2 seconds in my environments): /status/v1/services/status-service?level=debug
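A sketch of what the narrower query could look like from Ruby, keeping the fact's 2-second budget; the endpoint path is the one above, while the host, port, and relaxed TLS verification are assumptions for illustration:

require 'net/http'
require 'json'
require 'openssl'

uri  = URI('https://localhost:8140/status/v1/services/status-service?level=debug')
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl      = true
http.verify_mode  = OpenSSL::SSL::VERIFY_NONE # illustration only
http.open_timeout = 2 # match the fact's hard-coded timeout
http.read_timeout = 2

status = JSON.parse(http.get(uri.request_uri).body)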

Environment

  • PE 2021.7.1
  • pe_status_check v2.4.1

Additional Context

The timeout-exceeded condition was seen in a "large" PE deployment with 21 environments, 11 compilers, and a DR node.

Check for issues with yum provider

Use Case

It was decided in https://tickets.puppetlabs.com/browse/PUP-5594 that yum provider messages should only be a warning. This can then go undetected until Puppet attempts to manage packages, which causes a failed Puppet run. Warnings are not visible on PE dashboards unless viewing the reports directly.

Describe the Solution You Would Like

A fact should either check that yum check-update comes back clean, or check the logs for "Warning: Puppet::Type::package::ProviderYum: Could not check for updates, '/bin/yum check-update' exited with 1" messages.
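A minimal sketch of the first option, relying on yum's documented exit codes (0 = no updates, 100 = updates available, 1 = error, which is the failure mode described above):

require 'open3'

_out, _err, status = Open3.capture3('/bin/yum', 'check-update')
# Exit 0 (clean) and 100 (updates pending) both mean the provider can work.
yum_provider_healthy = [0, 100].include?(status.exitstatus)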

Describe Alternatives You've Considered

This issue could be worked around by users setting repos correctly to ignore broken sources, but this is rarely done and is outside of our control.

Additional Context

Customers have missed configurations for months and then been impacted during key events such as upgrades.

Add code_deployment_healthy status check

Use Case

Sometimes my code seems to silently fail to deploy, especially in my development environment where I'm pretty ruthless with check-ins and squash-updates.

Describe the Solution You Would Like

I'd like a fact that checks the status of the last attempted code deploy: was it successful for all environments?

Describe Alternatives You've Considered

I'm not aware of any alternatives. However, maybe this fact could be rolled into something more generic like the "health" of code deployment. Not sure how reasonable it would be, but perhaps some of the checks described here https://puppet.com/docs/pe/2019.8/code_mgr_troubleshoot.html#troubleshooting_code_manager-monitor-webhook-deployment-trigger-issues could be rolled into this check...

Additional Context

Would be nice to see a warning when I do a puppet run on my primary, as I know pe_status_check does already, if the code deployment is not healthy. For example, I have automated code deploys, so I didn't notice the following error until deploying manually. My puppet runs for the 'development' environment all seemed to work fine apart from the fact that I didn't see my new code check-ins. The issue was pretty simple: I had a typo in one of my modules...

[screenshot of the code deploy error]

agent_status_check AS002 too slow with many processes

Describe the Bug

On Linux-based systems where the ss -onp state established '( dport = :8142 )' command is run, the command will not finish before the timeout if the system is moderately busy and there are many thousands of processes in the process table. This is because every process has to be examined to determine which sockets are associated with it.

On a system with moderate load, thousands of users, and approximately 10,000 processes, the -p option on the ss command consistently made it take longer than 2 seconds.

Expected Behavior

A large and busy system should not prevent this check from working properly. Furthermore, this check should probably attempt to be a bit more efficient so as to not run so long on busy systems.

S0007 increase required free disk space

Use Case

PE 2021.6 ships the pe_databases module by default, I think. PE recommends using it for older versions as well. It configures 3 timers that run pg_repack on pe-puppetdb. pg_repack requires free disk space that's at least as big as (biggest table + index) * 2.

from https://reorg.github.io/pg_repack/:

Performing a full-table repack requires free disk space about twice as large as the target table(s) and its indexes. For example, if the total size of the tables and indexes to be reorganized is 1GB, an additional 2GB of disk space is required.

Describe the Solution You Would Like

It would be helpful if the fact could identify the biggest table, multiply its size by two, and use that as the lower limit.
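A sketch of that idea, shelling out to the PE-bundled psql as the pe-postgres user; the paths and query are assumptions for illustration, not the module's current behavior:

require 'open3'

# Size in bytes of the largest pe-puppetdb relation, indexes included.
sql = 'SELECT pg_total_relation_size(relid) FROM pg_catalog.pg_statio_user_tables ORDER BY 1 DESC LIMIT 1;'
cmd = %(su - pe-postgres -s /bin/bash -c "/opt/puppetlabs/server/bin/psql -d pe-puppetdb -tAc \\"#{sql}\\"")
out, _err, status = Open3.capture3(cmd)

biggest_relation_bytes = status.success? ? out.strip.to_i : 0
required_free_bytes    = biggest_relation_bytes * 2 # pg_repack's documented headroom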

Describe Alternatives You've Considered

If another implementation is too tricky, maybe the message for S0007 should be adjusted, like:

more than 20% free, but check your table size. maybe you need more free disk space.

Additional Context

Indeed, pg_repack doesn't work reliably in my environment when I have less than the recommended free storage.

logdir is not correct dir for 16/17/18 and 39

Describe the Bug

All PE log-parsing indicator tests are incorrectly keying off "puppet config print logdir".

This resolves to /var/log/puppetlabs/puppet and not /var/log/puppetlabs/, so none of the indicators are actually checking the correct log files.
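A sketch of the corresponding fix, taking the parent of logdir (assuming Puppet settings are already initialized, as they are in fact code):

require 'puppet'

logdir = Puppet.settings['logdir'].to_s # => /var/log/puppetlabs/puppet
base   = File.dirname(logdir)           # => /var/log/puppetlabs
puppetserver_log = File.join(base, 'puppetserver', 'puppetserver.log')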

Add a status check for the size of database tables.

Use Case

Have a check that alerts if a table is above a certain threshold, for example the certnames table. It would be good for this threshold to have a sane default value but also be configurable.

Describe the Solution You Would Like

Alert me if key tables are larger than a set threshold and allow me to configure the threshold where necessary.

plans fail when run from PE console

PE 2019.8.11

The plan uses environment-level hiera and fails on

      - "nodes/%{trusted.certname}.yaml"

pe_status_check::infra_summary:

{
  "msg" : "Interpolations are not supported in lookups outside of an apply block: Undefined variable 'trusted' (file: /opt/puppetlabs/server/data/orchestration-services/code/environments/production/hiera.yaml)",
  "kind" : "bolt/pal-error",
  "details" : { }
}

Bug in pe_status_check S0036

Describe the Bug

We have this in our code:

grep -r max_queued_requests data/
data/roles/pe_xl/compiler.yaml:puppet_enterprise::master::puppetserver::jruby_puppet_max_queued_requests: 150
data/roles/pe_xl/master.yaml:puppet_enterprise::master::puppetserver::jruby_puppet_max_queued_requests: 32

Still, we’re getting “flagged” for S0036 (puppetlabs-pe_status_check v2.5.0):

Notice: S0036 is at fault. The indicator S0036 Determines if max-queued-requests is set above 150, refer to documentation for required action

Notice: /Stage[main]/Pe_status_check/Notify[pe_status_check S0036]/message: defined 'message' as 'S0036 is at fault. The indicator S0036 Determines if max-queued-requests is set above 150, refer to documentation for required action'
...
$ facter -p pe_status_check.S0036
false
...
$ grep max-queued-requests /etc/puppetlabs/puppetserver/conf.d/pe-puppet-server.conf
    max-queued-requests: 150

Could that be because https://github.com/puppetlabs/puppetlabs-pe_status_check/blob/main/lib/facter/pe_status_check.rb#L428 only checks for less than, not less than or equal?
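If so, a one-line sketch of the fix would let a value of exactly 150 pass; the variable stands in for the value the fact parses from pe-puppet-server.conf:

max_queued_requests = 150 # stand-in for the parsed setting
{ S0036: max_queued_requests <= 150 } # a strict `<` flags exactly 150 as at fault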

Thanks

Expected Behavior

S0036 should not be flagged when max-queued-requests is set to exactly 150.

Steps to Reproduce

See above

Environment

  • Version 2019.8.12
  • Platform RHEL7

Additional Context

Cannot compute pe_status_check hash when log files do not exist or are unreadable

Describe the Bug

When, for example, puppetserver-access.log does not exist or is unreadable, an uncaught exception is raised when computing the custom fact.

logfile = File.dirname(Puppet.settings['logdir'].to_s) + '/puppetserver/puppetserver-access.log'
apache_regex = %r{^(\S+) \S+ (\S+) (?<time>\[([^\]]+)\]) "([A-Z]+) ([^ "]+)? HTTP/[0-9.]+" (?<status>[0-9]{3})}
has_503 = File.foreach(logfile).any? do |line|
  match = line.match(apache_regex)
  next unless match && match[:time] && match[:status]
  time = Time.strptime(match[:time], '[%d/%b/%Y:%H:%M:%S %Z]')
  since_lastrun = Time.now - time
  current = since_lastrun.to_i <= Puppet.settings['runinterval']
  match[:status] == '503' and current
rescue StandardError => e
  Facter.warn("Error in fact 'pe_status_check.S0039' when querying puppetserver access logs: #{e.message}")
  Facter.debug(e.backtrace)
  break
end

Expected Behavior

Guard against files not being on disk or being unreadable.
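A sketch of such a guard, reusing the logfile path from the snippet above; the substring match is a simplification of the real regex:

logfile = File.dirname(Puppet.settings['logdir'].to_s) + '/puppetserver/puppetserver-access.log'

has_503 =
  if File.exist?(logfile) && File.readable?(logfile)
    # Simplified: the real fact also parses the timestamp and status fields.
    File.foreach(logfile).any? { |line| line.include?(' 503 ') }
  else
    false # treat a missing or unreadable log as "no 503s observed"
  end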

Steps to Reproduce

Delete puppetserver-access.log and try to compute the custom fact

Additional Context

The issue we are seeing in CI (when an unrelated issue causes puppetserver-access.log not to be written) comes from the same code shown above.
The rescue does not catch errors raised when opening or reading the file. Other places in this function use at least File.exist? to ensure files exist before trying to open them. Not sure how defensive we need to be about also ensuring they are readable, but guarding against missing files for this particular chunk would be a good improvement.

lookup indicator_exclusions from hiera

Use Case

It would be convenient to be able to store indicator_exclusions in plan hiera. Instead of having to list exclusions on the command line:

bolt plan run pe_status_check::infra_summary indicator_exclusions='["S0035"]'

the plan could look up pe_status_check::indicator_exclusions from hiera, similar to the class behavior.

S0036 Acceptance tests are unreliable

Describe the Bug

The acceptance tests for S0036 are not reliable and cause intermittent failures. It also does not clean up the setting that was configured.

Example failure: https://github.com/puppetlabs/puppetlabs-self_service/runs/4968232026?check_suite_focus=true#step:13:25

Recommendation

Change the acceptance test from using hiera and the puppet agent to using a puppet apply that manages the setting.

https://github.com/puppetlabs/puppetlabs-self_service/blob/main/spec/acceptance/self_service_spec.rb#L153-L160

Use present to create the failure and absent to remove the failure state.

  pe_hocon_setting { 'jruby-puppet.max-queued-requests':
    ensure  => present,
    path    => '/etc/puppetlabs/puppetserver/conf.d/pe-puppet-server.conf',
    setting => 'jruby-puppet.max-queued-requests',
    value   => 151,
  }
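And the counterpart that removes the failure state, mirroring the resource above:

  pe_hocon_setting { 'jruby-puppet.max-queued-requests':
    ensure  => absent,
    path    => '/etc/puppetlabs/puppetserver/conf.d/pe-puppet-server.conf',
    setting => 'jruby-puppet.max-queued-requests',
  }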

Check on revoked users in PE

Use Case

Users can get revoked accidentally, and it's not noticed until it causes impact due to access failures.

Describe the Solution You Would Like

A status check via task and fact which listed out revoked users would be useful to allow monitoring.

Describe Alternatives You've Considered

I can't think of another way of doing this right now, maybe metrics collector should have this?

Additional Context

Check for revoked users using https://puppet.com/docs/pe/2019.8/rbac_api_v1_user_get_users.html with a filter.
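For illustration, a hedged Ruby sketch of such a filter against the RBAC v1 users endpoint; the primary hostname, port 4433, token location, and CA path are assumptions about a typical install:

require 'net/http'
require 'json'
require 'openssl'

token = File.read(File.expand_path('~/.puppetlabs/token')).strip # from `puppet access login`
uri   = URI('https://puppet.example.com:4433/rbac-api/v1/users')

http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl     = true
http.ca_file     = '/etc/puppetlabs/puppet/ssl/certs/ca.pem'
http.verify_mode = OpenSSL::SSL::VERIFY_PEER

request = Net::HTTP::Get.new(uri)
request['X-Authentication'] = token

users   = JSON.parse(http.request(request).body)
revoked = users.select { |u| u['is_revoked'] }.map { |u| u['login'] }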

Update S0022 to cope with the new license file

Describe the Bug

We now have the option to use license.key or suite-license.lic as the license file for PE. S0022 looks just for license.key; if it doesn't find it, it triggers an alert.

Expected Behavior

S0022 should be updated to include checking for suite-license.lic and, maybe, triggering an alert if both license files are present.
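A minimal sketch of the updated check; the suite-license.lic path alongside license.key is an assumption, since the issue doesn't state where it lives:

key_file = '/etc/puppetlabs/license.key'
lic_file = '/etc/puppetlabs/suite-license.lic' # assumed location
present  = [key_file, lic_file].select { |f| File.exist?(f) }

has_license  = !present.empty?
both_present = present.length == 2 # possibly worth its own alert, per the issue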

Steps to Reproduce

Assuming you're not managing the license key as a resource:

  • Remove license.key
  • Install a correctly formatted suite-license.lic
  • Restart pe-puppetserver
  • Run puppet agent, check report.

Environment

  • PE 2023.8.0

Additional Context

I'm going to try and take a look at fixing it myself, but no promises!
