
pvdegradationtools's Introduction

License
Publications (DOI)
Documentation Status
Build Status (GitHub Actions)

PV Degradation Tools (pvdeg)

This repository contains functions for calculating the degradation of photovoltaic modules, for example, functions to calculate front- and rear-side relative humidity, as well as acceleration factors. A degradation calculation function is also being developed that considers humidity and spectral irradiance models.

Tutorials

Jupyter Book

For in-depth tutorials you can run online, see our Jupyter Book.

Clicking the rocket icon at the top lets you launch the notebooks on Google Colaboratory in interactive mode. If you follow this route, just uncomment the first line, pip install ..., to install the environment in each notebook.

Binder

To run these tutorials in Binder, click here: Binder. It takes a minute to load the environment.

Locally

You can also run the tutorials locally in a virtual environment, e.g., venv or miniconda.

  1. Create and activate a new environment, e.g., on Mac/Linux terminal with venv:

    python -m venv pvdeg
    . pvdeg/bin/activate
    

    or with conda:

    conda create -n pvdeg
    conda activate pvdeg
    
  2. Install pvdeg into the new environment with pip:

    python -m pip install pvdeg
    
  3. Start a Jupyter session:

    jupyter notebook
    
  4. Use the file explorer in Jupyter to browse to the tutorials folder and start the first tutorial.

Documentation

Documentation is available in ReadTheDocs where you can find more details on the API functions.

Installation

Relative Humidity and Acceleration Factors for Solar Modules releases may be installed using the pip and conda tools. The package is compatible with Python 3.5 and above.

Install with:

pip install pvdeg

For developer installation, clone the repository, navigate to the folder location and install as:

pip install -e .[all]

License

BSD 3-clause

Contributing

We welcome contributions to this software, but please read the copyright license agreement (cla-1.0.md); instructions for signing it are in sign-CLA.md. For questions, email us.

Getting support

If you suspect that you may have discovered a bug or if you'd like to change something about pvdeg, then please make an issue on our GitHub issues page.

Citing

If you use these functions in published work, please cite:

Holsapple, Derek, Ayala Pelaez, Silvana, Kempe, Michael. "PV Degradation Tools", NREL GitHub 2020, Software Record SWR-20-71.

And/or the specific release from Zenodo:

Martin Springer, Matthew Brown, Silvana Ovaitt, Tobin Ford, Joseph Karas, Mark Campanelli, Derek M Holsapple, Kevin Anderson, Michael Kempe. (2024). NREL/PVDegradationTools: 0.3.2 (0.3.2). Zenodo. https://doi.org/10.5281/zenodo.11123249

pvdegradationtools's People

Contributors

github-actions[bot], holsappled, jfkaras, kandersolar, markcampanelli, martin-springer, mcbrown042, mdkempe, shirubana, tobin-ford


pvdegradationtools's Issues

Error checking for weather.get

Weather.get is one of the most important methods, so it needs extensive error checking. Specifically, if you use PSM3 it should first check whether the gid or the lat/lon is valid and within the testable area. If not, an error message should come up with a link to where to determine the applicable area or how to properly obtain the gid. Similar checks should be in place for the other methods using the NSRDB or PVGIS.
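A minimal sketch of the kind of range check described above. The coverage bounds and the helper name are placeholders of this sketch, not the official PSM3 extents; real values would come from the NSRDB documentation:

```python
# Sketch of input validation for a weather-retrieval call.
# The coverage bounds below are illustrative placeholders, NOT the
# official PSM3 extents.
PSM3_LAT_RANGE = (-21.0, 60.0)    # placeholder
PSM3_LON_RANGE = (-180.0, -20.0)  # placeholder
NSRDB_HELP_URL = "https://nsrdb.nrel.gov/"

def check_psm3_location(lat, lon):
    """Raise a helpful error if (lat, lon) is outside PSM3 coverage."""
    lat_ok = PSM3_LAT_RANGE[0] <= lat <= PSM3_LAT_RANGE[1]
    lon_ok = PSM3_LON_RANGE[0] <= lon <= PSM3_LON_RANGE[1]
    if not (lat_ok and lon_ok):
        raise ValueError(
            f"({lat}, {lon}) is outside the PSM3 coverage area; "
            f"see {NSRDB_HELP_URL} for the applicable region and "
            "how to look up a valid gid."
        )
```

The same pattern would apply to the PVGIS path, with its own bounds.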

Geospatial data down selection function

We need to create a geospatial down-selection function that will be comprehensive.

The input would be a list of GID, lat/lon, and altitude. The output would be the shortened list with just the GID or other identifier, as appropriate/selected.

We want to be able to select the number of points to include in the final list.

To get more useful data, we would want to account for topology, giving preference to data next to or in mountains. This could be accomplished by a nearest-neighbor search where a weight is calculated based on the altitude difference between the nearest neighbors; then all the points are randomly included with this weighted probability. This method does rely on there being a statistically large enough number of data points.
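The altitude-weighted sampling described above could be sketched roughly as follows (a sketch only, assuming numpy; the function name, the brute-force neighbor search, and the small floor added so flat regions remain sampleable are all choices of this sketch):

```python
import numpy as np

def weighted_downselect(latlon, altitude, n_keep, k=5, rng=None):
    """Down-select points, favoring those whose altitude differs most
    from their k nearest neighbors (i.e. mountainous terrain)."""
    rng = np.random.default_rng(rng)
    latlon = np.asarray(latlon, dtype=float)
    altitude = np.asarray(altitude, dtype=float)
    # brute-force nearest neighbors -- fine for sketch-sized inputs
    d = np.linalg.norm(latlon[:, None, :] - latlon[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    idx = np.argsort(d, axis=1)[:, :k]
    # mean |altitude difference| to the k nearest neighbors
    diffs = np.abs(altitude[idx] - altitude[:, None]).mean(axis=1)
    # small floor so perfectly flat regions still have nonzero weight
    weights = diffs + 0.1 * diffs.mean() + 1e-9
    p = weights / weights.sum()
    return rng.choice(len(latlon), size=n_keep, replace=False, p=p)
```

For NSRDB-scale point counts a KD-tree (e.g. scipy.spatial.cKDTree) would replace the brute-force distance matrix.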

We would also want to determine the perimeter locations or locations near an ocean or large lake and try to make sure there is a good outline. This could be done through a point search that looks for a direction where there are no data points in a ~150 degree cone for a specified number of miles. The number of miles would be determined by looking at the typical spacing (e.g. 4 km) determined by a few random tests of nearest neighbors, and then just multiplying that distance by say a factor of 10. Then you would, for example, look for points where there is a direction with nothing for 40 km. Then you put all the edge points into a sublist and down select with half the rate of exclusion.

These calculations may take some time, but would create nice lists to make the subsequent calculations much better.

Improve the descriptions for the different methods

For the method descriptions we should make everything look very professional. This includes adding units for clarity and completeness. You can also get characters like the degree sign or superscripts included by pulling them up in Word or Excel and pasting them into your document, e.g. [°C] or m². This looks more professional than ^2 or just [C].

High level reorganization

We need to differentiate between Jupyter Notebook tutorials and calculation tools. I'm creating a tool to do a complete standoff analysis which is much more than a tutorial. Tutorials should be just how to use some common and important functions. So I think we need a new folder called something like "Calculation_Tools" or just "Tools".

Chamber exposure stressor creation function needed

We need to create a function that will allow you to create a stress series of data, which can subsequently be used either to calculate degradation in the chamber or to calculate an acceleration factor using another tool.

The entry would be a list of **kwargs with standardized cycles such as TC, HF, DH, ... and some preprogrammed sequences such as those found in IEC 61730 or IEC 61215. Additionally, one could specify an arbitrary sequence with calling parameters such as T, RH, Irr, Irr-spectrum, light source, chamber equilibration time, sample equilibration time, ...

The output would be a time series of data with an arbitrary time step, including T, RH, Irr, and V data, plus metadata for the spectrum and other relevant parameters.

Once the function is created, we would want a tool that simplifies the creation of a *.csv file that can be read as if it were a weather file with weather.read().
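A sketch of what generating one such stress series might look like, assuming pandas. The cycle parameters, column names, and the triangular temperature profile are illustrative stand-ins, not the IEC 61215 values:

```python
# Sketch: build a thermal-cycling (TC-style) chamber stress series as a
# time-indexed DataFrame. Parameters and column names are illustrative.
import numpy as np
import pandas as pd

def tc_stress_series(n_cycles=3, t_min=-40, t_max=85,
                     period_min=120, step_min=5):
    """Triangular temperature cycles between t_min and t_max [degC]."""
    steps_per_cycle = period_min // step_min
    t = np.arange(n_cycles * steps_per_cycle) * step_min  # minutes
    phase = (t % period_min) / period_min
    tri = 1 - np.abs(2 * phase - 1)          # 0 -> 1 -> 0 each cycle
    temp = t_min + (t_max - t_min) * tri
    index = pd.Timestamp("2024-01-01") + pd.to_timedelta(t, unit="min")
    return pd.DataFrame(
        {"temp_air": temp, "relative_humidity": 0.0, "poa_global": 0.0},
        index=index,
    )

# df = tc_stress_series()
# df.to_csv("tc_profile.csv")   # later readable like a weather file
```

The **kwargs entry point described above would dispatch to builders like this one per cycle type (TC, HF, DH, ...) and concatenate the results.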

Improvements geospatial

  • rename NREL_HPC to dataset_path and have predefined paths for AWS and NREL
  • include a function to select a satellite based on lat, lon
  • pvlib uses 'TMY' as the default for names. We could do the same, and if no TMY file exists we could fall back to the latest available non-leap year.
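The satellite-selection item could look something like the sketch below. The bounding boxes are illustrative placeholders only; the real extents would come from the NSRDB dataset documentation:

```python
# Hypothetical sketch: pick a satellite dataset from lat/lon.
# The bounding boxes below are placeholders, NOT official extents.
def select_satellite(lat, lon):
    if -25 <= lat <= 60 and -175 <= lon <= -20:
        return "GOES"
    if -60 <= lat <= 60 and -20 <= lon <= 60:
        return "METEOSAT"
    if -60 <= lat <= 60 and 60 <= lon <= 180:
        return "HIMAWARI"
    raise ValueError(f"No satellite coverage for ({lat}, {lon})")
```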

Create a full LETID tool

We want to create a tool for LETID that will:

  • provide some sort of interface to the degradation database
  • allow calculations for a single site at a specified location and number of years
  • produce a performance plot vs. time
  • calculate total power losses vs. time
  • do a present value of losses calculation (and any other similar useful calculations)
  • if we get ambitious, do a Monte Carlo simulation (that would be worthy of a paper publication)
  • lastly, run regional calculations similar to what Martin did.

Provide access to data from older TMY-3, WGIS, and PVGIS data

There are a thousand or so sites from the older data sets that are based on ground-based sources in Alaska and elsewhere. I think we should tackle single-site access through the NSRDB first. When you call the .get method, there could be a boolean pass-through that allows use of this dataset, with a default of true. Then, if the selected lat/lon is not in the dataset, it would look at the ground-based sites that were identified as being outside all of the satellites. The big issue is how to store it on AWS. Maybe the NSRDB people could help.

Implement JIT for faster computation.

We need to go through all the code and apply JIT compilation wherever there are for loops or intensive calculations over the whole series of weather data (e.g. calculating the total degradation using the Arrhenius equation).

For LETID, this probably requires some combination of subroutines to make it work, but it could help it run 10x faster.
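As a sketch of the pattern, here is a numba-jitted Arrhenius weighting loop over an hourly temperature series. The activation energy and reference temperature are illustrative values, and the fallback decorator is a convenience of this sketch for environments without numba:

```python
import numpy as np

try:
    from numba import jit
except ImportError:  # fall back to plain Python if numba is not installed
    def jit(*args, **kwargs):
        return lambda func: func

@jit(nopython=True)
def arrhenius_sum(temp_c, ea=0.6, t_ref=60.0):
    """Sum of Arrhenius acceleration weights over a temperature series.

    temp_c : float64 array of cell temperatures [degC]
    ea     : activation energy [eV] (illustrative value)
    t_ref  : reference temperature [degC] (illustrative value)
    """
    k_b = 8.617e-5  # Boltzmann constant [eV/K]
    total = 0.0
    for i in range(temp_c.shape[0]):
        total += np.exp((ea / k_b)
                        * (1.0 / (t_ref + 273.15)
                           - 1.0 / (temp_c[i] + 273.15)))
    return total
```

With nopython=True the loop compiles to machine code, which is where the ~10x speedups on long weather series would come from.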

DataLibrary not found

Describe the bug
import pvdeg causes an error message on some installations

To Reproduce
Not sure which systems/versions are affected.

Expected behavior
DataLibrary should be found.


Desktop (please complete the following information):

  • OS: unknown
  • pvdeg version: 0.2.x

Additional context
reported by Yili

Interactive mapping feature

We want to have a world map with country outlines that you can zoom into. It will have a slider where you can adjust the Xeff value which will then change the map to indicate whether level 0, 1 or 2 module certification is needed. How difficult is this?

Update JupyterBook

New tutorials including

  • Van't Hoff
  • Monte Carlo
  • Other demos

These are live on main but are not present on the Jupyter Book.

pyproject.toml for builds, using setuptools_scm for versioning

Hello PVdeg team and thanks for this effort!

I might be able to suggest a significantly more streamlined way of configuring, building, and testing this repo using a pyproject.toml file. I just want to check if the team is open to this if I can show that it would work.

Some errors in the first few lines of code. Please help

Lines of code -
import os
import pandas as pd

import pvdeg
from pvdeg import DATA_DIR

PSM_FILE = os.path.join(DATA_DIR,'psm3_demo.csv')

WEATHER, META = pvdeg.weather.read(PSM_FILE,'psm')

Error-
Traceback (most recent call last):

File ~\anaconda3\lib\site-packages\spyder_kernels\py3compat.py:356 in compat_exec
exec(code, globals, locals)

File c:\users\saikatghosh\degradation rates\degradation.py:11
import pvdeg

File ~\anaconda3\lib\site-packages\pvdeg\__init__.py:5
from . import cli

File ~\anaconda3\lib\site-packages\pvdeg\cli.py:1
from pvdeg.standards import run_calc_standoff

File ~\anaconda3\lib\site-packages\pvdeg\standards.py:18
from . import weather

File ~\anaconda3\lib\site-packages\pvdeg\weather.py:10
from pvdeg import humidity

File ~\anaconda3\lib\site-packages\pvdeg\humidity.py:7
from numba import jit

File ~\anaconda3\lib\site-packages\numba\__init__.py:55
_ensure_critical_deps()

File ~\anaconda3\lib\site-packages\numba\__init__.py:42 in _ensure_critical_deps
raise ImportError("Numba needs NumPy 1.24 or less")

ImportError: Numba needs NumPy 1.24 or less

Get City State and Country in PSM data

When not doing a geospatial analysis, get the city, state, country, prefecture, county, or other metadata fields filled in.

(https://www.geeksforgeeks.org/get-the-city-state-and-country-names-from-latitude-and-longitude-using-python/)

pip install geopy

# import the module
from geopy.geocoders import Nominatim

# initialize the Nominatim API
geolocator = Nominatim(user_agent="geoapiExercises")

# latitude & longitude input
Latitude = "25.594095"
Longitude = "85.137566"

location = geolocator.reverse(Latitude + "," + Longitude)

# display the result
print(location)

address = location.raw['address']
print(address)

city = address.get('city', '')
state = address.get('state', '')
country = address.get('country', '')
code = address.get('country_code')
zipcode = address.get('postcode')
print('City : ', city)
print('State : ', state)
print('Country : ', country)
print('Zip Code : ', zipcode)

Object Oriented SIM structure

Create a basic object-oriented simulation object to keep track of things like:

  • the database being used. If NSRDB, this will set elements like wind_speed_factor for some of the functions, or calculated humidity instead of provided humidity, etc.
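A minimal sketch of such an object, assuming a dataclass; the class name, attribute names, and the default factor value are all hypothetical:

```python
# Sketch of a simulation object that carries database-specific defaults.
# All names and values below are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class Simulation:
    database: str = "NSRDB"
    options: dict = field(default_factory=dict)

    def __post_init__(self):
        # database-specific defaults, e.g. NSRDB wind speed is reported
        # at a particular height, so downstream functions need a factor
        if self.database == "NSRDB":
            self.options.setdefault("wind_speed_factor", 1.0)
            self.options.setdefault("humidity_source", "calculated")
```

Downstream functions would then read these settings from the object instead of each taking its own database-dependent keyword arguments.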

Need more pytests

We need to incorporate more pytests into the codebase. All major methods must have a pytest. To start, we should go through and make a list of the inadequacies.
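As a sketch of the kind of test we would add: rh_module below is a hypothetical stand-in for a real pvdeg method; real tests would import from pvdeg and compare against known reference values:

```python
# Sketch of pytest-style tests; `rh_module` is a hypothetical placeholder.
import pytest

def rh_module(rh_ambient):
    """Placeholder: returns module-level RH, rejecting unphysical input."""
    if not 0.0 <= rh_ambient <= 100.0:
        raise ValueError("rh_ambient must be within [0, 100] %")
    return rh_ambient

def test_rh_module_in_range():
    # outputs should stay within physical bounds
    assert 0.0 <= rh_module(50.0) <= 100.0

def test_rh_module_rejects_bad_input():
    # invalid inputs should raise, not silently propagate
    with pytest.raises(ValueError):
        rh_module(150.0)
```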
