
segment-geospatial


A Python package for segmenting geospatial data with the Segment Anything Model (SAM)

Introduction

The segment-geospatial package draws its inspiration from the segment-anything-eo repository authored by Aliaksandr Hancharenka. To facilitate the use of the Segment Anything Model (SAM) for geospatial data, I have developed the segment-anything-py and segment-geospatial Python packages, which are now available on PyPI and conda-forge. My primary objective is to simplify the process of leveraging SAM for geospatial data analysis by enabling users to achieve this with minimal coding effort. The source code of segment-geospatial is adapted from the segment-anything-eo repository; credit for the original version goes to Aliaksandr Hancharenka.

Citations

  • Wu, Q., & Osco, L. (2023). samgeo: A Python package for segmenting geospatial data with the Segment Anything Model (SAM). Journal of Open Source Software, 8(89), 5663, https://doi.org/10.21105/joss.05663

Features

  • Download map tiles from Tile Map Service (TMS) servers and create GeoTIFF files
  • Segment GeoTIFF files using the Segment Anything Model (SAM) and HQ-SAM
  • Segment remote sensing imagery with text prompts
  • Create foreground and background markers interactively
  • Load existing markers from vector datasets
  • Save segmentation results as common vector formats (GeoPackage, Shapefile, GeoJSON)
  • Save input prompts as GeoJSON files
  • Visualize segmentation results on interactive maps
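
A minimal sketch of how these features chain together, combining calls that appear throughout this README (the bounding box, zoom level, and file names below are placeholder values):

from samgeo import SamGeo, tms_to_geotiff

# Download tiles for a small area and mosaic them into a GeoTIFF.
bbox = [-122.2659, 37.8682, -122.2493, 37.8741]  # [west, south, east, north]
tms_to_geotiff(output="satellite.tif", bbox=bbox, zoom=17, source="Satellite")

# Segment the GeoTIFF with SAM and export the masks as vectors.
sam = SamGeo(model_type="vit_h", checkpoint="sam_vit_h_4b8939.pth", sam_kwargs=None)
sam.generate("satellite.tif", "mask.tif")
sam.tiff_to_vector("mask.tif", "mask.gpkg")  # GeoPackage; .shp and .geojson also work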

Installation

Install from PyPI

segment-geospatial is available on PyPI. To install segment-geospatial, run this command in your terminal:

pip install segment-geospatial

Install from conda-forge

segment-geospatial is also available on conda-forge. If you have Anaconda or Miniconda installed on your computer, you can install segment-geospatial using the following commands. It is recommended to create a fresh conda environment for segment-geospatial. The following commands will create a new conda environment named geo and install segment-geospatial and its dependencies:

conda create -n geo python
conda activate geo
conda install -c conda-forge mamba
mamba install -c conda-forge segment-geospatial

If your system has a GPU but the above commands do not install the GPU version of PyTorch, you can force it with the following command:

mamba install -c conda-forge segment-geospatial "pytorch=*=cuda*"
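
After installation, a quick check confirms whether the GPU build of PyTorch is actually in use:

import torch

print(torch.cuda.is_available())  # True means the CUDA build of PyTorch is active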

segment-geospatial has some optional dependencies that are not included in the default conda environment. To install them, run the following command:

mamba install -c conda-forge groundingdino-py segment-anything-fast

Examples

Demos

  • Automatic mask generator

  • Interactive segmentation with input prompts

  • Input prompts from existing files

  • Interactive segmentation with text prompts

Tutorials

Video tutorials are available on my YouTube Channel.

  • Automatic mask generation


  • Using SAM with ArcGIS Pro


  • Interactive segmentation with text prompts


Using SAM with Desktop GIS

Computing Resources

The Segment Anything Model is computationally intensive; a GPU with at least 8 GB of GPU memory is recommended for processing large datasets. You can utilize the free GPU resources provided by Google Colab. Alternatively, you can apply for AWS Cloud Credit for Research, which offers cloud credits to support academic research. If you are in the Greater China region, apply for the AWS Cloud Credit here.
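
Before committing to a large job, a standard PyTorch check shows how much GPU memory is available:

import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(props.name, round(props.total_memory / 1024**3, 1), "GB")  # aim for >= 8 GB
else:
    print("No GPU detected; SAM will run on the CPU, which is much slower.")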

Legal Notice

This repository and its content are provided for educational purposes only. By using the information and code provided, users acknowledge that they are using the APIs and models at their own risk and agree to comply with any applicable laws and regulations. Users who intend to download a large number of image tiles from any basemap are advised to contact the basemap provider to obtain permission before doing so. Unauthorized use of the basemap or any of its components may be a violation of copyright laws or other applicable laws and regulations.

Contributing

Please refer to the contributing guidelines for more information.

Acknowledgements

This project is based upon work partially supported by the National Aeronautics and Space Administration (NASA) under Grant No. 80NSSC22K1742 issued through the Open Source Tools, Frameworks, and Libraries 2020 Program.

This project is also supported by Amazon Web Services (AWS). In addition, this package was made possible by a number of open source projects; credit goes to their developers.


segment-geospatial's Issues

segment and vector outputs are single value

Environment Information

  • samgeo version: segment_geospatial-0.4.0-py2.py3-none-any.whl
  • Python version: 3.1
  • Operating System: Windows 10

Description

Ran the example code provided, both in PyCharm on a local machine and in Google Colab here:
https://colab.research.google.com/github/opengeos/segment-geospatial/blob/main/docs/examples/satellite.ipynb#scrollTo=Gqby0rFINPNR

The output vectors and segments return a single value of 255, not the same output as shown in the example.


What I Did

import os
from samgeo import SamGeo, tms_to_geotiff, get_basemaps

bbox = [-122.0142108, 37.0539328, -122.0129377, 37.0554187]

image = 'satellite_v6.tif'
shapefile = 'segment_v6.shp'
mask = 'segment_v6.tif'
tms_to_geotiff(output=image, bbox=bbox, zoom=20, source="Satellite", overwrite=True)

# Initialize the SAM class
out_dir = os.path.join(os.path.expanduser("~"), "Downloads")
checkpoint = os.path.join(out_dir, "sam_vit_h_4b8939.pth")

sam = SamGeo(
    model_type="vit_h",
    checkpoint=checkpoint,
    sam_kwargs=None,
)

# Segment the image. Set batch=True to segment the image in batches;
# this is useful for large images that cannot fit in memory.
print('Generating segments...')
sam.generate(
    image, mask, batch=True, foreground=True, erosion_kernel=(3, 3), mask_multiplier=255
)

# Polygonize the raster data and save the segmentation results as a shapefile.
print('Creating shapefile...')
sam.tiff_to_vector(mask, shapefile)

Returns same results as in the Colab example.
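
A hedged note on this behavior: with mask_multiplier=255 the output raster is written as a binary mask (0 for background, 255 for foreground), so a single value of 255 is plausible by design rather than a bug. The unique=True option, which appears in the batch-processing issue below, is meant to assign each mask its own value instead:

# Hedged suggestion: unique=True gives each object its own pixel value
# (see the "Automatic Mask Generator Batch Processing" issue below for a
# possible interaction with batch=True).
sam.generate(image, mask, batch=True, foreground=True, unique=True)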

Segment non-geospatial raster imagery in png or tif format

Description

I'd like to segment non-geospatial raster imagery, or have segment-geospatial handle dummy georeferencing for PNG images so that they can be loaded into the interactive segmentation UI. Currently, PNGs can be loaded into the image comparison tool from leafmap, but when trying to start the interactive UI, non-georeferenced imagery does not show up after calling:

sam.set_image("path/to/png")
sam.show_map()

I've tried to georeference an image and plot it, but because I'm using a remote JupyterLab instance, I see a flicker and then a map at full zoom. Following the suggestion of including a localtileserver config:

import os
os.environ['LOCALTILESERVER_CLIENT_PREFIX'] = 'proxy/{port}'

# m is an existing leafmap.Map; result['HR_path'] points to the PNG
m.layers[-1].visible = False
m.add_raster(result['HR_path'].split(".png")[0] + ".tif", center=[origin[1], origin[0]], layer_name="Image")
m

unfortunately didn't help and produced the same result.
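
One possible workaround, sketched here under the assumption that rasterio and Pillow are available: assign a dummy georeference to the PNG so it can be written as a GeoTIFF and loaded like any other raster. The CRS and transform below are arbitrary placeholders.

import numpy as np
import rasterio
from rasterio.transform import from_origin
from PIL import Image

img = np.array(Image.open("image.png"))  # (height, width, bands)
transform = from_origin(0, 0, 1, 1)      # arbitrary origin, 1-unit pixels

with rasterio.open(
    "image_georef.tif",
    "w",
    driver="GTiff",
    height=img.shape[0],
    width=img.shape[1],
    count=img.shape[2],
    dtype=str(img.dtype),
    crs="EPSG:3857",                     # dummy CRS
    transform=transform,
) as dst:
    dst.write(img.transpose(2, 0, 1))    # rasterio expects (bands, rows, cols)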

ADDING LABELS ON DETECTED OBJECTS

Description

As far as I know, SAM can tell you what objects are in the image, so it would be good to add labels to the detected objects. It would make life much easier.

Another thing: is it possible to make this available as a REST API?

Dockerfile

Description

Since this project requires GDAL, the requirements are fairly complex to install; it would be very helpful to include a Dockerfile.

Customize segmentation class as input

Description

This is an interesting project for utilizing the SAM model. It would be great if there were an option to input segmentation classes (2, 3, ...) to classify the segmentation, similar to the text input. As a result, the output would render those classes with distinct colors.

Source code

Paste your source code here if you have sample code to share.

QGIS problem: installation of SAM

Environment Information

  • samgeo version: 0.8.1
  • Python version: 3.11
  • Operating System: Windows

Description

I have installed the "Geometric Attributes" plugin and followed the documentation, but this problem appears (see screenshot).
Then I tried to install it manually, but in my Windows terminal this was the problem (see screenshot):

What I Did

I tried to solve this by upgrading pip in the terminal, but the problem persists. Then I tried python.exe directly, but I think I am not writing the code correctly (see screenshot).

cuda_installation_issue

Environment Information

  • samgeo version: 0.3.0
  • Python version: 3.9
  • Operating System: Windows 11

Description

I checked the CUDA installation, but it is still not being detected.
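
A hedged first diagnostic, using standard PyTorch calls (a version string ending in +cpu would mean a CPU-only build was installed):

import torch

print(torch.__version__)           # e.g. '2.0.1+cpu' indicates a CPU-only build
print(torch.version.cuda)          # None on CPU-only builds
print(torch.cuda.is_available())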

What I Did

Paste the command(s) you ran and the output.
If there was a crash, please include the traceback here.

Partial coverage of resulting geometries

Hi, thanks for this awesome work!

Not sure if this is a bug or not, but in any case I thought it would be great to discuss result quality with you and other users.

Environment Information

  • samgeo version: 0.2.0
  • Python version: 3.10.8
  • Operating System: Ubuntu-22.04

Description

I'm trying the vit_h model on a small Pléiades data ROI.
The results are better than I expected, but the model seems to struggle with this kind of imagery. My guess is that it is optimized to find object masks vs. image background, but with satellite imagery there is no such concept of background, and I got a strange result. See the attached screenshot below.

I'm currently trying with different SAM parameters.

What I Did

I followed the example notebook:

sam = SamGeo(
    checkpoint="models/sam_vit_h_4b8939.pth",
    model_type="vit_h",
    device="cuda",
    erosion_kernel=(3, 3),
    mask_multiplier=255,
    sam_kwargs=None
)

mask = 'segment.tiff'
sam.generate(image, mask)

Here is the result (screenshot attached):

Any ideas on what could explain this huge rectangular hole?
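
A hedged starting point for that tuning: the sam_kwargs dictionary is passed through to SAM's automatic mask generator, and its crop options in particular can help recover regions missed at full resolution. The values below are standard SamAutomaticMaskGenerator arguments with illustrative settings, not recommendations from the maintainers:

sam_kwargs = {
    "points_per_side": 32,
    "pred_iou_thresh": 0.86,
    "stability_score_thresh": 0.92,
    "crop_n_layers": 1,                   # also run SAM on image crops
    "crop_n_points_downscale_factor": 2,
    "min_mask_region_area": 100,
}
sam = SamGeo(
    checkpoint="models/sam_vit_h_4b8939.pth",
    model_type="vit_h",
    device="cuda",
    sam_kwargs=sam_kwargs,
)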

Cannot run segment-geospatial

Environment Information

  • samgeo version: 0.8.0
  • Python version: 3.10
  • Operating System: Windows 11

Description

When I ran sam = LangSAM(), the error "name 'hf_hub_download' is not defined" appeared, and from the command-line output it seems that the GroundingDINO install failed...

What I Did

Paste the command(s) you ran and the output.
If there was a crash, please include the traceback here.

Execution in non-interactive environment

I am trying to make samgeo work outside the context of a Python notebook. I already subclassed SamGeo and overrode the tiff_to_tiff method to remove the tqdm wrapper from the sample_grid iterator. Even so, the process freezes and I am still unable to properly debug the issue. Are there any other Jupyter-specific methods/classes used?

(I am developing a QGIS wrapper here.)

Error when saving output: 'Map' object has no attribute 'file_control' in V0.6.0 - Generating object masks from input prompts with SAM

line 1951 in /samgeo/common.py - https://github.com/opengeos/segment-geospatial/pull/44/files

/content/drive/MyDrive/samgeo/common.py in save_button_click(change)
   1980             m.file_control = file_control
   1981         else:
-> 1982             if m.file_control in m.controls:
   1983                 m.remove_control(m.file_control)
   1984                 delattr(m, "file_control")

AttributeError: 'Map' object has no attribute 'file_control'
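
A hedged sketch of the fix implied by the traceback: guard the attribute access before touching m.file_control.

# Sketch only, not the actual patched code in samgeo/common.py:
if hasattr(m, "file_control") and m.file_control in m.controls:
    m.remove_control(m.file_control)
    delattr(m, "file_control")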

How to speed up model inference?

Description

In my experiment, it takes 49 hours to infer an image with a size of 50000×50000. How can I speed this up?

sam.generate(
    image, mask, batch=True, foreground=True, erosion_kernel=None, mask_multiplier=255
)
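
A few hedged levers, drawn from elsewhere in this README: the optional segment-anything-fast dependency (see the installation section) targets faster inference, and the sam_kwargs passed to SamGeo control how many prompt points are evaluated at once. The values below are illustrative, not benchmarked recommendations; checkpoint is assumed to be defined as in the other examples here:

sam_kwargs = {
    "points_per_side": 32,   # fewer points per side means fewer masks but faster runs
    "points_per_batch": 64,  # larger batches amortize GPU overhead if memory allows
}
sam = SamGeo(model_type="vit_h", checkpoint=checkpoint, sam_kwargs=sam_kwargs)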

Function text_sam_gui() is not defined

Environment Information

  • samgeo version: 0.8.0
  • Python version: 3.10.4
  • Operating System: Windows 10

Description

The function text_sam_gui() is not defined in text_sam.py.

Traceback (most recent call last):
  in <module>
❱ 1 sam.show_map()

c:\Users\MyComputer\.conda\envs\geo\lib\site-packages\samgeo\text_sam.py:419 in show_map
    416         Returns:
    417             leafmap.Map: The map object.
    418         """
❱   419         return text_sam_gui(self, basemap=basemap, out_dir=out_dir, **kwargs)
    420
    421
    422 def main():
NameError: name 'text_sam_gui' is not defined

What I Did

sam.show_map()

Integrated into QGIS

Hi, it's really great work!
Could you please tell me how to integrate this work into the QGIS software?
Many thanks!

Automatic Mask Generator Batch Processing

Environment Information

Testing on the original notebook examples here, here and here.

Description

I'm not sure if this is a design decision or a bug, but feel free to close if it is meant to behave this way. Anyway, I have noticed these issues:

  1. The Automatic Mask Generator of SAM does not work with the batch function set to True.
  2. The sam.generate with an erosion_kernel set to None results in one mask for the entire region.
  3. For the notebook example, sam_model_registry[model_type] does not automatically download checkpoints. Fixed with common.download_checkpoint(url, checkpoint).

What I Did

# Automatic Mask Generator example from notebook
sam.generate(image, output='masks.tif', foreground=True, unique=True, batch=True)

# Satellite example from notebook
sam = SamGeo(
    model_type='vit_h',
    checkpoint=checkpoint,
    erosion_kernel=None,
    mask_multiplier=255,
    sam_kwargs=None,
)

Input points poor performance

Thank you for your speedy work on this. Impressive.

I'm posting to check whether there is a known issue with the point input feature. I have tried to run the example with one point and with several points, and the output is very poor. I also experimented with different sam_kwargs, but that didn't improve it either.

The automatic mask generator example works fine for me.

GPU out of memory

Environment Information

  • samgeo version: latest version using pip
  • Python version: 3.8
  • Operating System: Ubuntu

Description

Trying to run auto segmentation using these model parameters:

device = 'cuda:0'
sam_kwargs = {
    "points_per_side": 32,
    "points_per_batch": 32,
}

What I Did

Here is the output:
OutOfMemoryError: CUDA out of memory. Tried to allocate 21.16 GiB (GPU 0; 31.75 GiB total capacity; 26.52 GiB already allocated; 3.37 GiB free; 27.30 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
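
Two hedged mitigations suggested by the error message itself: cap allocator fragmentation via PYTORCH_CUDA_ALLOC_CONF, and lower points_per_batch so each forward pass allocates less memory.

import os

# Must be set before PyTorch first initializes CUDA.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

sam_kwargs = {
    "points_per_side": 32,
    "points_per_batch": 8,   # smaller batches trade speed for memory
}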

UAV dataset

I am using it for UAV RGB and multispectral datasets.

It works fine with RGB, but multispectral does not work. Please provide some script ideas for using UAV orthomosaics (.tif).
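
SAM expects 3-band (RGB) input, so one hedged preprocessing step is to pick three bands from the multispectral orthomosaic and write them out as an RGB GeoTIFF first (assumes rasterio; the band indices depend on your sensor):

import rasterio

with rasterio.open("ortho_multispectral.tif") as src:
    profile = src.profile.copy()
    rgb = src.read([3, 2, 1])  # placeholder band order; adjust for your sensor

profile.update(count=3)
with rasterio.open("ortho_rgb.tif", "w", **profile) as dst:
    dst.write(rgb)  # rescale to 8-bit first if the source is 16-bit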

HELP: Jupyter log output: assert 0 < size <= self._size when I use leafmap

When I use leafmap to draw a rectangle, Jupyter usually outputs this error:
Traceback (most recent call last):
  File "/sdisk/shome/speed/anaconda3/envs/samgeoNew/lib/python3.9/asyncio/events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "/sdisk/shome/speed/anaconda3/envs/samgeoNew/lib/python3.9/site-packages/tornado/platform/asyncio.py", line 206, in _handle_events
    handler_func(fileobj, events)
  File "/sdisk/shome/speed/anaconda3/envs/samgeoNew/lib/python3.9/site-packages/tornado/iostream.py", line 702, in _handle_events
    self._handle_write()
  File "/sdisk/shome/speed/anaconda3/envs/samgeoNew/lib/python3.9/site-packages/tornado/iostream.py", line 976, in _handle_write
    self._write_buffer.advance(num_bytes)
  File "/sdisk/shome/speed/anaconda3/envs/samgeoNew/lib/python3.9/site-packages/tornado/iostream.py", line 182, in advance
    assert 0 < size <= self._size
AssertionError

Error while downloading Google Satellite imagery of a large area (around 5 km by 5 km)


MemoryError                               Traceback (most recent call last)
File ~\miniconda3\envs\gee\lib\site-packages\samgeo\common.py:523, in tms_to_geotiff(output, bbox, zoom, resolution, source, to_cog, return_image, overwrite, quiet, **kwargs)
    522 try:
--> 523     image = draw_tile(
    524         source, south, west, north, east, zoom, output, quiet, **kwargs
    525     )
    526     if return_image:

File ~\miniconda3\envs\gee\lib\site-packages\samgeo\common.py:512, in tms_to_geotiff.<locals>.draw_tile(source, lat0, lon0, lat1, lon1, zoom, filename, quiet, **kwargs)
    511 for band in range(imgbands):
--> 512     array = numpy.array(img.getdata(band), dtype="u8")
    513     array = array.reshape((img.size[1], img.size[0]))

MemoryError: Unable to allocate 6.89 GiB for an array with shape (924972736,) and data type uint64

During handling of the above exception, another exception occurred:

Exception                                 Traceback (most recent call last)
Cell In[7], line 2
      1 # Download image as GeoTiff
----> 2 tms_to_geotiff(output=output_image, bbox=bbox, resolution=spatial_resolution, source=data_source, quiet=False)

File ~\miniconda3\envs\gee\lib\site-packages\samgeo\common.py:531, in tms_to_geotiff(output, bbox, zoom, resolution, source, to_cog, return_image, overwrite, quiet, **kwargs)
    529     image_to_cog(output, output)
    530 except Exception as e:
--> 531     raise Exception(e)

Exception: Unable to allocate 6.89 GiB for an array with shape (924972736,) and data type uint64
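
A hedged workaround: split the large bounding box into a grid of smaller boxes and download each one separately, then mosaic the tiles afterward with standard GDAL tools. This assumes the [west, south, east, north] bbox order used elsewhere in this README:

from samgeo import tms_to_geotiff

west, south, east, north = bbox  # the same bbox that failed above
n = 4                            # 4x4 grid; increase for larger areas
for i in range(n):
    for j in range(n):
        sub = [
            west + (east - west) * i / n,
            south + (north - south) * j / n,
            west + (east - west) * (i + 1) / n,
            south + (north - south) * (j + 1) / n,
        ]
        tms_to_geotiff(output=f"tile_{i}_{j}.tif", bbox=sub, zoom=18,
                       source="Satellite", overwrite=True)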

RuntimeError: Error(s) in loading state_dict for Sam for models vit_b and vit_l

Model vit_h seems to be working fine but vit_b and vit_l throw an error.

This is my code:

models = {
    'vit_b': 'sam_vit_b_01ec64.pth',
    'vit_h': 'sam_vit_h_4b8939.pth',
    'vit_l': 'sam_vit_l_0b3195.pth',
}

model_type = 'vit_b'


mask = infile.replace('.tif', f'_{model_type}_segmented.tif')
checkpoint = os.path.join(out_dir, models[model_type])

sam = SamGeo(
    checkpoint = checkpoint,
    model_type = model_type,
    device = device,
    sam_kwargs = None,
)
This is the error I get:

RuntimeError                              Traceback (most recent call last)
Cell In [10], line 13
     10 mask = infile.replace('.tif', f'_{model_type}_segmented.tif')
     11 checkpoint = os.path.join(out_dir, models[model_type])
---> 13 sam = SamGeo(
     14     checkpoint = checkpoint,
     15     model_type = model_type,
     16     device = device,
     17     sam_kwargs = None,
     18 )

File C:\Python39\lib\site-packages\samgeo\samgeo.py:87, in SamGeo.__init__(self, model_type, checkpoint, automatic, device, sam_kwargs)
     84 self.logits = None
     86 # Build the SAM model
---> 87 self.sam = sam_model_registry[self.model_type](checkpoint=self.checkpoint)
     88 self.sam.to(device=self.device)
     89 # Use optional arguments for fine-tuning the SAM model

File C:\Python39\lib\site-packages\segment_anything\build_sam.py:38, in build_sam_vit_b(checkpoint)
     37 def build_sam_vit_b(checkpoint=None):
---> 38     return _build_sam(
     39         encoder_embed_dim=768,
     40         encoder_depth=12,
     41         encoder_num_heads=12,
     42         encoder_global_attn_indexes=[2, 5, 8, 11],
     43         checkpoint=checkpoint,
     44     )

File C:\Python39\lib\site-packages\segment_anything\build_sam.py:106, in _build_sam(encoder_embed_dim, encoder_depth, encoder_num_heads, encoder_global_attn_indexes, checkpoint)
    104 with open(checkpoint, "rb") as f:
    105     state_dict = torch.load(f)
--> 106 sam.load_state_dict(state_dict)
    107 return sam

File C:\Python39\lib\site-packages\torch\nn\modules\module.py:2041, in Module.load_state_dict(self, state_dict, strict)
   2036     error_msgs.insert(
   2037         0, 'Missing key(s) in state_dict: {}. '.format(
   2038             ', '.join('"{}"'.format(k) for k in missing_keys)))
   2040 if len(error_msgs) > 0:
-> 2041     raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
   2042         self.__class__.__name__, "\n\t".join(error_msgs)))
   2043 return _IncompatibleKeys(missing_keys, unexpected_keys)

RuntimeError: Error(s) in loading state_dict for Sam:
Unexpected key(s) in state_dict: "image_encoder.blocks.12.norm1.weight", "image_encoder.blocks.12.norm1.bias", "image_encoder.blocks.12.attn.rel_pos_h", ... (unexpected keys continue through image_encoder.blocks.31).
size mismatch for image_encoder.pos_embed: copying a param with shape torch.Size([1, 64, 64, 1280]) from checkpoint, the shape in current model is torch.Size([1, 64, 64, 768]).
size mismatch for image_encoder.patch_embed.proj.weight: copying a param with shape torch.Size([1280, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([768, 3, 16, 16]).
size mismatch for image_encoder.patch_embed.proj.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
... (equivalent 1280-vs-768 size mismatches repeat for every parameter of encoder blocks 0 through 11; the log is truncated here.)
size mismatch for image_encoder.blocks.11.attn.rel_pos_h: copying a param with shape torch.Size([27, 80]) from checkpoint, the shape in current model is torch.Size([127, 64]).
size mismatch for image_encoder.blocks.11.attn.rel_pos_w: copying a param with shape torch.Size([27, 80]) from checkpoint, the shape in current model is torch.Size([127, 64]).
size mismatch for image_encoder.blocks.11.attn.qkv.weight: copying a param with shape torch.Size([3840, 1280]) from checkpoint, the shape in current model is torch.Size([2304, 768]).
size mismatch for image_encoder.blocks.11.attn.qkv.bias: copying a param with shape torch.Size([3840]) from checkpoint, the shape in current model is torch.Size([2304]).
size mismatch for image_encoder.blocks.11.attn.proj.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for image_encoder.blocks.11.attn.proj.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for image_encoder.blocks.11.norm2.weight: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for image_encoder.blocks.11.norm2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for image_encoder.blocks.11.mlp.lin1.weight: copying a param with shape torch.Size([5120, 1280]) from checkpoint, the shape in current model is torch.Size([3072, 768]).
size mismatch for image_encoder.blocks.11.mlp.lin1.bias: copying a param with shape torch.Size([5120]) from checkpoint, the shape in current model is torch.Size([3072]).
size mismatch for image_encoder.blocks.11.mlp.lin2.weight: copying a param with shape torch.Size([1280, 5120]) from checkpoint, the shape in current model is torch.Size([768, 3072]).
size mismatch for image_encoder.blocks.11.mlp.lin2.bias: copying a param with shape torch.Size([1280]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for image_encoder.neck.0.weight: copying a param with shape torch.Size([256, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 768, 1, 1]).

`show_masks` and `show_anns` do not work in batch mode

Environment Information

  • samgeo version: 0.5.0
  • Python version: 3.10.11
  • Operating System: Ubuntu (Google Colab)

Description

I ran the example notebook in Colab (GPU) with a CUDA device. SAM can generate output in batch mode when batch=True is set:

sam.generate(image, output=mask, foreground=True, unique=True, batch=True)

However, this is not compatible with show_masks and show_anns as used in the example.

What I Did

I ran the code. For show_masks, the output was:

No masks found. Please run generate() first.

and for show_anns, the output was:

Please run generate() first.
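Until batch mode populates the in-memory masks, a possible workaround is to visualize the GeoTIFF that the batch run wrote, instead of calling show_masks()/show_anns(). A sketch, assuming `mask` is the output path passed to generate():

import rasterio
import matplotlib.pyplot as plt

# Read the first band of the batch-generated mask and display it.
with rasterio.open(mask) as src:
    plt.imshow(src.read(1), cmap="tab20")
    plt.axis("off")
    plt.show()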

CUDA error

Environment Information

  • samgeo version: 0.8.1
  • Python version: 3.10
  • Operating System: Windows 10

Description

After I installed Grounding DINO and then ran the sam_lang code, an error appeared: "CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)". I also found that the source code uses rasterio to read the whole image into memory; if the image is too large, it may run out of memory.

I hope Professor Wu can give me some suggestions!
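On the memory point: one standard way to avoid loading the whole image is rasterio's windowed reading, sketched below with a hypothetical file name; segmentation would then run tile by tile:

import rasterio

with rasterio.open("large_image.tif") as src:
    for _, window in src.block_windows(1):
        tile = src.read(window=window)  # (bands, rows, cols) for this block
        # ... run segmentation on `tile` and write results per window ...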


Error creating Transformer from CRS.

I only ran the code from this link: https://samgeo.gishub.org/examples/satellite/
and this code cell did not work:

style = {
    "color": "#3388ff",
    "weight": 2,
    "fillColor": "#7c4185",
    "fillOpacity": 0.5,
}
m.add_vector(vector, layer_name="LAL") #, style=style)
m

Do you know why this happened? Thanks.

The above code raised the errors below:

ProjError                                 Traceback (most recent call last)
Cell In[18], line 7
      1 style = {
      2     "color": "#3388ff",
      3     "weight": 2,
      4     "fillColor": "#7c4185",
      5     "fillOpacity": 0.5,
      6 }
----> 7 m.add_vector(vector, layer_name="LAL") #, style=style)
      8 m

File ~/miniconda3/envs/geemap/lib/python3.10/site-packages/leafmap/leafmap.py:2622, in Map.add_vector(self, filename, layer_name, bbox, mask, rows, style, hover_style, style_callback, fill_colors, info_mode, encoding, **kwargs)
   2611     self.add_geojson(
   2612         filename,
   2613         layer_name,
   (...)
   2619         encoding,
   2620     )
   2621 else:
-> 2622     geojson = vector_to_geojson(
   2623         filename,
   2624         bbox=bbox,
   2625         mask=mask,
   2626         rows=rows,
   2627         epsg="4326",
   2628         **kwargs,
   2629     )
   2631     self.add_geojson(
   2632         geojson,
   2633         layer_name,
   (...)
   2639         encoding,
   2640     )

File ~/miniconda3/envs/geemap/lib/python3.10/site-packages/leafmap/common.py:1555, in vector_to_geojson(filename, out_geojson, bbox, mask, rows, epsg, encoding, **kwargs)
   1551 else:
   1552     df = gpd.read_file(
   1553         filename, bbox=bbox, mask=mask, rows=rows, encoding=encoding, **kwargs
   1554     )
-> 1555 gdf = df.to_crs(epsg=epsg)
   1557 if out_geojson is not None:
   1558     if not out_geojson.lower().endswith(".geojson"):

File ~/miniconda3/envs/geemap/lib/python3.10/site-packages/geopandas/geodataframe.py:1364, in GeoDataFrame.to_crs(self, crs, epsg, inplace)
   1362 else:
   1363     df = self.copy()
-> 1364 geom = df.geometry.to_crs(crs=crs, epsg=epsg)
   1365 df.geometry = geom
   1366 if not inplace:

File ~/miniconda3/envs/geemap/lib/python3.10/site-packages/geopandas/geoseries.py:1124, in GeoSeries.to_crs(self, crs, epsg)
   1047 def to_crs(self, crs=None, epsg=None):
   1048     """Returns a ``GeoSeries`` with all geometries transformed to a new
   1049     coordinate reference system.
   1050 
   (...)
   1121 
   1122     """
   1123     return GeoSeries(
-> 1124         self.values.to_crs(crs=crs, epsg=epsg), index=self.index, name=self.name
   1125     )

File ~/miniconda3/envs/geemap/lib/python3.10/site-packages/geopandas/array.py:777, in GeometryArray.to_crs(self, crs, epsg)
    774 if self.crs.is_exact_same(crs):
    775     return self
--> 777 transformer = Transformer.from_crs(self.crs, crs, always_xy=True)
    779 new_data = vectorized.transform(self.data, transformer.transform)
    780 return GeometryArray(new_data, crs=crs)

File ~/miniconda3/envs/geemap/lib/python3.10/site-packages/pyproj/transformer.py:573, in Transformer.from_crs(crs_from, crs_to, skip_equivalent, always_xy, area_of_interest, authority, accuracy, allow_ballpark)
    566 if skip_equivalent:
    567     warnings.warn(
    568         "skip_equivalent is deprecated.",
    569         DeprecationWarning,
    570         stacklevel=2,
    571     )
--> 573 return Transformer(
    574     TransformerFromCRS(
    575         cstrencode(CRS.from_user_input(crs_from).srs),
    576         cstrencode(CRS.from_user_input(crs_to).srs),
    577         always_xy=always_xy,
    578         area_of_interest=area_of_interest,
    579         authority=authority,
    580         accuracy=accuracy if accuracy is None else str(accuracy),
    581         allow_ballpark=allow_ballpark,
    582     )
    583 )

File ~/miniconda3/envs/geemap/lib/python3.10/site-packages/pyproj/transformer.py:310, in Transformer.__init__(self, transformer_maker)
    304     raise ProjError(
    305         "Transformer must be initialized using: "
    306         "'from_crs', 'from_pipeline', or 'from_proj'."
    307     )
    309 self._local = TransformerLocal()
...
    105     )

File pyproj/_transformer.pyx:1001, in pyproj._transformer._Transformer.from_crs()

ProjError: Error creating Transformer from CRS.
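This ProjError is often an environment issue rather than a samgeo bug: pyproj can end up reading PROJ data files from a different installation (common in mixed WSL/conda setups). A hedged diagnostic sketch, with a hypothetical path:

import pyproj

# Where pyproj is looking for proj.db; a path outside the active conda env
# is a common cause of "Error creating Transformer from CRS."
print(pyproj.datadir.get_data_dir())

# If it points elsewhere, redirect it to the env's PROJ data (hypothetical path):
# pyproj.datadir.set_data_dir("/home/<user>/miniconda3/envs/geemap/share/proj")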

text_prompts.ipynb fails on Initialize LangSAM class

Environment Information

  • samgeo version: 0.8.0
  • Python version: 3.9.16
  • Operating System: Windows 11
  • ArcGIS Pro conda env set up using this repo's instructions

Description

Tried to run through the notebook. At step "Initialize LangSAM class" it throws a NameError on hf_hub_download.

[screenshot of the NameError traceback]

What I Did

Restarted the kernel and ArcGIS Pro and tried again.
Installed huggingface_hub in the env via mamba, restarted the kernel, and tried again.

It looks like there's an issue with importing huggingface_hub in the text_sam.py module. Should groundingdino and huggingface_hub be added to requirements.txt, etc.?
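A quick way to confirm where the failure sits, mirroring the import that text_sam.py apparently relies on:

# If this import fails in the same kernel, the NameError on hf_hub_download
# inside text_sam.py is expected; if it succeeds, the problem is the module's
# own import handling.
from huggingface_hub import hf_hub_download
print(hf_hub_download.__module__)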

How to use

Hi, it's great! I have already installed this package. May I know how to use it? Do you have detailed documentation? Thank you!
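The tutorials at https://samgeo.gishub.org are the best starting point. For orientation, here is a minimal end-to-end sketch assembled from the examples in this thread (the bounding box, zoom level, and file names are illustrative):

from samgeo import SamGeo, tms_to_geotiff

# Download basemap imagery for a small area ([minx, miny, maxx, maxy], illustrative).
bbox = [-95.3704, 29.6762, -95.368, 29.6775]
tms_to_geotiff(output="image.tif", bbox=bbox, zoom=19, source="Satellite")

# Initialize SAM with a locally downloaded checkpoint, segment, and vectorize.
sam = SamGeo(model_type="vit_h", checkpoint="sam_vit_h_4b8939.pth")
sam.generate("image.tif", output="mask.tif", foreground=True, unique=True)
sam.tiff_to_vector("mask.tif", "mask.shp")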

Generating object masks from input prompts with SAM

Environment Information

  • samgeo version: 0.5
  • Python version:
  • Operating System:

Description

While generating object masks from input prompts with SAM in the interactive part, I ran this line as described:
m = sam.show_map()
m
I got this error:

[screenshot of the error]

LangSam class downloads checkpoints even when they are already present on the filesystem

Environment Information

  • samgeo version:0.8.1
  • Python version:3.8
  • Operating System:Ubuntu

Description

I am using the SamGeo class as well as the LangSam class. The SamGeo class requires you to download the checkpoints yourself and initialize it with the path, which I am doing. It would be nice if the LangSam class could also use a local path instead of downloading its own duplicate of the checkpoints.

Great library!
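A hedged interim workaround, assuming the downloads go through hf_hub_download (as the huggingface_hub import in text_sam.py suggests): point the Hugging Face cache at a persistent directory so the weights are at least downloaded only once. The cache path below is hypothetical:

import os

# Must be set before huggingface_hub is imported, since it reads the cache
# location at import time.
os.environ["HF_HOME"] = "/data/hf_cache"  # hypothetical persistent directory

from samgeo.text_sam import LangSAM
sam = LangSAM()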

"ImportError: libLerc.so.4: cannot open shared object file: No such file or directory" when importing samgeo

Environment Information

  • segment-anything-py version: 1.0
  • segment-geospatial version: 0.5.0
  • Python version: 3.8.16
  • torch version: 2.0.0
  • GDAL version: 3.5.3
  • rasterio version: 1.3.3

Description

I tried to run the segmentation tutorial. I think I'm in some kind of dependency hell, as I've been trying to run this for a few days now, and depending on the order of the installs in the Dockerfile, I get different errors.

I work from a Docker container; the Dockerfile looks like this:

FROM nvidia/cuda:11.7.1-devel-ubuntu22.04
ENV CUDA_HOME=/usr/local/cuda-11.7
ENV PATH=/usr/local/cuda-11.7/bin:$PATH
ENV LD_LIBRARY_PATH=/usr/local/cuda-11.7/lib64:$LD_LIBRARY_PATH

RUN apt-get update && apt-get install wget ffmpeg libsm6 libxext6 -y

ENV CONDA_DIR /opt/conda
RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh && \
    /bin/bash ~/miniconda.sh -b -p /opt/conda

ENV PATH=$CONDA_DIR/bin:$PATH

RUN conda install -n base mamba -c conda-forge
RUN mamba create -n geo python==3.8 -c conda-forge
RUN conda install -y -n geo pytorch==2.0.0 torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
RUN mamba install -n geo segment-geospatial -c conda-forge -vv
RUN mamba install -n geo leafmap geopandas localtileserver -c conda-forge
RUN conda install -n geo gdal -c conda-forge

RUN conda init bash

What I Did

I got stuck at the imports with the following error:

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
Cell In[4], line 1
----> 1 from samgeo import SamGeo, tms_to_geotiff, get_basemaps

File /opt/conda/envs/geo/lib/python3.8/site-packages/samgeo/__init__.py:8
      4 __email__ = '[email protected]'
      5 __version__ = '0.5.0'
----> 8 from .samgeo import *

File /opt/conda/envs/geo/lib/python3.8/site-packages/samgeo/samgeo.py:11
      8 import numpy as np
      9 from segment_anything import sam_model_registry, SamAutomaticMaskGenerator, SamPredictor
---> 11 from .common import *
     14 class SamGeo:
     15     """The main class for segmenting geospatial data with the Segment Anything Model (SAM). See
     16     https://github.com/facebookresearch/segment-anything for details.
     17     """

File /opt/conda/envs/geo/lib/python3.8/site-packages/samgeo/common.py:13
     11 import shapely
     12 import pyproj
---> 13 import rasterio
     14 import geopandas as gpd
     15 import matplotlib.pyplot as plt

File /opt/conda/envs/geo/lib/python3.8/site-packages/rasterio/__init__.py:28
     24                     os.add_dll_directory(p)
     27 from rasterio._show_versions import show_versions
---> 28 from rasterio._version import gdal_version, get_geos_version, get_proj_version
     29 from rasterio.crs import CRS
     30 from rasterio.drivers import driver_from_extension, is_blacklisted

ImportError: libLerc.so.4: cannot open shared object file: No such file or directory

ValueError: Must pass either crs or epsg

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[11], line 2
      1 shapefile = 'segment_gpu_new.shp'
----> 2 sam.tiff_to_vector(mask, shapefile)

File ~\Desktop\segment-geospatial\samgeo\samgeo.py:172, in SamGeo.tiff_to_vector(self, tiff_path, output, simplify_tolerance, **kwargs)
    169         i["geometry"] = i["geometry"].simplify(tolerance=simplify_tolerance)
    171 gdf = gpd.GeoDataFrame.from_features(fc)
--> 172 gdf.set_crs(epsg=src.crs.to_epsg(), inplace=True)
    173 gdf.to_file(output, **kwargs)

File ~\Anaconda3\envs\segment-geospatial\lib\site-packages\geopandas\geodataframe.py:1279, in GeoDataFrame.set_crs(self, crs, epsg, inplace, allow_override)
   1277 else:
   1278     df = self
-> 1279 df.geometry = df.geometry.set_crs(
   1280     crs=crs, epsg=epsg, allow_override=allow_override, inplace=True
   1281 )
   1282 return df

File ~\Anaconda3\envs\segment-geospatial\lib\site-packages\geopandas\geoseries.py:1031, in GeoSeries.set_crs(self, crs, epsg, inplace, allow_override)
   1029     crs = CRS.from_epsg(epsg)
   1030 else:
-> 1031     raise ValueError("Must pass either crs or epsg.")
   1033 if not allow_override and self.crs is not None and not self.crs == crs:
   1034     raise ValueError(
   1035         "The GeoSeries already has a CRS which is not equal to the passed "
   1036         "CRS. Specify 'allow_override=True' to allow replacing the existing "
   1037         "CRS without doing any transformation. If you actually want to "
   1038         "transform the geometries, use 'GeoSeries.to_crs' instead."
   1039     )
ValueError: Must pass either crs or epsg.

Hello everyone. Both the mask that was created and the tiff are projected to EPSG:4326 (WGS 84). They can easily be seen on the map in their current projection without any on-the-fly reprojection, and the mask created by the sam.generate() function has the 4326 coordinate system. However, when I try to use the tiff_to_vector or tiff_to_gpkg functions, I get a "no crs or epsg is set" error. How can I pass a crs or epsg into these functions?

I implemented the notebook tutorial code with my projected tiff exactly as described.
Thanks in advance.
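tiff_to_vector reads the CRS from the mask file itself (src.crs.to_epsg() in the traceback), so the error suggests the mask GeoTIFF carries no EPSG code even if it displays correctly; note that to_epsg() can also return None for a CRS that has no EPSG code. A hedged workaround sketch that stamps EPSG:4326 onto the mask in place before vectorizing (the file name is hypothetical):

import rasterio
from rasterio.crs import CRS

# Open in update mode and assign a CRS if the file has none.
with rasterio.open("segment_mask.tif", "r+") as dst:
    if dst.crs is None:
        dst.crs = CRS.from_epsg(4326)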

Already installed packages interfering with outputs

Environment Information

  • samgeo version: 0.7.0
  • Python version: 3.10
  • Operating System: Colab

Description

Using the tutorials, I cannot seem to add the raster to the map. I can extract the image, process it, and add the vector layer afterwards, but when I run the following code in any example

m.layers[-1].visible = False
m.add_raster(image, layer_name="Image")
m

I just get the OpenStreetMap baselayer zoomed into the location.

What I Did

Tried restarting the runtime and checking that packages are up to date. When I tried it on another laptop, using the same Google account to log into Colab, it works, and a colleague can also get it to work, so I believe some packages are causing the issue. Included below is the output from %pip install segment-geospatial leafmap localtileserver and %pip install git+https://github.com/opengeos/segment-geospatial.git to see if you can understand what is going on.

Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Collecting segment-geospatial
  Downloading segment_geospatial-0.7.0-py2.py3-none-any.whl (30 kB)
Collecting leafmap
  Downloading leafmap-0.20.3-py2.py3-none-any.whl (1.8 MB)
Collecting localtileserver
  Downloading localtileserver-0.6.4-py3-none-any.whl (19.4 MB)
Collecting segment-anything-py (from segment-geospatial)
  Downloading segment_anything_py-1.0-py3-none-any.whl (40 kB)
Requirement already satisfied: opencv-python in /usr/local/lib/python3.10/dist-packages (from segment-geospatial) (4.7.0.72)
Requirement already satisfied: pycocotools in /usr/local/lib/python3.10/dist-packages (from segment-geospatial) (2.0.6)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.10/dist-packages (from segment-geospatial) (3.7.1)
Collecting onnx (from segment-geospatial)
  Downloading onnx-1.14.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (14.6 MB)
Collecting geopandas (from segment-geospatial)
  Downloading geopandas-0.13.0-py3-none-any.whl (1.1 MB)
Collecting rasterio (from segment-geospatial)
  Downloading rasterio-1.3.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (20.0 MB)
Requirement already satisfied: tqdm in /usr/local/lib/python3.10/dist-packages (from segment-geospatial) (4.65.0)
Requirement already satisfied: gdown in /usr/local/lib/python3.10/dist-packages (from segment-geospatial) (4.6.6)
Collecting xyzservices (from segment-geospatial)
  Downloading xyzservices-2023.5.0-py3-none-any.whl (56 kB)
Collecting pyproj (from segment-geospatial)
  Downloading pyproj-3.5.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (7.7 MB)
Collecting onnxruntime (from segment-geospatial)
  Downloading onnxruntime-1.14.1-cp310-cp310-manylinux_2_27_x86_64.whl (5.0 MB)
Collecting bqplot (from leafmap)
  Downloading bqplot-0.12.39-py2.py3-none-any.whl (1.2 MB)
Collecting colour (from leafmap)
  Downloading colour-0.1.5-py2.py3-none-any.whl (23 kB)
Collecting folium<=0.13.0,>=0.11.0 (from leafmap)
  Downloading folium-0.13.0-py2.py3-none-any.whl (96 kB)
Collecting geojson (from leafmap)
  Downloading geojson-3.0.1-py3-none-any.whl (15 kB)
Collecting ipyevents (from leafmap)
  Downloading ipyevents-2.0.1-py2.py3-none-any.whl (130 kB)
Collecting ipyfilechooser>=0.6.0 (from leafmap)
  Downloading ipyfilechooser-0.6.0-py3-none-any.whl (11 kB)
Collecting ipyleaflet>=0.17.0 (from leafmap)
  Downloading ipyleaflet-0.17.2-py3-none-any.whl (3.7 MB)
Requirement already satisfied: ipywidgets<8.0.0 in /usr/local/lib/python3.10/dist-packages (from leafmap) (7.7.1)
Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from leafmap) (1.22.4)
Requirement already satisfied: pandas in /usr/local/lib/python3.10/dist-packages (from leafmap) (1.5.3)
Collecting pyshp>=2.1.3 (from leafmap)
  Downloading pyshp-2.3.1-py2.py3-none-any.whl (46 kB)
Collecting pystac-client (from leafmap)
  Downloading pystac_client-0.6.1-py3-none-any.whl (30 kB)
Collecting python-box (from leafmap)
  Downloading python_box-7.0.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (3.2 MB)
Collecting scooby (from leafmap)
  Downloading scooby-0.7.2-py3-none-any.whl (16 kB)
Collecting whiteboxgui>=0.6.0 (from leafmap)
  Downloading whiteboxgui-2.3.0-py2.py3-none-any.whl (108 kB)
Requirement already satisfied: click in /usr/local/lib/python3.10/dist-packages (from localtileserver) (8.1.3)
Requirement already satisfied: flask>=2.0.0 in /usr/local/lib/python3.10/dist-packages (from localtileserver) (2.2.4)
Collecting Flask-Caching (from localtileserver)
  Downloading Flask_Caching-2.0.2-py3-none-any.whl (28 kB)
Collecting flask-cors (from localtileserver)
  Downloading Flask_Cors-3.0.10-py2.py3-none-any.whl (14 kB)
Collecting flask-restx>=0.5.0 (from localtileserver)
  Downloading flask_restx-1.1.0-py2.py3-none-any.whl (2.8 MB)
Requirement already satisfied: GDAL in /usr/local/lib/python3.10/dist-packages (from localtileserver) (3.3.2)
Collecting large-image[gdal]>=1.14.1 (from localtileserver)
  Downloading large_image-1.20.6-py3-none-any.whl (71 kB)
Requirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from localtileserver) (2.27.1)
Collecting server-thread (from localtileserver)
  Downloading server_thread-0.2.0-py3-none-any.whl (8.5 kB)
Requirement already satisfied: werkzeug in /usr/local/lib/python3.10/dist-packages (from localtileserver) (2.3.0)
Requirement already satisfied: Jinja2>=3.0 in /usr/local/lib/python3.10/dist-packages (from flask>=2.0.0->localtileserver) (3.1.2)
Requirement already satisfied: itsdangerous>=2.0 in /usr/local/lib/python3.10/dist-packages (from flask>=2.0.0->localtileserver) (2.1.2)
Collecting aniso8601>=0.82 (from flask-restx>=0.5.0->localtileserver)
  Downloading aniso8601-9.0.1-py2.py3-none-any.whl (52 kB)
Requirement already satisfied: jsonschema in /usr/local/lib/python3.10/dist-packages (from flask-restx>=0.5.0->localtileserver) (4.3.3)
Requirement already satisfied: pytz in /usr/local/lib/python3.10/dist-packages (from flask-restx>=0.5.0->localtileserver) (2022.7.1)
Requirement already satisfied: branca>=0.3.0 in /usr/local/lib/python3.10/dist-packages (from folium<=0.13.0,>=0.11.0->leafmap) (0.6.0)
Collecting traittypes<3,>=0.2.1 (from ipyleaflet>=0.17.0->leafmap)
  Downloading traittypes-0.2.1-py2.py3-none-any.whl (8.6 kB)
Requirement already satisfied: ipykernel>=4.5.1 in /usr/local/lib/python3.10/dist-packages (from ipywidgets<8.0.0->leafmap) (5.5.6)
Requirement already satisfied: ipython-genutils~=0.2.0 in /usr/local/lib/python3.10/dist-packages (from ipywidgets<8.0.0->leafmap) (0.2.0)
Requirement already satisfied: traitlets>=4.3.1 in /usr/local/lib/python3.10/dist-packages (from ipywidgets<8.0.0->leafmap) (5.7.1)
Requirement already satisfied: widgetsnbextension~=3.6.0 in /usr/local/lib/python3.10/dist-packages (from ipywidgets<8.0.0->leafmap) (3.6.4)
Requirement already satisfied: ipython>=4.0.0 in /usr/local/lib/python3.10/dist-packages (from ipywidgets<8.0.0->leafmap) (7.34.0)
Requirement already satisfied: jupyterlab-widgets>=1.0.0 in /usr/local/lib/python3.10/dist-packages (from ipywidgets<8.0.0->leafmap) (3.0.7)
Requirement already satisfied: cachetools>=3.0.0 in /usr/local/lib/python3.10/dist-packages (from large-image[gdal]>=1.14.1->localtileserver) (5.3.0)
Requirement already satisfied: palettable in /usr/local/lib/python3.10/dist-packages (from large-image[gdal]>=1.14.1->localtileserver) (3.3.3)
Requirement already satisfied: Pillow in /usr/local/lib/python3.10/dist-packages (from large-image[gdal]>=1.14.1->localtileserver) (8.4.0)
Collecting large-image-source-gdal>=1.20.6 (from large-image[gdal]>=1.14.1->localtileserver)
  Downloading large_image_source_gdal-1.20.6-py3-none-any.whl (21 kB)
Requirement already satisfied: MarkupSafe>=2.1.1 in /usr/local/lib/python3.10/dist-packages (from werkzeug->localtileserver) (2.1.2)
Collecting ipytree (from whiteboxgui>=0.6.0->leafmap)
  Downloading ipytree-0.2.2-py2.py3-none-any.whl (1.3 MB)
Collecting whitebox (from whiteboxgui>=0.6.0->leafmap)
  Downloading whitebox-2.3.1-py2.py3-none-any.whl (72 kB)
Requirement already satisfied: python-dateutil>=2.8.1 in /usr/local/lib/python3.10/dist-packages (from pandas->leafmap) (2.8.2)
Collecting cachelib<0.10.0,>=0.9.0 (from Flask-Caching->localtileserver)
  Downloading cachelib-0.9.0-py3-none-any.whl (15 kB)
Requirement already satisfied: Six in /usr/local/lib/python3.10/dist-packages (from flask-cors->localtileserver) (1.16.0)
Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from gdown->segment-geospatial) (3.12.0)
Requirement already satisfied: beautifulsoup4 in /usr/local/lib/python3.10/dist-packages (from gdown->segment-geospatial) (4.11.2)
Collecting fiona>=1.8.19 (from geopandas->segment-geospatial)
  Downloading Fiona-1.9.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (16.5 MB)
Requirement already satisfied: packaging in /usr/local/lib/python3.10/dist-packages (from geopandas->segment-geospatial) (23.1)
Requirement already satisfied: shapely>=1.7.1 in /usr/local/lib/python3.10/dist-packages (from geopandas->segment-geospatial) (2.0.1)
Requirement already satisfied: certifi in /usr/local/lib/python3.10/dist-packages (from pyproj->segment-geospatial) (2022.12.7)
Requirement already satisfied: contourpy>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->segment-geospatial) (1.0.7)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.10/dist-packages (from matplotlib->segment-geospatial) (0.11.0)
Requirement already satisfied: fonttools>=4.22.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib->segment-geospatial) (4.39.3)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->segment-geospatial) (1.4.4)
Requirement already satisfied: pyparsing>=2.3.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->segment-geospatial) (3.0.9)
Requirement already satisfied: protobuf>=3.20.2 in /usr/local/lib/python3.10/dist-packages (from onnx->segment-geospatial) (3.20.3)
Requirement already satisfied: typing-extensions>=3.6.2.1 in /usr/local/lib/python3.10/dist-packages (from onnx->segment-geospatial) (4.5.0)
Collecting coloredlogs (from onnxruntime->segment-geospatial)
  Downloading coloredlogs-15.0.1-py2.py3-none-any.whl (46 kB)
Requirement already satisfied: flatbuffers in /usr/local/lib/python3.10/dist-packages (from onnxruntime->segment-geospatial) (23.3.3)
Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from onnxruntime->segment-geospatial) (1.11.1)
Collecting pystac>=1.7.0 (from pystac-client->leafmap)
  Downloading pystac-1.7.3-py3-none-any.whl (150 kB)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests->localtileserver) (1.26.15)
Requirement already satisfied: charset-normalizer~=2.0.0 in /usr/local/lib/python3.10/dist-packages (from requests->localtileserver) (2.0.12)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests->localtileserver) (3.4)
Collecting affine (from rasterio->segment-geospatial)
  Downloading affine-2.4.0-py3-none-any.whl (15 kB)
Requirement already satisfied: attrs in /usr/local/lib/python3.10/dist-packages (from rasterio->segment-geospatial) (23.1.0)
Collecting cligj>=0.5 (from rasterio->segment-geospatial)
  Downloading cligj-0.7.2-py3-none-any.whl (7.1 kB)
Collecting snuggs>=1.4.1 (from rasterio->segment-geospatial)
  Downloading snuggs-1.4.7-py3-none-any.whl (5.4 kB)
Collecting click-plugins (from rasterio->segment-geospatial)
  Downloading click_plugins-1.1.1-py2.py3-none-any.whl (7.5 kB)
Requirement already satisfied: setuptools in /usr/local/lib/python3.10/dist-packages (from rasterio->segment-geospatial) (67.7.2)
Requirement already satisfied: torch>=1.7 in /usr/local/lib/python3.10/dist-packages (from segment-anything-py->segment-geospatial) (2.0.1+cu118)
Requirement already satisfied: torchvision>=0.8 in /usr/local/lib/python3.10/dist-packages (from segment-anything-py->segment-geospatial) (0.15.2+cu118)
Collecting uvicorn (from server-thread->localtileserver)
  Downloading uvicorn-0.22.0-py3-none-any.whl (58 kB)
Requirement already satisfied: jupyter-client in /usr/local/lib/python3.10/dist-packages (from ipykernel>=4.5.1->ipywidgets<8.0.0->leafmap) (6.1.12)
Requirement already satisfied: tornado>=4.2 in /usr/local/lib/python3.10/dist-packages (from ipykernel>=4.5.1->ipywidgets<8.0.0->leafmap) (6.3.1)
Collecting jedi>=0.16 (from ipython>=4.0.0->ipywidgets<8.0.0->leafmap)
  Downloading jedi-0.18.2-py2.py3-none-any.whl (1.6 MB)
Requirement already satisfied: decorator in /usr/local/lib/python3.10/dist-packages (from ipython>=4.0.0->ipywidgets<8.0.0->leafmap) (4.4.2)
Requirement already satisfied: pickleshare in /usr/local/lib/python3.10/dist-packages (from ipython>=4.0.0->ipywidgets<8.0.0->leafmap) (0.7.5)
Requirement already satisfied: prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0 in /usr/local/lib/python3.10/dist-packages (from ipython>=4.0.0->ipywidgets<8.0.0->leafmap) (3.0.38)
Requirement already satisfied: pygments in /usr/local/lib/python3.10/dist-packages (from ipython>=4.0.0->ipywidgets<8.0.0->leafmap) (2.14.0)
Requirement already satisfied: backcall in /usr/local/lib/python3.10/dist-packages (from ipython>=4.0.0->ipywidgets<8.0.0->leafmap) (0.2.0)
Requirement already satisfied: matplotlib-inline in /usr/local/lib/python3.10/dist-packages (from ipython>=4.0.0->ipywidgets<8.0.0->leafmap) (0.1.6)
Requirement already satisfied: pexpect>4.3 in /usr/local/lib/python3.10/dist-packages (from ipython>=4.0.0->ipywidgets<8.0.0->leafmap) (4.8.0)
Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from torch>=1.7->segment-anything-py->segment-geospatial) (3.1)
Requirement already satisfied: triton==2.0.0 in /usr/local/lib/python3.10/dist-packages (from torch>=1.7->segment-anything-py->segment-geospatial) (2.0.0)
Requirement already satisfied: cmake in /usr/local/lib/python3.10/dist-packages (from triton==2.0.0->torch>=1.7->segment-anything-py->segment-geospatial) (3.25.2)
Requirement already satisfied: lit in /usr/local/lib/python3.10/dist-packages (from triton==2.0.0->torch>=1.7->segment-anything-py->segment-geospatial) (16.0.5)
Requirement already satisfied: notebook>=4.4.1 in /usr/local/lib/python3.10/dist-packages (from widgetsnbextension~=3.6.0->ipywidgets<8.0.0->leafmap) (6.4.8)
Requirement already satisfied: soupsieve>1.2 in /usr/local/lib/python3.10/dist-packages (from beautifulsoup4->gdown->segment-geospatial) (2.4.1)
Collecting humanfriendly>=9.1 (from coloredlogs->onnxruntime->segment-geospatial)
  Downloading humanfriendly-10.0-py2.py3-none-any.whl (86 kB)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.10/dist-packages (from jsonschema->flask-restx>=0.5.0->localtileserver) (0.19.3)
Requirement already satisfied: PySocks!=1.5.7,>=1.5.6 in /usr/local/lib/python3.10/dist-packages (from requests->localtileserver) (1.7.1)
Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from sympy->onnxruntime->segment-geospatial) (1.3.0)
Collecting h11>=0.8 (from uvicorn->server-thread->localtileserver)
  Downloading h11-0.14.0-py3-none-any.whl (58 kB)
Requirement already satisfied: parso<0.9.0,>=0.8.0 in /usr/local/lib/python3.10/dist-packages (from jedi>=0.16->ipython>=4.0.0->ipywidgets<8.0.0->leafmap) (0.8.3)
Requirement already satisfied: pyzmq>=17 in /usr/local/lib/python3.10/dist-packages (from notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0->leafmap) (23.2.1)
Requirement already satisfied: argon2-cffi in /usr/local/lib/python3.10/dist-packages (from notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0->leafmap) (21.3.0)
Requirement already satisfied: jupyter-core>=4.6.1 in /usr/local/lib/python3.10/dist-packages (from notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0->leafmap) (5.3.0)
Requirement already satisfied: nbformat in /usr/local/lib/python3.10/dist-packages (from notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0->leafmap) (5.8.0)
Requirement already satisfied: nbconvert in /usr/local/lib/python3.10/dist-packages (from notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0->leafmap) (6.5.4)
Requirement already satisfied: nest-asyncio>=1.5 in /usr/local/lib/python3.10/dist-packages (from notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0->leafmap) (1.5.6)
Requirement already satisfied: Send2Trash>=1.8.0 in /usr/local/lib/python3.10/dist-packages (from notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0->leafmap) (1.8.0)
Requirement already satisfied: terminado>=0.8.3 in /usr/local/lib/python3.10/dist-packages (from notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0->leafmap) (0.17.1)
Requirement already satisfied: prometheus-client in /usr/local/lib/python3.10/dist-packages (from notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0->leafmap) (0.16.0)
Requirement already satisfied: ptyprocess>=0.5 in /usr/local/lib/python3.10/dist-packages (from pexpect>4.3->ipython>=4.0.0->ipywidgets<8.0.0->leafmap) (0.7.0)
Requirement already satisfied: wcwidth in /usr/local/lib/python3.10/dist-packages (from prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0->ipython>=4.0.0->ipywidgets<8.0.0->leafmap) (0.2.6)
Requirement already satisfied: platformdirs>=2.5 in /usr/local/lib/python3.10/dist-packages (from jupyter-core>=4.6.1->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0->leafmap) (3.3.0)
Requirement already satisfied: argon2-cffi-bindings in /usr/local/lib/python3.10/dist-packages (from argon2-cffi->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0->leafmap) (21.2.0)
Requirement already satisfied: lxml in /usr/local/lib/python3.10/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0->leafmap) (4.9.2)
Requirement already satisfied: bleach in /usr/local/lib/python3.10/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0->leafmap) (6.0.0)
Requirement already satisfied: defusedxml in /usr/local/lib/python3.10/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0->leafmap) (0.7.1)
Requirement already satisfied: entrypoints>=0.2.2 in /usr/local/lib/python3.10/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0->leafmap) (0.4)
Requirement already satisfied: jupyterlab-pygments in /usr/local/lib/python3.10/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0->leafmap) (0.2.2)
Requirement already satisfied: mistune<2,>=0.8.1 in /usr/local/lib/python3.10/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0->leafmap) (0.8.4)
Requirement already satisfied: nbclient>=0.5.0 in /usr/local/lib/python3.10/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0->leafmap) (0.7.4)
Requirement already satisfied: pandocfilters>=1.4.1 in /usr/local/lib/python3.10/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0->leafmap) (1.5.0)
Requirement already satisfied: tinycss2 in /usr/local/lib/python3.10/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0->leafmap) (1.2.1)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.10/dist-packages (from nbformat->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0->leafmap) (2.16.3)
Requirement already satisfied: cffi>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from argon2-cffi-bindings->argon2-cffi->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0->leafmap) (1.15.1)
Requirement already satisfied: webencodings in /usr/local/lib/python3.10/dist-packages (from bleach->nbconvert->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0->leafmap) (0.5.1)
Requirement already satisfied: pycparser in /usr/local/lib/python3.10/dist-packages (from cffi>=1.0.1->argon2-cffi-bindings->argon2-cffi->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0->leafmap) (2.21)
Installing collected packages: colour, aniso8601, xyzservices, whitebox, traittypes, snuggs, scooby, python-box, pyshp, pyproj, onnx, large-image, jedi, humanfriendly, h11, geojson, cligj, click-plugins, cachelib, affine, uvicorn, rasterio, pystac, large-image-source-gdal, fiona, coloredlogs, server-thread, pystac-client, onnxruntime, geopandas, folium, flask-restx, flask-cors, Flask-Caching, localtileserver, ipytree, ipyleaflet, ipyfilechooser, ipyevents, bqplot, whiteboxgui, leafmap, segment-anything-py, segment-geospatial
  Attempting uninstall: folium
    Found existing installation: folium 0.14.0
    Uninstalling folium-0.14.0:
      Successfully uninstalled folium-0.14.0
Successfully installed Flask-Caching-2.0.2 affine-2.4.0 aniso8601-9.0.1 bqplot-0.12.39 cachelib-0.9.0 click-plugins-1.1.1 cligj-0.7.2 coloredlogs-15.0.1 colour-0.1.5 fiona-1.9.4 flask-cors-3.0.10 flask-restx-1.1.0 folium-0.13.0 geojson-3.0.1 geopandas-0.13.0 h11-0.14.0 humanfriendly-10.0 ipyevents-2.0.1 ipyfilechooser-0.6.0 ipyleaflet-0.17.2 ipytree-0.2.2 jedi-0.18.2 large-image-1.20.6 large-image-source-gdal-1.20.6 leafmap-0.20.3 localtileserver-0.6.4 onnx-1.14.0 onnxruntime-1.14.1 pyproj-3.5.0 pyshp-2.3.1 pystac-1.7.3 pystac-client-0.6.1 python-box-7.0.1 rasterio-1.3.6 scooby-0.7.2 segment-anything-py-1.0 segment-geospatial-0.7.0 server-thread-0.2.0 snuggs-1.4.7 traittypes-0.2.1 uvicorn-0.22.0 whitebox-2.3.1 whiteboxgui-2.3.0 xyzservices-2023.5.0
%pip install git+https://github.com/opengeos/segment-geospatial.git

Output:

Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Collecting git+https://github.com/opengeos/segment-geospatial.git
  Cloning https://github.com/opengeos/segment-geospatial.git to /tmp/pip-req-build-uh5p8m9r
  Running command git clone --filter=blob:none --quiet https://github.com/opengeos/segment-geospatial.git /tmp/pip-req-build-uh5p8m9r
  Resolved https://github.com/opengeos/segment-geospatial.git to commit 39f70871712fceb93deb8eca2acd965e47ded7fe
  Preparing metadata (setup.py) ... done
Requirement already satisfied: segment-anything-py in /usr/local/lib/python3.10/dist-packages (from segment-geospatial==0.7.0) (1.0)
Requirement already satisfied: opencv-python in /usr/local/lib/python3.10/dist-packages (from segment-geospatial==0.7.0) (4.7.0.72)
Requirement already satisfied: pycocotools in /usr/local/lib/python3.10/dist-packages (from segment-geospatial==0.7.0) (2.0.6)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.10/dist-packages (from segment-geospatial==0.7.0) (3.7.1)
Requirement already satisfied: onnx in /usr/local/lib/python3.10/dist-packages (from segment-geospatial==0.7.0) (1.14.0)
Requirement already satisfied: geopandas in /usr/local/lib/python3.10/dist-packages (from segment-geospatial==0.7.0) (0.13.0)
Requirement already satisfied: rasterio in /usr/local/lib/python3.10/dist-packages (from segment-geospatial==0.7.0) (1.3.6)
Requirement already satisfied: tqdm in /usr/local/lib/python3.10/dist-packages (from segment-geospatial==0.7.0) (4.65.0)
Requirement already satisfied: gdown in /usr/local/lib/python3.10/dist-packages (from segment-geospatial==0.7.0) (4.6.6)
Requirement already satisfied: xyzservices in /usr/local/lib/python3.10/dist-packages (from segment-geospatial==0.7.0) (2023.5.0)
Requirement already satisfied: pyproj in /usr/local/lib/python3.10/dist-packages (from segment-geospatial==0.7.0) (3.5.0)
Requirement already satisfied: onnxruntime in /usr/local/lib/python3.10/dist-packages (from segment-geospatial==0.7.0) (1.14.1)
Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from gdown->segment-geospatial==0.7.0) (3.12.0)
Requirement already satisfied: requests[socks] in /usr/local/lib/python3.10/dist-packages (from gdown->segment-geospatial==0.7.0) (2.27.1)
Requirement already satisfied: six in /usr/local/lib/python3.10/dist-packages (from gdown->segment-geospatial==0.7.0) (1.16.0)
Requirement already satisfied: beautifulsoup4 in /usr/local/lib/python3.10/dist-packages (from gdown->segment-geospatial==0.7.0) (4.11.2)
Requirement already satisfied: fiona>=1.8.19 in /usr/local/lib/python3.10/dist-packages (from geopandas->segment-geospatial==0.7.0) (1.9.4)
Requirement already satisfied: packaging in /usr/local/lib/python3.10/dist-packages (from geopandas->segment-geospatial==0.7.0) (23.1)
Requirement already satisfied: pandas>=1.1.0 in /usr/local/lib/python3.10/dist-packages (from geopandas->segment-geospatial==0.7.0) (1.5.3)
Requirement already satisfied: shapely>=1.7.1 in /usr/local/lib/python3.10/dist-packages (from geopandas->segment-geospatial==0.7.0) (2.0.1)
Requirement already satisfied: certifi in /usr/local/lib/python3.10/dist-packages (from pyproj->segment-geospatial==0.7.0) (2022.12.7)
Requirement already satisfied: contourpy>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->segment-geospatial==0.7.0) (1.0.7)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.10/dist-packages (from matplotlib->segment-geospatial==0.7.0) (0.11.0)
Requirement already satisfied: fonttools>=4.22.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib->segment-geospatial==0.7.0) (4.39.3)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->segment-geospatial==0.7.0) (1.4.4)
Requirement already satisfied: numpy>=1.20 in /usr/local/lib/python3.10/dist-packages (from matplotlib->segment-geospatial==0.7.0) (1.22.4)
Requirement already satisfied: pillow>=6.2.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib->segment-geospatial==0.7.0) (8.4.0)
Requirement already satisfied: pyparsing>=2.3.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->segment-geospatial==0.7.0) (3.0.9)
Requirement already satisfied: python-dateutil>=2.7 in /usr/local/lib/python3.10/dist-packages (from matplotlib->segment-geospatial==0.7.0) (2.8.2)
Requirement already satisfied: protobuf>=3.20.2 in /usr/local/lib/python3.10/dist-packages (from onnx->segment-geospatial==0.7.0) (3.20.3)
Requirement already satisfied: typing-extensions>=3.6.2.1 in /usr/local/lib/python3.10/dist-packages (from onnx->segment-geospatial==0.7.0) (4.5.0)
Requirement already satisfied: coloredlogs in /usr/local/lib/python3.10/dist-packages (from onnxruntime->segment-geospatial==0.7.0) (15.0.1)
Requirement already satisfied: flatbuffers in /usr/local/lib/python3.10/dist-packages (from onnxruntime->segment-geospatial==0.7.0) (23.3.3)
Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from onnxruntime->segment-geospatial==0.7.0) (1.11.1)
Requirement already satisfied: affine in /usr/local/lib/python3.10/dist-packages (from rasterio->segment-geospatial==0.7.0) (2.4.0)
Requirement already satisfied: attrs in /usr/local/lib/python3.10/dist-packages (from rasterio->segment-geospatial==0.7.0) (23.1.0)
Requirement already satisfied: click>=4.0 in /usr/local/lib/python3.10/dist-packages (from rasterio->segment-geospatial==0.7.0) (8.1.3)
Requirement already satisfied: cligj>=0.5 in /usr/local/lib/python3.10/dist-packages (from rasterio->segment-geospatial==0.7.0) (0.7.2)
Requirement already satisfied: snuggs>=1.4.1 in /usr/local/lib/python3.10/dist-packages (from rasterio->segment-geospatial==0.7.0) (1.4.7)
Requirement already satisfied: click-plugins in /usr/local/lib/python3.10/dist-packages (from rasterio->segment-geospatial==0.7.0) (1.1.1)
Requirement already satisfied: setuptools in /usr/local/lib/python3.10/dist-packages (from rasterio->segment-geospatial==0.7.0) (67.7.2)
Requirement already satisfied: torch>=1.7 in /usr/local/lib/python3.10/dist-packages (from segment-anything-py->segment-geospatial==0.7.0) (2.0.1+cu118)
Requirement already satisfied: torchvision>=0.8 in /usr/local/lib/python3.10/dist-packages (from segment-anything-py->segment-geospatial==0.7.0) (0.15.2+cu118)
Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.10/dist-packages (from pandas>=1.1.0->geopandas->segment-geospatial==0.7.0) (2022.7.1)
Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from torch>=1.7->segment-anything-py->segment-geospatial==0.7.0) (3.1)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from torch>=1.7->segment-anything-py->segment-geospatial==0.7.0) (3.1.2)
Requirement already satisfied: triton==2.0.0 in /usr/local/lib/python3.10/dist-packages (from torch>=1.7->segment-anything-py->segment-geospatial==0.7.0) (2.0.0)
Requirement already satisfied: cmake in /usr/local/lib/python3.10/dist-packages (from triton==2.0.0->torch>=1.7->segment-anything-py->segment-geospatial==0.7.0) (3.25.2)
Requirement already satisfied: lit in /usr/local/lib/python3.10/dist-packages (from triton==2.0.0->torch>=1.7->segment-anything-py->segment-geospatial==0.7.0) (16.0.5)
Requirement already satisfied: soupsieve>1.2 in /usr/local/lib/python3.10/dist-packages (from beautifulsoup4->gdown->segment-geospatial==0.7.0) (2.4.1)
Requirement already satisfied: humanfriendly>=9.1 in /usr/local/lib/python3.10/dist-packages (from coloredlogs->onnxruntime->segment-geospatial==0.7.0) (10.0)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests[socks]->gdown->segment-geospatial==0.7.0) (1.26.15)
Requirement already satisfied: charset-normalizer~=2.0.0 in /usr/local/lib/python3.10/dist-packages (from requests[socks]->gdown->segment-geospatial==0.7.0) (2.0.12)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests[socks]->gdown->segment-geospatial==0.7.0) (3.4)
Requirement already satisfied: PySocks!=1.5.7,>=1.5.6 in /usr/local/lib/python3.10/dist-packages (from requests[socks]->gdown->segment-geospatial==0.7.0) (1.7.1)
Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from sympy->onnxruntime->segment-geospatial==0.7.0) (1.3.0)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->torch>=1.7->segment-anything-py->segment-geospatial==0.7.0) (2.1.2)
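Since the same account works on another machine, comparing package versions between the working and failing Colab sessions may isolate the culprit; add_raster depends on localtileserver serving tiles in the background, so that pair is the first thing to compare:

import leafmap
import localtileserver

# Print the versions to compare against a session where add_raster works.
print(leafmap.__version__, localtileserver.__version__)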

Fresh conda-forge based install unable to import samgeo.

Environment Information

  • samgeo version: 0.7.0
  • Python version: 3.10.11
  • Operating System: MS Windows Version 22H2 (OS Build 22621.1702)
  • Visual Studio Code: Version 1.78.2

Description

I want to run samgeo on some Planet data. I did a fresh install using conda-forge and mamba, and I'm using the input prompt example as a guideline. This fresh install fails to import rasterio due to a DLL failure, which I think is related to a rasterio-GDAL conflict (conda-forge/rasterio-feedstock#240). Maybe related to #53.

I tried the pip-install version but got the MS Visual Studio C++ error documented in the QGIS-SAM plugin FAQ, and I don't want to install the 9 GB VS C++ development kit if I can avoid it.

P.S. GREAT WORK!

What I Did

In command line:

`mamba create -n samgeo segment-geospatial python leafmap -c conda-forge`

In visual studio code:

import os
import leafmap
from samgeo import SamGeo, tms_to_geotiff

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
Untitled-1 in line 4
      2 import os
      3 import leafmap
----> 4 from samgeo import SamGeo, tms_to_geotiff

File ~\.conda\envs\samgeo\lib\site-packages\samgeo\__init__.py:8
      4 __email__ = '[email protected]'
      5 __version__ = '0.7.0'
----> 8 from .samgeo import *

File ~\.conda\envs\samgeo\lib\site-packages\samgeo\samgeo.py:11
      8 import numpy as np
      9 from segment_anything import sam_model_registry, SamAutomaticMaskGenerator, SamPredictor
---> 11 from .common import *
     14 class SamGeo:
     15     """The main class for segmenting geospatial data with the Segment Anything Model (SAM). See
     16     https://github.com/facebookresearch/segment-anything for details.
     17     """

File ~\.conda\envs\samgeo\lib\site-packages\samgeo\common.py:13
     11 import shapely
     12 import pyproj
---> 13 import rasterio
     14 import geopandas as gpd
     15 import matplotlib.pyplot as plt

File ~\.conda\envs\samgeo\lib\site-packages\rasterio\__init__.py:28
     24                     os.add_dll_directory(os.path.abspath(p))
     27 from rasterio._show_versions import show_versions
---> 28 from rasterio._version import gdal_version, get_geos_version, get_proj_version
     29 from rasterio.crs import CRS
     30 from rasterio.drivers import driver_from_extension, is_blacklisted

ImportError: DLL load failed while importing _version: The specified procedure could not be found.
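
For anyone hitting the same DLL error: one workaround sometimes suggested for conda-forge rasterio/GDAL mismatches is to force-reinstall both packages together so they come from matching builds. This is an assumption based on the linked feedstock issue, not a confirmed fix:

mamba install -n samgeo -c conda-forge --force-reinstall rasterio libgdal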

Clip downloaded tif file.

In tms_to_geotiff(), please add an option to either download the image as a rectangular extent or clip it to a provided polygon shapefile or KMZ file.
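
In the meantime, a downloaded GeoTIFF can be clipped to a polygon with rasterio's mask utility. A minimal sketch, assuming aoi.shp holds the clip polygon (file names are illustrative):

import geopandas as gpd
import rasterio
from rasterio.mask import mask

aoi = gpd.read_file("aoi.shp")  # hypothetical polygon file
with rasterio.open("satellite.tif") as src:
    shapes = aoi.to_crs(src.crs).geometry  # match the raster's CRS
    clipped, transform = mask(src, shapes, crop=True)
    profile = src.profile
    profile.update(height=clipped.shape[1], width=clipped.shape[2], transform=transform)

with rasterio.open("satellite_clipped.tif", "w", **profile) as dst:
    dst.write(clipped)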

Optimization

Please add example scripts for fine-tuning, like:

import torch

# Fine-tune only the mask decoder; sam_model, input_image, sparse_embeddings,
# and dense_embeddings are assumed to be defined elsewhere.
optimizer = torch.optim.Adam(sam_model.mask_decoder.parameters())
loss_fn = torch.nn.MSELoss()

# Keep the image encoder frozen during fine-tuning.
with torch.no_grad():
    image_embedding = sam_model.image_encoder(input_image)

low_res_masks, iou_predictions = sam_model.mask_decoder(
    image_embeddings=image_embedding,
    image_pe=sam_model.prompt_encoder.get_dense_pe(),
    sparse_prompt_embeddings=sparse_embeddings,
    dense_prompt_embeddings=dense_embeddings,
    multimask_output=False,
)
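
A hedged sketch of how this step could be completed; upscaled_masks and ground_truth_masks are hypothetical names not in the original snippet, and the 1024x1024 size is an assumption matching SAM's input resolution:

from torch.nn.functional import interpolate

# Upsample the low-resolution decoder output to the assumed training mask size.
upscaled_masks = interpolate(
    low_res_masks, size=(1024, 1024), mode="bilinear", align_corners=False
)

loss = loss_fn(upscaled_masks, ground_truth_masks)  # ground_truth_masks: hypothetical targets
optimizer.zero_grad()
loss.backward()
optimizer.step()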

Import error and GPU not used in v0.4.0

I tried to directly access segment-geospatial v0.4.0, but I can only access the notebook by downloading it and uploading it to my Google Colab.
When I install the required libraries, I have problems with the import:

ImportError                               Traceback (most recent call last)
[<ipython-input-2-2e92bcd49274>](https://localhost:8080/#) in <cell line: 3>()
      1 import os
      2 import leafmap
----> 3 from samgeo import SamGeo, show_image, download_file, overlay_images, tms_to_geotiff

ImportError: cannot import name 'show_image' from 'samgeo' (/usr/local/lib/python3.10/dist-packages/samgeo/__init__.py)

I also tried mixing in the previous notebook (v0.3.0) to use sam_kwargs, but I noticed that it doesn't use the GPU even though torch CUDA is available.
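
Before digging into sam_kwargs, it may help to confirm that PyTorch can see the GPU at all (a generic diagnostic, not samgeo-specific):

import torch

print(torch.__version__)
print(torch.cuda.is_available())  # should print True on a GPU Colab runtime
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))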

sam.generate() RuntimeError Could not infer dtype of numpy.uint8

Environment Information

  • samgeo version:

segment-anything-py==1.0
segment-geospatial==0.4.0

  • Python version: 3.11.3
  • Operating System: Fedora 37

Description

I tried to run https://samgeo.gishub.org/examples/automatic_mask_generator/ but got this error at the following step:

Automatic mask generation

Segment the image and save the results to a GeoTIFF file. Set unique=True to assign a unique ID to each object.

sam.generate(image, output="masks.tif", foreground=True, unique=True)

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In [9], line 1
----> 1 sam.generate(image, output="masks.tif", foreground=True, unique=True)

File ~/.local/lib/python3.11/site-packages/samgeo/samgeo.py:179, in SamGeo.generate(self, source, output, foreground, batch, erosion_kernel, mask_multiplier, unique, **kwargs)
    177 self.image = image  # Store the input image as a numpy array
    178 mask_generator = self.mask_generator  # The automatic mask generator
--> 179 masks = mask_generator.generate(image)  # Segment the input image
    180 self.masks = masks  # Store the masks as a list of dictionaries
    182 if output is not None:
    183     # Save the masks to the output path. The output is either a binary mask or a mask of objects with unique values.

File ~/.local/lib/python3.11/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
    112 @functools.wraps(func)
    113 def decorate_context(*args, **kwargs):
    114     with ctx_factory():
--> 115         return func(*args, **kwargs)

File ~/.local/lib/python3.11/site-packages/segment_anything/automatic_mask_generator.py:163, in SamAutomaticMaskGenerator.generate(self, image)
    138 """
    139 Generates masks for the given image.
    140 
   (...)
    159          the mask, given in XYWH format.
    160 """
    162 # Generate masks
--> 163 mask_data = self._generate_masks(image)
    165 # Filter small disconnected regions and holes in masks
    166 if self.min_mask_region_area > 0:

File ~/.local/lib/python3.11/site-packages/segment_anything/automatic_mask_generator.py:206, in SamAutomaticMaskGenerator._generate_masks(self, image)
    204 data = MaskData()
    205 for crop_box, layer_idx in zip(crop_boxes, layer_idxs):
--> 206     crop_data = self._process_crop(image, crop_box, layer_idx, orig_size)
    207     data.cat(crop_data)
    209 # Remove duplicate masks between crops

File ~/.local/lib/python3.11/site-packages/segment_anything/automatic_mask_generator.py:236, in SamAutomaticMaskGenerator._process_crop(self, image, crop_box, crop_layer_idx, orig_size)
    234 cropped_im = image[y0:y1, x0:x1, :]
    235 cropped_im_size = cropped_im.shape[:2]
--> 236 self.predictor.set_image(cropped_im)
    238 # Get points for this crop
    239 points_scale = np.array(cropped_im_size)[None, ::-1]

File ~/.local/lib/python3.11/site-packages/segment_anything/predictor.py:57, in SamPredictor.set_image(self, image, image_format)
     55 # Transform the image to the form expected by the model
     56 input_image = self.transform.apply_image(image)
---> 57 input_image_torch = torch.as_tensor(input_image, device=self.device)
     58 input_image_torch = input_image_torch.permute(2, 0, 1).contiguous()[None, :, :, :]
     60 self.set_torch_image(input_image_torch, image.shape[:2])

RuntimeError: Could not infer dtype of numpy.uint8

What I Did

I copied https://samgeo.gishub.org/examples/automatic_mask_generator/automatic_mask_generator.ipynb and ran it locally in a Jupyter notebook. While "satellite.tif" shows up in leafmap, the subsequent automatic mask generation failed.
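
Since the failure happens inside torch.as_tensor on a plain numpy array, a minimal check can isolate whether the torch installation itself is broken, independent of samgeo (a diagnostic sketch, assuming the bug is in torch's numpy interop):

import numpy as np
import torch

# This mirrors what SamPredictor.set_image does; if it fails too,
# the problem is the torch/numpy pairing, not segment-geospatial.
t = torch.as_tensor(np.zeros((4, 4, 3), dtype=np.uint8))
print(t.dtype)  # expected: torch.uint8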

Feature Request: Fine-tuning

Description

I know this is a big feature request, but the addition of fine-tuning support would be nice as a component of this package.

TorchGeo could probably be used nicely for sampling from large images during training, as sketched below.
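
A rough sketch of how TorchGeo could supply training patches, assuming torchgeo >= 0.5 (where RasterDataset accepts a paths argument); the directory name is illustrative:

from torch.utils.data import DataLoader
from torchgeo.datasets import RasterDataset, stack_samples
from torchgeo.samplers import RandomGeoSampler

dataset = RasterDataset(paths="training_tiles/")  # hypothetical folder of GeoTIFFs
sampler = RandomGeoSampler(dataset, size=1024, length=100)  # 100 random 1024x1024 patches
loader = DataLoader(dataset, sampler=sampler, collate_fn=stack_samples)

for batch in loader:
    images = batch["image"]  # (B, C, H, W) tensors ready for a fine-tuning loop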

Add support for input prompts such as points or boxes

For now, segment-geospatial segments the entire image. It would be great to add support for input prompts such as points or boxes. This will be useful for extracting specific features from satellite imagery, such as buildings, roads, waterbodies, etc.
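
The underlying segment-anything library already exposes point and box prompts through SamPredictor, so a samgeo wrapper could build on it. A minimal sketch against the upstream API (checkpoint path and pixel coordinates are illustrative):

import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
predictor.set_image(image)  # image: HxWx3 uint8 array read from the GeoTIFF

# One foreground point (label 1) plus a bounding box, both in pixel coordinates;
# geographic coordinates would still need to be mapped to pixels.
masks, scores, logits = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    box=np.array([100, 100, 400, 400]),
    multimask_output=False,
)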


It is amazing to see SAM used in remote sensing. I have a few questions and suggestions:

1. When using text prompts to segment various images, DINO doesn't seem to be very friendly to remote sensing images. Even with two very low threshold values, there are still many instances where segmentation fails or is incorrect. Considering the accuracy, are you planning to create a model specifically tailored to remote sensing images?

2. Regarding automatic segmentation: even after adjusting pred_iou_thresh to remove duplicates during fine segmentation, there are still a significant number of repeated results in sorted_anns[i]["segmentation"]. At this point, simply converting the masks to vector SHP format loses a large number of patches, which is not conducive to subsequent manual edits. It would be better to preserve all the patches corresponding to each sorted_anns[i]["segmentation"]. This could be achieved by building a GeoDataFrame for each sorted_anns[i]["segmentation"] and finally merging and exporting them together, as sketched below.
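
A minimal sketch of that per-mask vectorization, assuming masks is the list of annotation dicts returned by SamAutomaticMaskGenerator and the transform/CRS come from the source raster (names and file paths are illustrative, not samgeo API):

import geopandas as gpd
import pandas as pd
import rasterio
from rasterio import features
from shapely.geometry import shape

with rasterio.open("image.tif") as src:  # hypothetical source raster
    transform, crs = src.transform, src.crs

gdfs = []
for i, ann in enumerate(masks):
    mask = ann["segmentation"].astype("uint8")
    polygons = [
        shape(geom)
        for geom, value in features.shapes(mask, transform=transform)
        if value == 1  # keep foreground patches only
    ]
    gdfs.append(
        gpd.GeoDataFrame({"mask_id": [i] * len(polygons)}, geometry=polygons, crs=crs)
    )

# Merge every mask's patches into one layer so no patch is lost, then export.
merged = gpd.GeoDataFrame(pd.concat(gdfs, ignore_index=True), crs=crs)
merged.to_file("masks.gpkg", driver="GPKG")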

SamGeoPredictor Geo_Box

Description

The geo_box parameter of the SamGeoPredictor.predict function seems to be hard-coded to EPSG:4326 here. It would be useful to allow for different projections.

In addition, this also causes problems when setting geo_box to None and then using the masks_to_geotiff function, since width, height, geo_transform, and crs are not previously defined.
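
Until other projections are supported, one workaround is to reproject the box to EPSG:4326 before passing it as geo_box. A minimal sketch, assuming a source CRS of EPSG:32633 and a [west, south, east, north] box order (both assumptions, not confirmed samgeo behavior):

from pyproj import Transformer

src_crs = "EPSG:32633"  # hypothetical source projection
transformer = Transformer.from_crs(src_crs, "EPSG:4326", always_xy=True)

minx, miny, maxx, maxy = 500000, 5300000, 510000, 5310000  # illustrative box
west, south = transformer.transform(minx, miny)
east, north = transformer.transform(maxx, maxy)
geo_box = [west, south, east, north]  # assumed EPSG:4326 order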

"Input type uint16 is not supported" error on sam.generate

Thank you for the great work. I am trying to segment an RGB composite from S2_SR. I downloaded the tif file with:
geemap.ee_export_image(image.select(['B4', 'B3', 'B2']), filename='sr2.tif', scale=10, region=aoi)

I added it to the map with
m.add_raster(image, layer_name='Images')
and it works fine; the tif exists and shows up.

At the stage of
sam.generate(image, mask)

it gives me:

TypeError: Input type uint16 is not supported
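
SAM expects 8-bit RGB input, while Sentinel-2 surface reflectance is exported as uint16, so one likely fix is to rescale the bands to uint8 before calling sam.generate. A minimal sketch with rasterio and numpy (the 2-98 percentile stretch is an illustrative choice, not samgeo behavior):

import numpy as np
import rasterio

with rasterio.open("sr2.tif") as src:
    data = src.read().astype(np.float32)  # (bands, H, W) uint16 reflectance
    profile = src.profile

# Stretch each band to 0-255 using an assumed 2-98 percentile cut; adjust as needed.
lo, hi = np.percentile(data, (2, 98))
scaled = np.clip((data - lo) / (hi - lo) * 255, 0, 255).astype(np.uint8)

profile.update(dtype="uint8")
with rasterio.open("sr2_uint8.tif", "w", **profile) as dst:
    dst.write(scaled)

Then pass "sr2_uint8.tif" to sam.generate instead of the original uint16 file.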
