{"name":"cochlea-synapseg","display_name":"Cochlea SynapSeg","visibility":"public","icon":"","categories":["Annotation","Segmentation","Acquisition"],"schema_version":"0.2.1","on_activate":null,"on_deactivate":null,"contributions":{"commands":[{"id":"cochlea-synapseg.get_reader","title":"Open data with Cochlea SynapSeg","python_name":"napari_cochlea_synapse_seg._reader:napari_get_reader","short_title":null,"category":null,"icon":null,"enablement":null},{"id":"cochlea-synapseg.make_sample_data","title":"Load sample data from Cochlea SynapSeg","python_name":"napari_cochlea_synapse_seg._sample_data:make_sample_data","short_title":null,"category":null,"icon":null,"enablement":null},{"id":"cochlea-synapseg.make_sample_data_with_noise","title":"Load sample data from Cochlea SynapSeg 2","python_name":"napari_cochlea_synapse_seg._sample_data:make_sample_data_with_noise","short_title":null,"category":null,"icon":null,"enablement":null},{"id":"cochlea-synapseg.make_sample_data_pairs","title":"Load sample data from Cochlea SynapSeg 3","python_name":"napari_cochlea_synapse_seg._sample_data:make_sample_data_pairs","short_title":null,"category":null,"icon":null,"enablement":null},{"id":"cochlea-synapseg.make_bigwidget","title":"Make SynapSegWidget","python_name":"napari_cochlea_synapse_seg:SynapSegWidget","short_title":null,"category":null,"icon":null,"enablement":null}],"readers":[{"command":"cochlea-synapseg.get_reader","filename_patterns":["*.xml","*.csv","*.xls","*.XLS"],"accepts_directories":false}],"writers":null,"widgets":[{"command":"cochlea-synapseg.make_bigwidget","display_name":"SynapSeg Widget","autogenerate":false}],"sample_data":[{"command":"cochlea-synapseg.make_sample_data_with_noise","key":"unique_id.1","display_name":"Simulated Synapses With Noise"},{"command":"cochlea-synapseg.make_sample_data","key":"unique_id.2","display_name":"Simulated Synapses"},{"command":"cochlea-synapseg.make_sample_data_pairs","key":"unique_id.3","display_name":"Simulated Synapses 
Pairs"}],"themes":null,"menus":{},"submenus":null,"keybindings":null,"configuration":[]},"package_metadata":{"metadata_version":"2.4","name":"cochlea-synapseg","version":"0.1.2","dynamic":["license-file"],"platform":null,"supported_platform":null,"summary":"A plugin to automatically segment cochlear ribbon synapses, as well as to edit and adjust segmentations","description":"# Cochlea-SynapSeg\n\n[![License BSD-3](https://img.shields.io/pypi/l/cochlea-synapseg.svg?color=green)](https://github.com/ucsdmanorlab/cochlea-synapseg/raw/main/LICENSE)\n[![PyPI](https://img.shields.io/pypi/v/cochlea-synapseg.svg?color=green)](https://pypi.org/project/cochlea-synapseg)\n[![Python Version](https://img.shields.io/pypi/pyversions/cochlea-synapseg.svg?color=green)](https://python.org)\n[![DOI](https://zenodo.org/badge/865642960.svg)](https://doi.org/10.5281/zenodo.16433552)\n[![napari hub](https://img.shields.io/endpoint?url=https://api.napari-hub.org/shields/cochlea-synapseg)](https://napari-hub.org/plugins/cochlea-synapseg)\n<!--\n[![tests](https://github.com/ucsdmanorlab/cochlea-synapseg/workflows/tests/badge.svg)](https://github.com/ucsdmanorlab/cochlea-synapseg/actions)\n[![codecov](https://codecov.io/gh/ucsdmanorlab/cochlea-synapseg/branch/main/graph/badge.svg)](https://codecov.io/gh/ucsdmanorlab/cochlea-synapseg)\n[![napari hub](https://img.shields.io/endpoint?url=https://api.napari-hub.org/shields/cochlea-synapseg)](https://napari-hub.org/plugins/cochlea-synapseg)\n-->\n\nA napari plugin to segment cochlear ribbon synapses. \n\nMore is in the works, but the plugin currently includes:\n1. pre-processing functions,\n2. tools to quickly generate ground truth ribbon segmentation,\n3. deep-learning based ribbon segmentation prediction, and\n4. 
tools to check for synapse pairs, and export montage images.\n\n<!--\nDon't miss the full getting started guide to set up your new package:\nhttps://github.com/napari/cookiecutter-napari-plugin#getting-started\n\nand review the napari docs for plugin developers:\nhttps://napari.org/stable/plugins/index.html\n-->\n\n## Installation\n\nYou can install `cochlea-synapseg` (recommended: in a new conda environment with up-to-date napari) via [pip]:\n\n    python -m pip install cochlea-synapseg\n\n## Usage\n\nAfter installation, you can find the plugin the next time you launch napari (_Plugins > Cochlea SynapSeg_ > SynapSeg Widget / Montage Widget).\n\nSynapSeg Widget includes all core functionality and is divided into multiple tabs and sections; for quick use, be sure to check the settings denoted with asterisks below.\n\nMontage Widget is under development and will later be rolled into the SynapSeg Widget. \n\nJump to: [Usage](#usage) | [Preprocess](#preprocess-tab) | [Ground Truth](#ground-truth-tab) | [Predict](#predict-tab) | [Analyze](#analyze-tab)\n\n----------------------------------\n### Preprocess Tab\n<img width=\"347\" height=\"461\" alt=\"preprocess tab screenshot\" src=\"https://github.com/user-attachments/assets/a6a31d0c-be31-41cd-bd40-f67c643606f3\" />\n\n#### Image tools\n* **image layer** - select an image layer (must already be loaded into napari) for preprocessing\n* **xy/z resolution** - (optional) in um/pixel, auto-loaded from .tifs when available in metadata\n* **split channels** - if your loaded image layer contains multiple channels, select the channel axis and click split channels. Channels must be separated for ribbon segmentation. 
(defaults to the smallest dimension)\n  \n#### Points tools\n* **points layer** - select a points layer (must already be loaded into napari) to use the preprocessing tools below:\n* **real -> pixel units** - if you've loaded points that were saved in real units, make sure the pixel size information above in image tools is correct, then click \"real -> pixel units\" to convert\n* **chan -> z convert** - some points (like ImageJ/FIJI ROIs or CellCounter points) show up in the wrong z plane because their \"slice\" coordinates are a combination of both slice and channel info. If this happens, set the number of channels (in the original image, where the ROIs were created!), and then click \"chan -> z convert\". Z coordinates of the points layer will be divided by the number of channels specified.\n* **snap to max** - snap all points in the selected points layer to the local max, within +/- the selected number of pixels in x, y, and z\n  \n#### Labels tools\n* **labels layer** and **make labels editable** - if you loaded a labels layer from .zarr or certain other formats, it may be stored in a dask array and not editable in napari. You can check this box to load it into local memory, allowing editing.\n\nJump to: [Usage](#usage) | [Preprocess](#preprocess-tab) | [Ground Truth](#ground-truth-tab) | [Predict](#predict-tab) | [Analyze](#analyze-tab)\n\n----------------------------------\n### Ground Truth Tab\n#### Image Tools\n<img width=\"331\" height=\"123\" alt=\"GT_image\" src=\"https://github.com/user-attachments/assets/c797d423-d7f5-43e4-8940-a87c235b9c2d\" />\n\n\n* \\***presynaptic layer** - use the dropdown to select a loaded image layer that contains the ribbon stain\n* **xy/z resolution** - (optional) in um/pixel, auto-loaded from .tifs when available in metadata, used for scaling\n* **Scale z dimension** - check to scale all layers in the 3D viewer for isotropic viewing\n\n#### Points Tools & Points to Labels\n\n!! 
DEC 2025 UPDATE: Big thanks to Brad Buran for his work making point annotations work in 3D! This widget has been updated with functions adapted from Brad's [Synaptogram plugin]. !!\n\n<img width=\"346\" height=\"294\" alt=\"points_tools\" src=\"https://github.com/user-attachments/assets/dd73941a-d5d2-49b7-b5a1-a4e5ccb28227\" />\n\n**1. Points Layer Selection** - use the dropdown to select an existing points layer (or skip to #5 if not loading in existing points)\n\n**2. Find peaks above** - use an automatic peak finder to find peaks above a certain value. Useful as a starting point for manual annotation. \n\n**3. Guess** - use the image intensity information to guess an appropriate peak value for #2\n\n\\***4. New Points Layer** - if starting annotation from scratch, click to create a new points layer. You can then add points in 3D by selecting the pan/zoom tool, and right-clicking (or ctrl+clicking) to add points. \n\n\\***5. Snap to max** - when you drop points, allow the point to \"snap\" to the local maximum, with a range specified by the number to the right. \n\n\\***6. Points to Labels** - the key functionality of the module, creates a label layer by performing a local segmentation on all points.\n\n**7. Advanced Settings** - adjust settings for the points to labels function to optimize local segmentation and watershed separation of points. \n\n### Labels Tools\n<img width=\"332\" height=\"291\" alt=\"GT_labels\" src=\"https://github.com/user-attachments/assets/e6dadc17-9df4-4f45-913a-44ed348b09d2\" />\n\n**1. Labels Layer Selection** - use the dropdown to select the labels layer that represents ribbon segmentation \n\n**2. Make Labels Editable** - some file formats (including zarrs) load in as dask arrays, which don't allow editing. Checking this box will make the labels layer editable to add/remove new labels by converting to a numpy array (will load the layer into memory, so be careful if dealing with large images!).\n\n**3. 
New label** - if hand-painting labels, set the active label in the label layer to an unused ID before painting\n\n\\***4. Remove a Label** - use the labels layer eyedropper tool to identify the ID of an unwanted label, then type in the box and click \"Remove label\"\n\n\\***5. Merge labels** - if iteratively creating new labels, merge existing labels (in dropdown #1) with new labels (specified in dropdown #5). This function will automatically ensure overlapping label IDs are not used. \n\n**6. Labels to Points** - an existing label layer can be converted to a points layer based on label centroids. Used with #7. \n\n**7. Keep labels from points** - After using (#6), use the points editing tools to quickly remove unwanted points. Click (#7) to retain only the labels that correspond with a point. (Labels specified in \"labels layer\", points specified in \"points layer\" above.)\n\n### Save to Zarr\n\n![save_zarr](https://github.com/user-attachments/assets/1d824f49-012f-4fac-8fa1-64d7d319cd34)\n\nFunctionality to save to .zarr format. Saves presynapse image as 'raw', and labels as 'labeled' if they exist. Used for later prediction of ribbon segmentation.  \n\n\\***21. File Path** - the directory in which to save the zarr; use the folder icon to search for an existing directory\n\n\\***22. File Name** - the zarr name to save to; use the magnifying glass icon to select an existing .zarr\n\n**23. From Source** - set the file path and name to where the image layer was loaded from. (Caution: if you loaded a zarr, this will result in the zarr being overwritten!)\n\n\\***24. Save zarr** - saves the presynapse image layer (as selected above), and labels layers (as selected, if it exists) in the specified .zarr, as 'raw' and 'labeled', respectively. These can be drag + dropped into napari for viewing later, and can be fed directly into prediction. 
\n\nJump to: [Usage](#usage) | [Preprocess](#preprocess-tab) | [Ground Truth](#ground-truth-tab) | [Predict](#predict-tab) | [Analyze](#analyze-tab)\n\n----------------------------------\n### Predict Tab\n\n#### Predict\n<img width=\"336\" height=\"291\" alt=\"image\" src=\"https://github.com/user-attachments/assets/1c83529d-2102-433b-999e-5bc75c884252\" />\n\n* **model path** - location of pre-trained model (defaults to saved ribbon model in this repo)\n* **input zarr path** - select a zarr (saved using the tools in Ground Truth -> Save Zarr) on which to predict\n* **predict** - run prediction (may be slow if GPU is not enabled); output is saved in the zarr\n* **show** - show prediction as a new layer in the napari viewer. Prediction is a distance transform (values between -1 and 1), and can be converted to labels using the \"labels from prediction\" functions below.\n\n#### Labels from prediction\n<img width=\"337\" height=\"217\" alt=\"image\" src=\"https://github.com/user-attachments/assets/d63ba53c-e617-418a-8ea8-083cbaf56d13\" />\n\n* **pred layer** - select the prediction layer loaded in the viewer\n* **mask threshold** - the threshold used to determine the _object boundaries_ from the prediction. For most images, 0-0.1 works well.\n* **peak threshold** - the threshold used to determine whether nearby objects will be split, and whether objects without a bright center will be retained. For most images, 0.1-0.4 works well.\n* **Prediction to labels** - generate labels using the settings selected above\n\nJump to: [Usage](#usage) | [Preprocess](#preprocess-tab) | [Ground Truth](#ground-truth-tab) | [Predict](#predict-tab) | [Analyze](#analyze-tab)\n\n----------------------------------\n### Analyze Tab\nGenerates montages of synapses and orphan ribbons (shown in red), and provides tools to quickly navigate between the montage view and the original image. 
\n\n<img width=\"445\" height=\"447\" alt=\"montage-paired\" src=\"https://github.com/user-attachments/assets/905d0a98-b9b0-4c00-9bcf-1eb947c1d4b4\" />\n\n<img width=\"423\" height=\"385\" alt=\"montage-menu1\" src=\"https://github.com/user-attachments/assets/a9939aad-da60-4dad-9a3e-9262be47fe3c\" />\n\n**\\*1-3.** - Select the presynaptic and postsynaptic image layers, and the presynaptic labels layer loaded in the viewer\n\n**4.** Optional settings to adjust where postsynaptic signal is detected. \n\n**5.** Options for the montage display (size of crops, and sorting of crops)\n\n**6.** Option to save all crops (will save when create montage button is pressed)\n\n**\\*7. Create montage** - show the montage in the viewer, and save crops if selected. \n\n<img width=\"421\" height=\"180\" alt=\"montage-menu2\" src=\"https://github.com/user-attachments/assets/ecd34cf9-6f26-4ba5-a959-23b52227db54\" />\n\n**\\*8. Zoom to label** - specify a label (numbers shown on montage), and move to the location of that label in the original image\n\n**9.** Option to manually (left) change the zoom level when \"zoom to montage\" is selected, or to set the zoom based on the current level (right)\n\n**\\*10. Zoom to montage** - recenter the viewer at the montage; use to return to the montage viewer after \"zoom to label\" is used\n\n<img width=\"944\" height=\"197\" alt=\"montage_zoom2label\" src=\"https://github.com/user-attachments/assets/0ad3e808-1c42-4e97-9100-254e7aa90d22\" />\n\n<!-- \n## Contributing\n\nContributions are very welcome. 
Tests can be run with [tox]; please ensure\nthe coverage at least stays the same before you submit a pull request.\n-->\n## License\n\nDistributed under the terms of the [BSD-3] license,\n\"cochlea-synapseg\" is free and open source software.\n\n## Issues\n\nIf you encounter any problems, please [file an issue] along with a detailed description.\n\n[napari]: https://github.com/napari/napari\n[Cookiecutter]: https://github.com/audreyr/cookiecutter\n[@napari]: https://github.com/napari\n[BSD-3]: http://opensource.org/licenses/BSD-3-Clause\n[cookiecutter-napari-plugin]: https://github.com/napari/cookiecutter-napari-plugin\n[file an issue]: https://github.com/ucsdmanorlab/cochlea-synapseg/issues/new\n[Synaptogram plugin]: https://github.com/bburan/napari-synaptogram\n\n[tox]: https://tox.readthedocs.io/en/latest/\n[pip]: https://pypi.org/project/pip/\n\n----------------------------------\n\nThis [napari] plugin was generated with [Cookiecutter] using [@napari]'s [cookiecutter-napari-plugin] template.\n\n","description_content_type":"text/markdown","keywords":null,"home_page":null,"download_url":null,"author":"Cayla Miller","author_email":"cayla@ucsd.edu","maintainer":null,"maintainer_email":null,"license":"Copyright (c) 2024, Cayla Miller\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\n* Redistributions of source code must retain the above copyright notice, this\n  list of conditions and the following disclaimer.\n\n* Redistributions in binary form must reproduce the 
above copyright notice,\n  this list of conditions and the following disclaimer in the documentation\n  and/or other materials provided with the distribution.\n\n* Neither the name of copyright holder nor the names of its\n  contributors may be used to endorse or promote products derived from\n  this software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\nAND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\nSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\nCAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\nOR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n","classifier":["Development Status :: 2 - Pre-Alpha","Framework :: napari","Intended Audience :: Developers","License :: OSI Approved :: BSD License","Operating System :: OS Independent","Programming Language :: Python","Programming Language :: Python :: 3","Programming Language :: Python :: 3 :: Only","Programming Language :: Python :: 3.10","Programming Language :: Python :: 3.11","Programming Language :: Python :: 3.12","Topic :: Scientific/Engineering :: Image Processing"],"requires_dist":["numpy","qtpy","scikit-image","scipy","zarr<3","matplotlib","napari[all]","gunpowder","torch","torchvision","torchaudio","tox; extra == \"testing\"","pytest; extra == \"testing\"","pytest-cov; extra == \"testing\"","pytest-qt; extra == \"testing\"","napari; extra == \"testing\"","pyqt5; extra == 
\"testing\""],"requires_python":">=3.10","requires_external":null,"project_url":null,"provides_extra":["testing"],"provides_dist":null,"obsoletes_dist":null},"npe1_shim":false}