{"name":"napari-fenestra","display_name":"FenestRA","visibility":"public","icon":"","categories":[],"schema_version":"0.2.1","on_activate":null,"on_deactivate":null,"contributions":{"commands":[{"id":"napari-fenestra.make_pipeline_widget","title":"Open FenestRA Pipeline","python_name":"fenestra._widget:FenestraWidget","short_title":null,"category":null,"icon":null,"enablement":null}],"readers":null,"writers":null,"widgets":[{"command":"napari-fenestra.make_pipeline_widget","display_name":"FenestRA Pipeline","autogenerate":false}],"sample_data":null,"themes":null,"menus":{},"submenus":null,"keybindings":null,"configuration":[]},"package_metadata":{"metadata_version":"2.4","name":"napari-fenestra","version":"0.2.11","dynamic":["license-file"],"platform":null,"supported_platform":null,"summary":"FenestRA: A Napari plugin for LSEC AFM Super-Resolution & Fenestration Analysis.","description":"<p align=\"center\">\n  <img src=\"https://raw.githubusercontent.com/LIVR-VUB/FenestRA/main/misc/FenestRA.jpg\" alt=\"FenestRA Logo\" width=\"450\"/>\n</p>\n\n# FenestRA\n**Fenestration Resolution & Analysis Pipeline**\n\n![Python](https://img.shields.io/badge/python-3.10-blue.svg)\n![Napari](https://img.shields.io/badge/napari-plugin-orange.svg)\n![CUDA](https://img.shields.io/badge/CUDA-12.4-76B900.svg?logo=nvidia)\n![PyTorch](https://img.shields.io/badge/PyTorch-2.4-ee4c2c.svg?logo=pytorch)\n[![DOI](https://zenodo.org/badge/1213499953.svg)](https://doi.org/10.5281/zenodo.19700659)\n[![PyPI](https://img.shields.io/pypi/v/napari-fenestra.svg?labelColor=000000&color=blue)](https://pypi.org/project/napari-fenestra/)\n[![run with docker](https://img.shields.io/badge/run%20with-docker-0db7ed.svg?labelColor=000000&logo=docker)](https://www.docker.com/)\n[![run with 
apptainer/singularity](https://img.shields.io/badge/run%20with-apptainer%2Fsingularity-1E95D3.svg?labelColor=000000&logo=data%3Aimage%2Fsvg%2Bxml%3Bbase64%2CPHN2ZyB3aWR0aD0iMjQ1IiBoZWlnaHQ9IjI0MCIgdmlld0JveD0iNjAgMCAzMTAgMjUwIiBmaWxsPSJub25lIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPgk8cGF0aCBkPSJtIDI3MC4xOCwyNTMuOTggYyAtMS44LC0xLjIgLTMuNCwtMyAtNC40LC01LjIgbCAtNTIuNiwtMTE3LjQgYyAtMi4yLC00LjggLTMuOCwtOC42IC01LjIsLTExLjYgLTIuMiwtNC40IC0yLjIsLTUuNiAtMi4yLC02LjQgMCwtMi4yIDAuOCwtMy44IDIuNiwtNC44IHYgLTQuNCBoIC00My4yIHYgNC40IGMgMC44LDAuNCAxLjIsMS4yIDEuOCwxLjggMC40LDAuOCAwLjgsMS44IDAuOCwzIDAsMS4yIC0wLjQsMyAtMS44LDUuNiAtMS4yLDIuNiAtMi42LDUuNiAtNC40LDkuNCBsIC01MS44LDExNyBjIC0wLjgsMS44IC0yLjIsNC40IC0zLjgsNy40IC0xLjgsMyAtNC44LDQuNCAtOC4yLDQuOCB2IDMuOCBoIDQ5LjYgdiAtMy44IGMgLTUuNiwwIC04LjIsLTIuMiAtOC4yLC01LjYgMCwtMS44IDAuOCwtNC44IDMsLTkgMS44LC0zLjQgMy44LC03LjggNS42LC0xMiAyNC42LDkuNCA1Mi4yLDEwIDc2LjgsMC44IDIuMiw0LjQgMy44LDguMiA1LjIsMTEuMiAxLjgsMy40IDIuNiw2LjQgMi42LDguNiAwLDIuMiAtMC48LDMuOCAtMi4yLDQuOCAtMS4yLDAuNCAtMi4yLDAuOCAtMy40LDEuMiB2IDMuOCBoIDUwLjQgdiAtMy44IGMgLTIuOCwtMS44IC01LjQsLTIuOCAtNywtMy42IHogbSAtMTExLjQsLTQ3IDI3LjYsLTYxLjQgMjgsNjIuMiBjIC0xOCw2IC0zNy40LDYgLTU1LjYsLTAuOCB6IiBmaWxsPSJ3aGl0ZSIvPiA8cGF0aCBkPSJtIDg5Ljc4LDE0MC45OCBjIDAsLTkgMS4yLC0xNy42IDMuNCwtMjYuNCBsIC0yOCwtMTIuNiBjIC0zLjgsMTIgLTYsMjQuNiAtNiwzNy42IDAsMzUgMTQuMiw2OC42IDM5LjgsOTIuOCBsIDEuOCwtMy40IDExLjIsLTI1LjQgYyAtMTMuNiwtMTcuNCAtMjIuMiwtMzkgLTIyLjIsLTYyLjYgeiIgZmlsbD0iIzkzOTU5OCIvPiA8cGF0aCBkPSJtIDMxMC4xOCwxMDIuNTggLTI4LDEyLjYgYyAyLjIsOC4yIDMuNCwxNi44IDMuNCwyNS44IDAsMjMuOCAtOC42LDQ1LjggLTIyLjgsNjIuNiBsIDExLjYsMjUuNCAxLjgsMy40IGMgMjUuNCwtMjQuMiAzOS44LC01Ny44IDM5LjgsLTkyLjggLTAuMiwtMTIuNCAtMi4yLC0yNSAtNS44LC0zNyB6IiBmaWxsPSIjRjc5NDIxIi8%2BIDxwYXRoIGQ9Im0gNzEuMTgsODYuOTggMjcuNiwxMi42IGMgMTQuNiwtMzEgNDQuOCwtNTMgODAuMiwtNTYuMiB2IC0zMC42IGMgLTQ2LDIuNiAtODguNCwzMS40IC0xMDcuOCw3NC4yIHoiIGZpbGw9IiMxRTk1RDMiLz4gPHBhdGggZD0ibSAzMDQuMTgsODYuOTggYyAtMTkuNCwtNDIuOCAtNjEuOCwtNzEuNiAtMTA4LjQsLTc0LjYgdiAzMC42IGMgMzUuO
CwzIDY2LDI1IDgwLjYsNTYuMiB6IiBmaWxsPSIjNkZCNTQ0Ii8%2BPC9zdmc%2B)](https://sylabs.io/docs/)\n\nFenestRA is a custom Napari plugin built for the Advanced LSEC AFM Pipeline. It bridges interactive Napari workflows, legacy deep-learning upscaling models (run through containerized backends), and state-of-the-art Cellpose instance segmentation.\n\nBy combining deep learning-based super-resolution (HAT / SwinIR) with automated morphological analysis, FenestRA streamlines the extraction of robust physical porosity and fenestration morphology metrics directly from raw `.jpk-qi-image` files.\n\n> [!IMPORTANT]\n> **Pre-Publication Notice**  \n> This repository provides the public codebase and scaffolding for the FenestRA pipeline. The fine-tuned deep learning model weights (for the HAT, SwinIR, and custom Cellpose LSEC segmentation models) are currently kept private. They will be released publicly alongside the peer-reviewed manuscript upon its formal publication.\n\n---\n\n## Features\n\n- **Cross-Platform Container Engine:** Toggle between **Docker** (Windows / macOS) and **Singularity / Apptainer** (Linux / HPC) directly from the Napari UI. 
No code changes needed when switching platforms.\n- **Hub-and-Spoke Deep Learning Architecture:** Run legacy, Python 3.8-dependent upscaling models (HAT, SwinIR) asynchronously inside a container without freezing the modern Napari GUI.\n- **Post-DL Image Enhancement:** Optional CLAHE contrast equalization and unsharp masking applied directly to the deep-learning output to sharpen fenestration edges before segmentation.\n- **Native JPK Ingestion:** Automatically reads the native physical scale (`nm / px`) from `.jpk-qi-image` files using AFMReader.\n- **Synchronized 4-Pane Analysis:** Auto-generates a synchronized Napari viewer layout combining Raw, Upsampled, Mask, and Boundary Overlay views.\n- **Configurable CPU Fallback:** Includes Python-based CLAHE and unsharp-masking functions for when DL inference isn't required.\n- **Sub-cellular Quantification:** Automatically calculates standard metrics (area, perimeter, equivalent diameter, eccentricity, porosity) with digital-to-physical size conversions, exported directly to `.csv`.\n- **Batch Analysis:** Process an entire folder of `.jpk-qi-image` files in one automated run. Produces a single consolidated `.xlsx` Excel file with metrics from all images, plus individual upsampled TIFFs and Cellpose mask TIFFs.\n\n---\n\n## Installation\n\n### 1. Requirements\n- Python 3.10+\n- An NVIDIA GPU with CUDA 12.4 drivers (recommended for DL inference)\n\n> [!CAUTION]\n> <small>**Hardware Compatibility Warning:** FenestRA's deep-learning steps require a GPU that supports modern tensor operations. GPUs based on the Maxwell architecture or earlier (Compute Capability 5.2 or lower, such as the Quadro M4000) lack hardware support for BFloat16 (`CUDA_R_16BF`) math. On these GPUs, PyTorch and Cellpose will crash with a `CUBLAS_STATUS_NOT_SUPPORTED` error.</small>\n\n- **Linux:** Apptainer / Singularity\n- **Windows / macOS:** Docker Desktop\n\n### 2. 
Create the Host Environment\nCreate a clean conda environment for Napari and Cellpose, targeting CUDA 12.4:\n\n```bash\nconda create -n fenestra-env -c conda-forge python=3.10 numpy=1.26.4\nconda activate fenestra-env\n\n# Install base GUI tools, Napari, and core scientific dependencies\npip install \"napari[all]\" magicgui qtpy scipy scikit-image pandas tifffile \"numpy<2\" openpyxl\n\n# Install PyTorch built against CUDA 12.4 so that GPU acceleration works\npip install --index-url https://download.pytorch.org/whl/cu124 torch==2.4.0 torchvision==0.19.0\n\n# Install Cellpose for fenestration instance segmentation\npip install cellpose\n\n# Install AFMReader for handling raw JPK AFM metadata\npip install git+https://github.com/AFM-SPM/AFMReader.git\n```\n\n### 3. Install FenestRA\nFenestRA is published on PyPI, so you can install it directly with pip:\n```bash\npip install napari-fenestra\n\n# To update an existing installation to the latest version, run:\npip install --upgrade napari-fenestra\n```\n\n### 4. Set Up the Deep Learning Backend (Docker vs Singularity)\n\nFenestRA runs its deep learning models inside a container, independently of the Napari UI. Build the container image with the engine that matches your operating system.\n\nFirst, clone the repository to download the Docker and Singularity setup files:\n```bash\ngit clone https://github.com/LIVR-VUB/FenestRA.git\ncd FenestRA\n```\n\n**For Windows & macOS Users (Docker Desktop):**\nSingularity / Apptainer does not run natively on Windows or macOS, so we use Docker instead.\n1. Install [Docker Desktop](https://www.docker.com/products/docker-desktop/) on your machine.\n2. Open a terminal and navigate to this repository's `containers/` directory.\n3. Build the backend image (Windows/macOS users do NOT need `sudo`):\n```bash\ndocker build -t livrvub/dl-upsampling:latest -f Dockerfile ..\n```\n*(In Napari, select **Docker** from the Engine dropdown. 
No file browsing needed!)*\n\n**For Native Linux Users (Singularity / Apptainer):**\nDocker typically requires root privileges on Linux and is often unavailable on HPC systems. For native performance and straightforward path handling on Linux, use Apptainer / Singularity.\n1. Install Apptainer natively on your Linux distribution.\n2. Open a terminal and build the container using the provided definition recipe:\n```bash\nsudo apptainer build dl_upsampling.sif containers/dl_upsampling.def\n```\n*(In Napari, select **Singularity** from the Engine dropdown, and use the `...` button to select that `.sif` file!)*\n\n---\n\n## Usage\n\n### Single Image Analysis\n\n1. Activate your environment: `conda activate fenestra-env`\n2. Launch napari: `napari`\n3. Navigate to `Plugins > FenestRA Pipeline` to open the widget.\n4. **Step 1 — Input Data:** Load your `*.jpk-qi-image` file.\n5. **Step 2 — Upsampling:** Select a method (CLAHE, HAT, or SwinIR). For DL methods, specify the model `.pth`, choose your Engine (Docker or Singularity), and optionally enable **\"Apply Post-DL Sharpening\"** with adjustable Clip Limit and Unsharp parameters. Hit **Run Upsampling**.\n6. **Step 3 — Segmentation:** Configure Cellpose parameters (Diameter, Cellprob Threshold, Flow Threshold). Optionally load a custom Cellpose model. Hit **Run Cellpose**.\n7. **Step 4 — Layout & Analysis:** Click **Arrange 4-Pane Grid** for a synchronized review of Raw, Upsampled, Mask, and Overlay views. Click **Quantify Fenestrations** to export your CSV metrics.\n\n### Batch Analysis\n\n1. Configure your preferred upsampling method, model paths, and Cellpose parameters using the single-image sections above.\n2. Scroll down to **Section 5 — Batch Analysis**.\n3. Select an **Input Directory** containing your `.jpk-qi-image` files.\n4. Select an **Output Directory** where results will be saved.\n5. Click **Run Batch**. The status label updates in real time to show progress (e.g., `Processing 3/10: sample.jpk-qi-image`).\n6. 
When complete, the output directory will contain:\n   - `batch_results.xlsx` — Consolidated Excel file with metrics from all images (with `Image_Name` column).\n   - `<image_name>_upsampled.tif` — Upsampled TIFF for each input image.\n   - `<image_name>_mask.tif` — Cellpose segmentation mask for each input image.\n\n---\n\n## Changelog\n\n### v0.2\n- **Batch Analysis Module:** New Section 5 in the Napari UI for processing entire folders of `.jpk-qi-image` files. Outputs a single consolidated `.xlsx` Excel file with fenestration metrics from all images, plus individual upsampled TIFFs and Cellpose mask TIFFs.\n- **Post-DL Image Enhancement:** Added an optional \"Apply Post-DL Sharpening\" checkbox that applies CLAHE contrast equalization and Unsharp Masking to the Deep Learning output before Cellpose segmentation.\n- **UI Restructuring:** Separated the Clip Limit / Unsharp Radius / Amount sliders into a shared post-processing group that is dynamically visible for both CLAHE and DL workflows.\n\n### v0.1\n- **Cross-Platform Docker Support:** Added a `Dockerfile` mirroring the Singularity `.def` environment. Users can now toggle between Docker and Singularity engines directly from the Napari UI.\n- **Engine Toggle UI:** New \"Engine\" dropdown in the Upsampling section. Selecting Docker shows a tag input; selecting Singularity shows a `.sif` file picker.\n- **Container Recipes:** Both `Dockerfile` and `dl_upsampling.def` are now bundled in the `containers/` directory.\n- **Cross-Platform README:** Added installation instructions for Windows, macOS, and Linux users.\n\n---\n\n## Acknowledgments & Citations\n\nIf you use FenestRA in your research, please ensure you properly cite the core technologies that make this pipeline possible:\n\n- **Cellpose** (Instance Segmentation Engine):\n  > Stringer, C., Wang, T., Michaelos, M., & Pachitariu, M. (2021). Cellpose: a generalist algorithm for cellular segmentation. *Nature Methods*, 18(1), 100-106. 
https://doi.org/10.1038/s41592-020-01018-x\n- **AFMReader** (JPK File Ingestion):\n  > Our native support for `.jpk-qi-image` AFM files is powered by the [AFMReader library](https://github.com/AFM-SPM/AFMReader) maintained by the AFM-SPM community.\n- **HAT / SwinIR** (Deep Learning Super-Resolution Models):\n  > Chen, X. et al. (2023). Activating More Pixels in Image Super-Resolution Transformer. *CVPR 2023*. (HAT)\n  > Liang, J. et al. (2021). SwinIR: Image Restoration Using Swin Transformer. *ICCV Workshops 2021*.\n\n---\n\n<img src=\"https://upload.wikimedia.org/wikipedia/commons/b/b7/Flag_of_Europe.svg\" width=\"50\" alt=\"EU Flag\"> \n\n*This project has received funding from the European Union’s Horizon Europe research and innovation programme under the Marie Skłodowska-Curie grant agreement No 101119613, as part of the [ImAge-d MSCA Doctoral network](https://uit.no/research/image-d).*\n","description_content_type":"text/markdown","keywords":null,"home_page":"https://github.com/LIVR-VUB/FenestRA","download_url":null,"author":"Arkajyoti Sarkar","author_email":"arkajyoti.sarkar@vub.be","maintainer":null,"maintainer_email":null,"license":"BSD-3-Clause","classifier":["Development Status :: 3 - Alpha","Framework :: napari","Intended Audience :: Developers","License :: OSI Approved :: BSD License","Operating System :: OS Independent","Programming Language :: Python","Programming Language :: Python :: 3","Programming Language :: Python :: 3 :: Only","Programming Language :: Python :: 3.10","Topic :: Scientific/Engineering :: Image Processing"],"requires_dist":["numpy<2.0.0,>=1.26.0","magicgui","qtpy","scikit-image","scipy","tifffile","cellpose","pandas","openpyxl"],"requires_python":">=3.10","requires_external":null,"project_url":null,"provides_extra":null,"provides_dist":null,"obsoletes_dist":null},"npe1_shim":false}