NeAR is a relightable 3D generation and rendering project built on top of TRELLIS-style Structured Latents (SLAT) and a lighting-aware neural renderer. Given a casually lit input image, NeAR estimates relightable neural assets and renders them under novel environment lighting and viewpoints.
This repository combines:
- a TRELLIS-derived latent pipeline for image-conditioned SLAT prediction,
- a lighting-aware neural renderer conditioned on HDR environment maps,
- an optional geometry frontend based on Hunyuan3D-2.1,
- tools for single-view relighting, novel-view relighting, HDRI rotation videos, and GLB export.
- [x] Checkpoints / model weights
- [x] Inference code
- [ ] Hugging Face demo
- [ ] Data release
- [ ] Training code
- Inference code and checkpoints have been released!
- 2025.04: NeAR has been selected as a Highlight at CVPR 2026!
- The Hugging Face demo is currently being deployed.
- Data and training code are coming soon.
Relightable 3D generative rendering results. Columns from left to right: the target illumination, the casually lit input image, Blender-rendered results from TRELLIS and Hunyuan3D-2.1 (with PBR materials), our method's estimated multi-view PBR materials back-projected onto the given mesh, our neural rendering results, and ground truth.
The following videos are produced by the local NeAR example pipeline and are useful for a quick preview:
- Novel-view relighting video: camera moves while the illumination stays fixed.
- HDRI rotation preview: environment map rotates while the camera stays fixed.
- Relighting under rotating HDRI: material response changes under time-varying illumination.
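Rotating an equirectangular environment map about the vertical (up) axis amounts to a circular shift along the longitude axis of the image. The sketch below illustrates that idea with NumPy; it is a simplified stand-in, not the repository's actual `trellis/datasets/hdri_processer.py` implementation:

```python
import numpy as np

def rotate_hdri_yaw(envmap: np.ndarray, degrees: float) -> np.ndarray:
    """Rotate an equirectangular (H, W, 3) environment map about the up axis.

    A yaw rotation of the environment corresponds to a horizontal circular
    shift of the equirectangular image: 360 degrees equals the full width W.
    """
    h, w, _ = envmap.shape
    shift = int(round(degrees / 360.0 * w)) % w
    return np.roll(envmap, shift, axis=1)

# Toy example: a 4-pixel-wide map rotated by 90 degrees shifts by one column.
env = np.arange(2 * 4 * 3, dtype=np.float32).reshape(2, 4, 3)
rotated = rotate_hdri_yaw(env, 90.0)
```

Sampling this rotation over time while keeping the camera fixed is what produces the HDRI-rotation preview videos.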
NeAR couples asset representation and renderer design:
- Asset side: from an input image, NeAR predicts a compact sparse structured latent that stores geometry-aware and material-aware information.
- Renderer side: a neural renderer takes the latent, view parameters, and an HDR environment map, then predicts relightable outputs such as color, base color, metallic, roughness, and shadow.
Compared with a standard image-to-3D pipeline, NeAR focuses on:
- relighting under novel HDR illumination,
- view-consistent rendering,
- fast feed-forward inference, and
- material-aware rendering outputs.
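The renderer's contract described above can be summarized as a function signature. The sketch below is purely illustrative: the function name, argument shapes, and dummy outputs are assumptions for exposition, not the actual NeAR API.

```python
import numpy as np

def neural_render(slat: np.ndarray, extrinsics: np.ndarray,
                  intrinsics: np.ndarray, envmap: np.ndarray) -> dict:
    """Illustrative stand-in for a lighting-aware neural renderer.

    Inputs: a structured latent, camera parameters, and an HDR environment
    map. Outputs: the relightable channels NeAR predicts. The shapes and
    the placeholder math here are not the real model.
    """
    h = w = 64  # placeholder render resolution
    mean_light = float(envmap.mean())  # dummy illumination response
    return {
        "color":      np.full((h, w, 3), mean_light, np.float32),
        "base_color": np.zeros((h, w, 3), np.float32),
        "metallic":   np.zeros((h, w, 1), np.float32),
        "roughness":  np.ones((h, w, 1), np.float32),
        "shadow":     np.ones((h, w, 1), np.float32),
    }

out = neural_render(
    slat=np.zeros((1024, 8), np.float32),        # sparse latent features
    extrinsics=np.eye(4, dtype=np.float32),      # world-to-camera pose
    intrinsics=np.eye(3, dtype=np.float32),      # pinhole intrinsics
    envmap=np.ones((512, 1024, 3), np.float32),  # equirectangular HDR map
)
```

Because the environment map is an explicit input, relighting only requires swapping `envmap` and re-running the feed-forward renderer.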
Key files and directories:
- `example.py`: minimal end-to-end inference example.
- `app_e.py`: Gradio-style demo / app script.
- `app_viser.py`: interactive neural relight viewer (viser); orbit camera + HDRI controls, full-viewport relit RGB (no GLB).
- `setup.sh`: environment setup helper.
- `checkpoints/`: local pipeline configuration and model checkpoints.
- `trellis/pipelines/near_image_to_relightable_3d.py`: main NeAR inference pipeline.
- `trellis/utils/render_utils_rl.py`: relighting rendering utilities.
- `trellis/datasets/hdri_processer.py`: HDRI preprocessing and rotation helpers.
- `hy3dshape/`: Hunyuan3D shape utilities from Tencent-Hunyuan/Hunyuan3D-2.1/`hy3dshape`.
- Linux
- NVIDIA GPU
- Python 3.10+ recommended
- CUDA-compatible PyTorch environment
NeAR inherits many dependencies from TRELLIS and additionally uses relighting-related packages such as pyexr, simple_ocio, open3d, and the local hy3dshape module.
Use the provided setup script as a starting point:
```sh
git clone --recursive https://github.com/Luh1124/NeAR.git
cd NeAR
. ./setup.sh --help
```

A typical TRELLIS-style setup may look like:

```sh
. ./setup.sh --new-env --basic --xformers --flash-attn --diffoctreerast --spconv --kaolin --nvdiffrast --hy3d --gsplat
```

Depending on your environment, you may still need to manually install extra packages used by NeAR, for example:

```sh
pip install pyexr simple-ocio open3d rembg imageio easydict
```

The local pipeline configuration is defined in `checkpoints/pipeline.yaml`.
It references the main model components used by NeAR, including:
- `decoder`
- `hdri_encoder`
- `neural_basis`
- `renderer`
- `slat_flow_model`
The geometry model is currently run separately in `example.py` via `tencent/Hunyuan3D-2.1`.
The first-stage training data is available on Hugging Face:
- `luh0502/NeAR`: stage-1 dataset
Preprocessed HDR environment maps used for training and inference:
- `luh0502/hdr_envmaps_exr_1K`: 1K resolution, normalized to 0-65536 float EXR
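The exact normalization used for the released EXRs is not documented here; one plausible scheme that maps an HDR map into a fixed float range (0 to 65536, matching the dataset description) is peak scaling. This is a hypothetical sketch, not the dataset's actual preprocessing:

```python
import numpy as np

def normalize_hdri(envmap: np.ndarray, max_value: float = 65536.0) -> np.ndarray:
    """Scale an HDR environment map so its peak radiance maps to max_value.

    Illustrative only: the released EXRs may instead use exposure- or
    percentile-based scaling.
    """
    peak = float(envmap.max())
    if peak <= 0.0:
        return np.zeros_like(envmap, dtype=np.float32)
    return (envmap / peak * max_value).astype(np.float32)

# Toy HDR map with radiance values well outside [0, 1].
env = np.random.rand(16, 32, 3).astype(np.float32) * 10.0
norm = normalize_hdri(env)
```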
NeAR supports two inference paths:
- Image → relightable result: preprocess the image → generate geometry (Hunyuan3D) → predict SLAT → render under the target HDRI.
- Existing SLAT → relightable result: skip geometry/latent generation and render directly from a saved `.npz`.
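The on-disk SLAT layout is not specified in this README. Assuming the common sparse-voxel convention of integer coordinates plus per-voxel features (the field names `coords` and `feats` are assumptions), a saved latent could be written and read back like this:

```python
import os
import tempfile
import numpy as np

# Save a toy sparse latent: N voxel coordinates plus per-voxel features.
coords = np.random.randint(0, 64, size=(100, 3)).astype(np.int32)
feats = np.random.randn(100, 8).astype(np.float32)

path = os.path.join(tempfile.mkdtemp(), "slat.npz")
np.savez(path, coords=coords, feats=feats)

# Reload for the render-only path: geometry/latent generation is skipped
# entirely and the renderer consumes the stored latent directly.
with np.load(path) as slat:
    assert slat["coords"].shape[0] == slat["feats"].shape[0]
    loaded = {k: slat[k] for k in slat.files}
```

Caching latents this way makes repeated relighting cheap, since only the feed-forward renderer runs per HDRI.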
For detailed instructions, command-line examples, output descriptions, and API usage, see doc/infer.md.
Quick start:
```sh
python example.py \
    --image assets/example_image/T.png \
    --hdri assets/hdris/studio_small_03_1k.exr \
    --out_dir relight_out
```

This repository builds on and adapts ideas, codebases, and problem settings from several recent works on structured 3D latents, relighting, inverse rendering, and PBR-aware 3D generation, including:
- TRELLIS: structured latent generation and sparse 3D asset representations
- Hunyuan3D 2.1: image-to-geometry generation and image examples
- DiLightNet: diffusion-based lighting control
- Neural Gaffer: object relighting
- DiffusionRenderer: neural inverse / forward rendering
- MeshGen: PBR textured mesh generation
- RGB↔X: material- and lighting-aware decomposition and synthesis
We thank the authors of these projects for releasing their papers, code, models, and project pages. If you use this repository, please also check the licenses and terms of the upstream dependencies and models.
If you find this project useful, please consider citing our paper:
@inproceedings{li2025near,
title={NeAR: Coupled Neural Asset-Renderer Stack},
author={Li, Hong and Ye, Chongjie and Chen, Houyuan and Xiao, Weiqing and Yan, Ziyang and Xiao, Lixing and Chen, Zhaoxi and Xiang, Jianfeng and Xu, Shaocong and Liu, Xuhui and Wang, Yikai and Zhang, Baochang and Han, Xiaoguang and Yang, Jiaolong and Zhao, Hao},
booktitle={CVPR},
year={2026}
}
