# [NILUT: Conditional Neural Implicit 3D Lookup Tables for Image Enhancement](https://arxiv.org/abs/2306.11920)

[arXiv](https://arxiv.org/abs/2306.11920)
[<a href=""><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="colab demo"></a>]()
[<a href="https://www.kaggle.com/code/jesucristo/super-resolution-demo-swin2sr-official/"><img src="https://upload.wikimedia.org/wikipedia/commons/7/7c/Kaggle_logo.png?20140912155123" alt="kaggle demo" width=50></a>]()

[Marcos V. Conde](https://scholar.google.com/citations?user=NtB1kjYAAAAJ&hl=en), [Javier Vazquez-Corral](https://scholar.google.com/citations?user=gjnuPMoAAAAJ&hl=en), [Michael S. Brown](https://scholar.google.com/citations?hl=en&user=Gv1QGSMAAAAJ), [Radu Timofte](https://scholar.google.com/citations?user=u3MwH5kAAAAJ&hl=en)

**TL;DR** NILUT uses neural representations for controllable photorealistic image enhancement. 🚀 Demo tutorial and pre-trained models are available.

<img src="media/nilut-intro.gif" alt="NILUT" width="800">

----

3D lookup tables (3D LUTs) are a key component for image enhancement. Modern image signal processors (ISPs) have dedicated support for them as part of the camera rendering pipeline. Cameras typically provide multiple options for picture styles, where each style is usually obtained by applying a unique handcrafted 3D LUT. Current approaches for learning and applying 3D LUTs are notably fast, yet not memory-efficient, since multiple 3D LUTs must be stored. For this reason, among other implementation limitations, their use on mobile devices is less common.

In this work, we propose a Neural Implicit LUT (NILUT), an implicitly defined continuous 3D color transformation parameterized by a neural network. We show that NILUTs are capable of accurately emulating real 3D LUTs. Moreover, a NILUT can be extended to incorporate multiple styles into a single network, with the ability to blend styles implicitly. Our novel approach is memory-efficient, controllable, and can complement previous methods, including learned ISPs.
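As a rough sketch of the idea (not the paper's exact architecture), a conditional NILUT is simply an MLP that maps an RGB coordinate plus a style-condition vector to an output RGB; blending styles amounts to mixing the condition vectors. Below is a minimal NumPy version with random, untrained weights — the layer sizes, the residual output, and the one-hot conditions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MLP: input = 3 RGB channels + 3-dim style condition, output = 3 RGB channels.
# Weights are random here; in practice they are trained to fit real 3D LUTs.
W1 = rng.normal(0, 0.1, (6, 64)); b1 = np.zeros(64)
W2 = rng.normal(0, 0.1, (64, 3)); b2 = np.zeros(3)

def nilut(rgb, condition):
    """Map pixels (N, 3) in [0, 1] to enhanced pixels (N, 3), given a style-condition vector."""
    x = np.concatenate([rgb, np.broadcast_to(condition, (rgb.shape[0], 3))], axis=1)
    h = np.maximum(x @ W1 + b1, 0.0)              # ReLU hidden layer
    return np.clip(rgb + h @ W2 + b2, 0.0, 1.0)   # residual output, clipped to valid range

pixels = rng.uniform(0, 1, (5, 3))

style1 = np.array([1.0, 0.0, 0.0])                # one-hot condition: style 1
style3 = np.array([0.0, 1.0, 0.0])                # one-hot condition: style 3
blend = 0.5 * style1 + 0.5 * style3               # implicit blending = mixing conditions

out = nilut(pixels, blend)
print(out.shape)  # (5, 3)
```

Because the network is a continuous function of the RGB coordinate, it needs no fixed grid resolution, and because the condition vector is a continuous input, intermediate blends come for free.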

**Topics** Image Enhancement, Image Editing, Color Manipulation, Tone Mapping, Presets

***Website and repo in progress.*** **See also [AISP](https://github.com/mv-lab/AISP)** for image signal processing code and papers.

----

**Pre-trained** sample models are available in `models/`. We provide `nilutx3style.pt`, a NILUT that encodes three 3D LUT styles (1, 3, and 4) with high accuracy.

**Demo Tutorial** In [nilut-multiblend.ipynb](nilut-multiblend.ipynb) we provide a simple tutorial on how to use NILUT for multi-style image enhancement and blending. The corresponding training code will be released soon.

**Dataset** The folder `dataset/` includes 100 images from the MIT-Adobe FiveK dataset. The images were processed using professional 3D LUTs in Adobe Lightroom. The structure of the dataset is:
```
dataset/
├── 001_blend.png
├── 001_LUT01.png
├── 001_LUT02.png
├── 001_LUT03.png
├── 001_LUT04.png
├── 001_LUT05.png
├── 001_LUT08.png
├── 001_LUT10.png
└── 001.png
...
```

where `001.png` is the input unprocessed image, `001_LUTXX.png` is the result of applying the corresponding LUT, and `001_blend.png` is the example target for evaluating style blending (in this example, a blend of styles 1, 3, and 4 with equal weights of 0.33).
The complete dataset includes 100 input images `aaa.png` and their enhanced variants for each 3D LUT.
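Given that naming scheme, (input, target) pairs for a given style can be built directly from filenames. This hypothetical helper (not part of the repo) assumes exactly the pattern above — bare three-digit ids for inputs and a `_LUTXX` suffix for targets:

```python
def lut_pairs(filenames, lut="LUT01"):
    """Pair each input image 'NNN.png' with its 'NNN_<lut>.png' target, if present."""
    names = set(filenames)
    pairs = []
    for name in sorted(names):
        # Inputs are bare ids like '001.png' (no underscore suffix in the stem).
        stem, _, ext = name.partition(".")
        if "_" in stem or ext != "png":
            continue
        target = f"{stem}_{lut}.png"
        if target in names:
            pairs.append((name, target))
    return pairs

files = ["001.png", "001_LUT01.png", "001_blend.png", "002.png", "002_LUT01.png"]
print(lut_pairs(files))  # [('001.png', '001_LUT01.png'), ('002.png', '002_LUT01.png')]
```

The same function with `lut="blend"` pairs inputs against the style-blending targets.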

----

Hope you like it 🤗 If you find this work interesting, insightful, or useful, please acknowledge it by citing:

```
@article{conde2023nilut,
  title={NILUT: Conditional Neural Implicit 3D Lookup Tables for Image Enhancement},
  author={Conde, Marcos V and Vazquez-Corral, Javier and Brown, Michael S and Timofte, Radu},
  journal={arXiv preprint arXiv:2306.11920},
  year={2023}
}
```

**Contact** marcos.conde[at]uni-wuerzburg.de