Commit ea11ff6

Added related projects

Parent: dc7d303

5 files changed: +45 / -5 lines

_config.yml (1 addition & 1 deletion)

@@ -1,4 +1,4 @@
-#theme: bulma-clean-theme
+theme: bulma-clean-theme
 remote_theme: chrisrhymes/bulma-clean-theme
 title: "TACO: The Tensor Algebra Compiler"
 #tagline: A fast and versatile library for linear and tensor algebra

_data/publications.yml (3 additions & 3 deletions)

@@ -5,9 +5,9 @@ (the three abstract lines were re-indented; their text is unchanged)
  is_thesis: false
  paper_link: https://arxiv.org/pdf/2111.14947.pdf
  abstract: >
    While loop reordering and fusion can make big impacts on the constant-factor performance of dense tensor programs, the effects on sparse tensor programs are asymptotic, often leading to orders of magnitude performance differences in practice. Sparse tensors also introduce a choice of compressed storage formats that can have asymptotic effects. Research into sparse tensor compilers has led to simplified languages that express these tradeoffs, but the user is expected to provide a schedule that makes the decisions. This is challenging because schedulers must anticipate the interaction between sparse formats, loop structure, potential sparsity patterns, and the compiler itself. Automating this decision making process stands to finally make sparse tensor compilers accessible to end users.
    <br><br>
    We present, to the best of our knowledge, the first automatic asymptotic scheduler for sparse tensor programs. We provide an approach to abstractly represent the asymptotic cost of schedules and to choose between them. We narrow down the search space to a manageably small Pareto frontier of asymptotically undominated kernels. We test our approach by compiling these kernels with the TACO sparse tensor compiler and comparing them with those generated with the default TACO schedules. Our results show that our approach reduces the scheduling space by orders of magnitude and that the generated kernels perform asymptotically better than those generated using the default schedules.
  bibtex: >
    @article{ahrens:2022:autoscheduling,
      title={Autoscheduling for Sparse Tensor Algebra with an Asymptotic Cost Model},

_data/related.yml (new file, 26 additions & 0 deletions)

@@ -0,0 +1,26 @@
+- heading: MLIR
+  link: https://mlir.llvm.org/
+  desc: >
+    MLIR is an open-source project that provides an extensible infrastructure for building compilers for domain-specific programming languages.
+    MLIR provides first-class support for sparse tensor operations through the <code>SparseTensor</code> dialect, which the MLIR compiler can compile to LLVM IR using an implementation of TACO's sparse tensor algebra compiler theory.
+  pubs:
+    - title: Compiler Support for Sparse Tensor Computations in MLIR
+      authors: Aart J.C. Bik, Penporn Koanantakool, Tatiana Shpeisman, Nicolas Vasilache, Bixia Zheng, and Fredrik Kjolstad
+      is_thesis: false
+      paper_link: https://arxiv.org/abs/2202.04305
+      slide_link: https://llvm.org/devmtg/2021-11/slides/2021-CompilerSupportforSparseTensorComputationsinMLIR.pdf
+      youtube: x-nHc3hBxHM
+      abstract: >
+        Sparse tensors arise in problems in science, engineering, machine learning, and data analytics. Programs that operate on such tensors can exploit sparsity to reduce storage requirements and computational time. Developing and maintaining sparse software by hand, however, is a complex and error-prone task. Therefore, we propose treating sparsity as a property of tensors, not a tedious implementation task, and letting a sparse compiler generate sparse code automatically from a sparsity-agnostic definition of the computation. This paper discusses integrating this idea into MLIR.
+      bibtex: >
+        @article{https://doi.org/10.48550/arxiv.2202.04305,
+          doi = {10.48550/ARXIV.2202.04305},
+          url = {https://arxiv.org/abs/2202.04305},
+          author = {Bik, Aart J. C. and Koanantakool, Penporn and Shpeisman, Tatiana and Vasilache, Nicolas and Zheng, Bixia and Kjolstad, Fredrik},
+          keywords = {Programming Languages (cs.PL), FOS: Computer and information sciences, FOS: Computer and information sciences},
+          title = {Compiler Support for Sparse Tensor Computations in MLIR},
+          publisher = {arXiv},
+          year = {2022},
+          copyright = {Creative Commons Attribution 4.0 International}
+        }
+

_includes/publist.html (4 additions & 1 deletion)

@@ -1,6 +1,9 @@
 {% assign sections=site.data.[include.pubs] %}
 {% for section in sections %}
-<p class="title is-3 my-5">{{ section.heading }}</p>
+<p class="title is-3 my-5">{% if section.link %}<a href="{{ section.link }}">{% endif %}{{ section.heading }}{% if section.link %}</a>{% endif %}</p>
+{% if section.desc %}
+<p>{{ section.desc }}</p>
+{% endif %}
 {% for pub in section.pubs %}
 <details>
 <summary>
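
Given the MLIR entry in _data/related.yml above, the updated template would emit roughly the following markup for the section heading and description. This is a sketch of the Liquid expansion only; the per-publication <details> blocks that follow it are unchanged by this commit and omitted here:

    <!-- approximate output of publist.html for the MLIR section -->
    <p class="title is-3 my-5"><a href="https://mlir.llvm.org/">MLIR</a></p>
    <p>MLIR is an open-source project that provides an extensible infrastructure for building compilers for domain-specific programming languages. MLIR provides first-class support for sparse tensor operations through the <code>SparseTensor</code> dialect, which the MLIR compiler can compile to LLVM IR using an implementation of TACO's sparse tensor algebra compiler theory.</p>

Note that the heading is wrapped in a link and the description paragraph is emitted only when the section defines link and desc, respectively, so existing data files without those fields render exactly as before.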

related.md (new file, 11 additions & 0 deletions)

@@ -0,0 +1,11 @@
+---
+title: Related Projects
+hero_height: is-small
+layout: page
+---
+
+Below are some examples of other projects that are also built on top of TACO's sparse tensor algebra compiler theory.
+
+(If you are aware of any other project that builds on TACO but is not currently on this list, please reach out to us!)
+
+{% include publist.html pubs="related" %}
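
For anyone who does reach out with a project, a new entry in _data/related.yml would follow the same schema as the MLIR entry above. A minimal sketch, in which every name, URL, and field value is a placeholder rather than a real listing:

    # hypothetical entry; all values below are placeholders
    - heading: Example Project
      link: https://example.org/
      desc: >
        A sentence or two on how the project builds on TACO's sparse
        tensor algebra compiler theory.
      pubs:
        - title: An Example Paper About Sparse Compilation
          authors: A. Author and B. Author
          is_thesis: false
          paper_link: https://example.org/paper.pdf
          abstract: >
            Placeholder abstract shown when the entry is expanded.

Per the template change above, link and desc may be omitted; the section then renders as a plain heading, as the existing publication pages do.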
