
Aravind Sukumaran Rajam

I am currently a Senior Research Associate working with Prof. P. Sadayappan (Saday) at The Ohio State University, Columbus. Before joining Prof. Sadayappan's group, I worked with Prof. Philippe Clauss at INRIA (in collaboration with the University of Strasbourg, France). I received my PhD in computer science from the University of Strasbourg. I completed my Master's degree at the Indian Institute of Science (IISc), India, where I was advised by Prof. Uday Bondhugula (Uday Reddy B).


Research Interests

My areas of interest include high-performance computing, machine learning, compilers, automatic parallelization, performance optimization, dynamic optimization, and algorithms. I am currently working on projects involving compilers, machine learning, DSLs, and parallelization and optimization for targets such as multi-/many-core processors, GPUs, and FPGAs. Although I do not have much experience in robotics and embedded systems, I am very interested in these fields as well.

Publications

[1] Changwan Hong, Aravind Sukumaran-Rajam, Israt Nisa, Kunal Singh, and P. Sadayappan. Adaptive sparse tiling for sparse matrix multiplication. In Proceedings of the 24th Annual Symposium on Principles and Practice of Parallel Programming (PPoPP), 2019.
[2] Changwan Hong, Aravind Sukumaran-Rajam, Jinsung Kim, Prashant Singh Rawat, Sriram Krishnamoorthy, Louis-Noël Pouchet, Fabrice Rastello, and P. Sadayappan. GPU code optimization using abstract kernel emulation and sensitivity analysis. In Proceedings of the 39th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), 2018.
[3] Jinsung Kim, Aravind Sukumaran-Rajam, Changwan Hong, Ajay Panyala, Rohit Kumar Srivastava, Sriram Krishnamoorthy, and P. Sadayappan. Optimizing tensor contractions in CCSD(T) for efficient execution on GPUs. In Proceedings of the 2018 International Conference on Supercomputing (ICS), 2018.
[4] Gordon E. Moon, Israt Nisa, Aravind Sukumaran-Rajam, Bortik Bandyopadhyay, Srinivasan Parthasarathy, and P. Sadayappan. Parallel latent Dirichlet allocation on GPUs. In International Conference on Computational Science (ICCS), 2018.
[5] Israt Nisa, Aravind Sukumaran-Rajam, Rakshith Kunchum, and P. Sadayappan. Parallel CCD++ on GPU for matrix factorization. In Proceedings of the Workshop on General Purpose GPUs (GPGPU@PPoPP), 2017.
[6] Jinsung Kim, Aravind Sukumaran-Rajam, Ajay Panyala, Vineeth Reddy, Sriram Krishnamoorthy, and P. Sadayappan. A code generator for high-performance tensor contractions on GPUs. In Proceedings of the 2019 International Symposium on Code Generation and Optimization (CGO), 2019.
[7] Prashant Singh Rawat, Miheer Vaidya, Aravind Sukumaran-Rajam, Mahesh Ravishankar, Vinod Grover, Atanas Rountev, Louis-Noël Pouchet, and P. Sadayappan. Domain-specific optimization and generation of high-performance GPU code for stencil computations. Proceedings of the IEEE, 2018.
[8] Changwan Hong, Aravind Sukumaran-Rajam, Bortik Bandyopadhyay, Jinsung Kim, Süreyya Emre Kurt, Israt Nisa, Shivani Sabhlok, Ümit V. Çatalyürek, Srinivasan Parthasarathy, and P. Sadayappan. Efficient sparse-matrix multi-vector product on GPUs. In Proceedings of the 27th International Symposium on High-Performance Parallel and Distributed Computing (HPDC), 2018.
[9] Prashant Singh Rawat, Aravind Sukumaran-Rajam, Atanas Rountev, Fabrice Rastello, Louis-Noël Pouchet, and P. Sadayappan. Associative instruction reordering to alleviate register pressure. In Proceedings of the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC), 2018.
[10] Israt Nisa, Aravind Sukumaran-Rajam, Changwan Hong, Abhinav Vishnu, and P. Sadayappan. Efficient sampled dense-dense matrix product (SDDMM) for GPUs. In IEEE 25th International Conference on High Performance Computing (HiPC), 2018.
[11] Jyothi Vedurada, Arjun Suresh, Aravind Sukumaran-Rajam, Jinsung Kim, Changwan Hong, Ajay Panyala, Sriram Krishnamoorthy, V. Krishna Nandivada, Rohit Kumar Srivastava, and P. Sadayappan. TTLG - an efficient tensor transposition library for GPUs. In IEEE International Parallel and Distributed Processing Symposium (IPDPS), 2018.
[12] Prashant Singh Rawat, Fabrice Rastello, Aravind Sukumaran-Rajam, Louis-Noël Pouchet, Atanas Rountev, and P. Sadayappan. Register optimizations for stencils on GPUs. In Proceedings of the 23rd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP), 2018.
[13] Changwan Hong, Aravind Sukumaran-Rajam, Jinsung Kim, and P. Sadayappan. MultiGraph: Efficient graph processing on GPUs. In 26th International Conference on Parallel Architectures and Compilation Techniques (PACT), 2017.
[14] Rakshith Kunchum, Ankur Chaudhry, Aravind Sukumaran-Rajam, Qingpeng Niu, Israt Nisa, and P. Sadayappan. On improving performance of sparse matrix-matrix multiplication on GPUs. In Proceedings of the International Conference on Supercomputing (ICS), 2017.
[15] Süreyya Emre Kurt, Vineeth Thumma, Changwan Hong, Aravind Sukumaran-Rajam, and P. Sadayappan. Characterization of data movement requirements for sparse matrix computations on GPUs. In IEEE 24th International Conference on High Performance Computing (HiPC), 2017.
[16] Prashant Singh Rawat, Aravind Sukumaran-Rajam, Atanas Rountev, Fabrice Rastello, Louis-Noël Pouchet, and P. Sadayappan. Poster: Statement reordering to alleviate register pressure for stencils on GPUs. In 26th International Conference on Parallel Architectures and Compilation Techniques (PACT), 2017.
[17] Juan Manuel Martinez Caamaño, Aravind Sukumaran-Rajam, Artiom Baloian, Manuel Selva, and Philippe Clauss. APOLLO: Automatic speculative POLyhedral Loop Optimizer. In 7th International Workshop on Polyhedral Compilation Techniques (IMPACT), 2017.
[18] Gordon E. Moon, Aravind Sukumaran-Rajam, and P. Sadayappan. Parallel LDA with over-decomposition. In IEEE 24th International Conference on High Performance Computing Workshops (HiPCW), December 2017.
[19] Aravind Sukumaran-Rajam and Philippe Clauss. The polyhedral model of nonlinear loops. ACM Transactions on Architecture and Code Optimization (TACO), 12(4):48:1--48:27, December 2016.
[20] Aravind Sukumaran-Rajam, Luis Esteban Campostrini, Juan Manuel Martinez Caamaño, and Philippe Clauss. Speculative runtime parallelization of loop nests: Towards greater scope and efficiency. In IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), 2015.
[21] Aravind Sukumaran-Rajam, Juan Manuel Martinez Caamaño, Willy Wolff, Alexandra Jimborean, and Philippe Clauss. Speculative program parallelization with scalable and decentralized runtime verification. In Runtime Verification - 5th International Conference (RV), 2014.
[22] Alexandra Jimborean, Philippe Clauss, Juan Manuel Martinez Caamaño, and Aravind Sukumaran-Rajam. Online dynamic dependence analysis for speculative polyhedral parallelization. In Euro-Par 2013 Parallel Processing, 2013.

Thesis

[1] Aravind Sukumaran-Rajam. Beyond the Realm of the Polyhedral Model: Combining Speculative Program Parallelization with Polyhedral Compilation. PhD thesis, University of Strasbourg, France, 2015.
[2] Aravind Sukumaran-Rajam. Revisiting Pipelined Parallelism in the Polyhedral Framework. Master's thesis, Indian Institute of Science (IISc), India, 2012.

Professional Service

Program Committee

  • ISC High Performance 2018

Reviewer

  • International Parallel and Distributed Processing Symposium, 2018
  • International Parallel and Distributed Processing Symposium, 2017
  • Journal of Parallel and Distributed Computing, 2017
  • International Journal of Parallel Programming, 2017

Course Materials

Feel free to contact me if any course material is missing.

Email: sukumaranrajam dot 1 at osu dot edu


Do not contact me with unsolicited services or offers