Mert Kilickaya

Hello! I am an AI researcher. I focus on self-supervised representation learning with deep networks. My goal is to develop autonomous visual learners that can continuously improve by extracting their own supervision from dynamic visual data streams.

I have worked at the Learning to Learn Lab, where I advanced techniques in self-supervised continual learning, enabling AI systems to autonomously adapt and improve without relying on external annotations. Before that, I obtained my PhD in deep vision at the QUvA Lab, under the guidance of Arnold Smeulders.

Email  |  CV  |  Google Scholar  |  LinkedIn

News

Experience

TU Eindhoven
Learning to Learn Lab, Eindhoven University of Technology, Netherlands
2022-2023

Researcher

QUvA Deep Vision Lab
QUvA Deep Vision Lab, University of Amsterdam, Netherlands
2017-2022

PhD Researcher

Huawei Visual Search Lab
Huawei Visual Search Lab, Helsinki, Finland
March 2021 - January 2022

Research Scientist Intern

Université Laval
Université Laval Computer Vision Lab, Quebec, Canada
March 2015 - October 2015

Research Intern

Research

HyTAS: A Hyperspectral Image Transformer Architecture Search Benchmark and Analysis
Fangqin Zhou, Mert Kilickaya, Joaquin Vanschoren, Ran Piao
ECCV 2024
arXiv / Github

We present HyTAS, a benchmark for hyperspectral vision transformer architecture search.

Locality-Aware Hyperspectral Classification
Fangqin Zhou, Mert Kilickaya, Joaquin Vanschoren
BMVC 2023
arXiv / Github

We propose HyLITE, a vision transformer for hyperspectral image classification.

Towards Label-Efficient Incremental Learning: A Survey
Mert Kilickaya, Joost van de Weijer, Yuki Asano
Preprint 2023
arXiv / Slides / Github

We survey semi-supervised, few-shot, and self-supervised incremental learning.

Are Labels Needed for Incremental Instance Learning?
Mert Kilickaya, Joaquin Vanschoren
CVPRW 2023 (Oral)
arXiv / Slides

We introduce VINIL, a self-supervised incremental instance learner.

Human-Object Interaction Detection via Weak Supervision
Mert Kilickaya, Arnold Smeulders
BMVC 2021
arXiv / Slides

We introduce Align-Former to detect human-object interactions from an image.

Structured Visual Search via Composition-aware Learning
Mert Kilickaya, Arnold Smeulders
WACV 2021
arXiv / Slides

We introduce composition-aware learning to improve visual image search.

Re-evaluating Automatic Metrics for Image Captioning
Mert Kilickaya, Aykut Erdem, Nazli Ikizler-Cinbis, Erkut Erdem
EACL 2017
arXiv / Slides

We propose Word Mover's Distance as a metric for evaluating image captioning.

Patents

Visual Image Search via Conversational Interaction (Huawei)

Mert Kilickaya, Baiqiang Xia

WO Patent, 2022

Network for Interacted Object Localization (Qualcomm)

Mert Kilickaya, Arnold Smeulders

US Patent, 2022

Subject-Object Interaction Recognition Model (Qualcomm)

Mert Kilickaya, Stratis Gavves, Arnold Smeulders

US Patent, 2022

Context-driven Learning of Human-object Interactions (Qualcomm)

Mert Kilickaya, Noureldien Hussein, Stratis Gavves, Arnold Smeulders

US Patent, 2021

Supervision

I am very fortunate to have crossed paths with the kind souls below.

Presentations

Here are some papers from other researchers that I have enjoyed studying and presenting. Congratulations to the authors on their awesome work!


Source code credit to Dr. Jon Barron and ChatGPT