Mert Kilickaya

Hello!

I am a deep learning researcher. My goal is to build autonomous learners that self-improve with minimal human intervention: how can we learn from uncurated, unlabeled data?

I worked at the Learning to Learn Lab, where I focused on self-supervision and continual learning. Before that, I obtained my PhD at the QUvA Lab, where I tackled visual detection with limited supervision under the guidance of Arnold Smeulders.

When I'm not researching, I love capturing beautiful cities with my GoPro. Tavira, Portugal, and Montreal, Canada, are among my favorite places. If you have any suggestions for my next adventure, feel free to contact me!

Email  |  CV  |  Google Scholar  |  LinkedIn

News

Experience

TU Eindhoven
Learning to Learn Lab, Eindhoven University of Technology, Netherlands
2022-2023

Researcher

QUvA Deep Vision Lab
QUvA Deep Vision Lab, University of Amsterdam, Netherlands
2017-2022

PhD Researcher

Huawei Visual Search Lab
Huawei Visual Search Lab, Helsinki, Finland
March 2021 - January 2022

Research Scientist Intern

Université Laval
Université Laval Computer Vision Lab, Quebec, Canada
March 2015 - October 2015

Research Intern

Research

Locality-Aware Hyperspectral Classification
Fangqin Zhou, Mert Kilickaya, Joaquin Vanschoren
BMVC 2023
arXiv / Github

We propose HyLITE, a vision transformer for hyperspectral image classification.

Towards Label-Efficient Incremental Learning: A Survey
Mert Kilickaya, Joost van de Weijer, Yuki Asano
Preprint 2023
arXiv / Slides / Github

We survey semi-supervised, few-shot, and self-supervised incremental learning.

Are Labels Needed for Incremental Instance Learning?
Mert Kilickaya, Joaquin Vanschoren
CVPRW 2023 (Oral)
arXiv / Slides

We introduce VINIL, a self-supervised incremental instance learner.

Human-Object Interaction Detection via Weak Supervision
Mert Kilickaya, Arnold Smeulders
BMVC 2021
arXiv / Slides

We introduce Align-Former to detect human-object interactions in images.

Structured Visual Search via Composition-aware Learning
Mert Kilickaya, Arnold Smeulders
WACV 2021
arXiv / Slides

We introduce composition-aware learning to improve visual image search.

Re-evaluating Automatic Metrics for Image Captioning
Mert Kilickaya, Aykut Erdem, Nazli Ikizler-Cinbis, Erkut Erdem
EACL 2017
arXiv / Slides

We propose the Word Mover's Distance for evaluating image captioning.

Patents

Visual Image Search via Conversational Interaction (Huawei)

Mert Kilickaya, Baiqiang XIA

WO Patent, 2022

Network for Interacted Object Localization (Qualcomm)

Mert Kilickaya, Arnold Smeulders

US Patent, 2022

Subject-Object Interaction Recognition Model (Qualcomm)

Mert Kilickaya, Stratis Gavves, Arnold Smeulders

US Patent, 2022

Context-driven Learning of Human-object Interactions (Qualcomm)

Mert Kilickaya, Noureldien Hussein, Stratis Gavves, Arnold Smeulders

US Patent, 2021

Supervision

I am very fortunate to have crossed paths with the kind souls below.

Presentations

Here are some papers from other researchers that I have enjoyed studying and presenting. Congratulations to the authors on their awesome work!


Source code credit to Dr. Jon Barron and ChatGPT