Accepted papers at TMLR (@TmlrPub)
4K followers · Following 1 · 3K statuses · Joined March 2022
Bags of Projected Nearest Neighbours: Competitors to Random Forests? David P. Hofmeyr. Action editor: Andres Masegosa. https://t.co/Nfveqb2Ygp
#classifiers #classifier #ensembles
In this paper we introduce a simple and intuitive adaptive k nearest neighbours classifier, and explore its utility within the context of bootstrap aggregating (“bagging”). The approach is based on...
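The abstract names two standard ingredients, k-nearest-neighbour classification and bootstrap aggregating. As a rough illustration of how the two combine (not Hofmeyr's adaptive, projection-based method, which the paper itself defines), a minimal bagged k-NN in plain Python might look like:

```python
import random
from collections import Counter

def knn_predict(x, X, y, k=3):
    # plain k-nearest-neighbour majority vote under squared Euclidean distance
    order = sorted(range(len(X)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(X[i], x)))
    votes = Counter(y[i] for i in order[:k])
    return votes.most_common(1)[0][0]

def bagged_knn_predict(x, X, y, n_bags=25, k=3, seed=0):
    # bagging: fit each ensemble member on a bootstrap resample of the
    # training set, then majority-vote the members' predictions
    rng = random.Random(seed)
    preds = []
    for _ in range(n_bags):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        Xb = [X[i] for i in idx]
        yb = [y[i] for i in idx]
        preds.append(knn_predict(x, Xb, yb, k))
    return Counter(preds).most_common(1)[0][0]
```

Because each bootstrap sample omits roughly a third of the training points, the members disagree near class boundaries, and the vote smooths the decision surface.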
On the Problem of Consistent Anomalies in Zero-Shot Industrial Anomaly Detection Tai Le Gia, Jaehyun Ahn. Action editor: Satoshi Hara. https://t.co/Xua7qGrFCn
#anomaly #anomalies #codegraph
Zero-shot image anomaly classification (AC) and anomaly segmentation (AS) play a crucial role in industrial quality control, where defects must be detected without prior training data. Current...
Rec-R1: Bridging Generative Large Language Models and User-Centric Recommendation Systems via Rei... Jiacheng Lin, Tian Wang, Kun Qian. Action editor: Nino Vieillard. https://t.co/itLcpXsRmg
#rec #forgetting #reinforcement
We propose Rec-R1, a general reinforcement learning framework that bridges large language models (LLMs) with recommendation systems through closed-loop optimization. Unlike prompting and supervised...
Optimizing Time Series Forecasting Architectures: A Hierarchical Neural Architecture Search Approach Difan Deng, Marius Lindauer. Action editor: Yu Cheng. https://t.co/zV2a11gNSc
#forecasting #architectures #architecture
The rapid development of time series forecasting research has brought many deep learning-based modules to this field. However, despite the increasing number of new forecasting architectures, it is...
Learning to Be Cautious Montaser Mohammedalamen, Dustin Morrill, Alexander Sieusahai, Yash Satsangi, Michael Bowling. Action editor: Dileep Kalathil. https://t.co/jgcTiOUrsC
#reinforcement #caution #cautious
A key challenge in the field of reinforcement learning is to develop agents that behave cautiously in novel situations. It is generally impossible to anticipate all situations that an autonomous...
Recurrent Natural Policy Gradient for POMDPs Semih Cayci, Atilla Eryilmaz. Action editor: Martha White. https://t.co/AIvLhUcqn1
#rnns #rnn #reinforcement
Solving partially observable Markov decision processes (POMDPs) is a long-standing challenge in reinforcement learning (RL) due to the inherent curse of dimensionality arising from the...
Improving Single-round Active Adaptation: A Prediction Variability Perspective Xiaoyang Wang, Yibo Jacky Zhang, Olawale Elijah Salaudeen et al. Action editor: Soma Biswas. https://t.co/tNUHGz1DxE
#annotating #annotation #adaptation
Machine learning models trained with offline data often suffer from distribution shifts in online environments and require fast adaptation to online data. The high volume of online data further...
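The "prediction variability" idea in this abstract suggests ranking unlabeled online samples by how much an ensemble of models disagrees on them, then spending the single-round annotation budget there. A hypothetical sketch of such a selection rule (the function names and the mean-variance score are illustrative assumptions, not the paper's exact criterion):

```python
def prediction_variability(prob_rows):
    # prob_rows: one probability vector per model for a single sample;
    # score = mean per-class variance of those probabilities across models
    n_models = len(prob_rows)
    n_classes = len(prob_rows[0])
    total = 0.0
    for c in range(n_classes):
        col = [row[c] for row in prob_rows]
        mean = sum(col) / n_models
        total += sum((v - mean) ** 2 for v in col) / n_models
    return total / n_classes

def select_for_annotation(all_preds, budget):
    # all_preds[i] = per-model probability vectors for sample i;
    # pick the `budget` samples the models disagree on most
    scores = [(prediction_variability(p), i) for i, p in enumerate(all_preds)]
    scores.sort(reverse=True)
    return [i for _, i in scores[:budget]]
```

Samples where all models agree score zero and are skipped; the budget goes to the points whose labels would most constrain the adapted model.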
Adapting Chat Language Models Using Only Target Unlabeled Language Data Atsuki Yamaguchi, Terufumi Morishita, Aline Villavicencio, Nikolaos Aletras. Action editor: Ruoyu Sun. https://t.co/4TaDtyKoWt
#elchat #chat #conversation
Vocabulary expansion (VE) is the de facto approach to language adaptation of large language models (LLMs) by adding new tokens and continuing pre-training on target data. While this is effective...
FORTRESS: Fast, Tuning-Free Retrieval Ensemble for Scalable LLM Safety Chi-Wei Chang, Richard Tzong-Han Tsai. Action editor: Huazheng Wang. https://t.co/i6cdqhTYxi
#adversarial #threat #threats
The rapid adoption of Large Language Models in user-facing applications has magnified security risks, as adversarial prompts continue to circumvent built-in safeguards with increasing...
Pre-Training Representations of Binary Code Using Contrastive Learning Yifan Zhang, Chen Huang, Yueke Zhang, Huajie Shao, Kevin Leach, Yu Huang. Action editor: Chang Xu. https://t.co/VgyBGr6Hml
#binary #code #binaries
Binary code analysis and comprehension is critical to applications in reverse engineering and computer security tasks where source code is not available. Unfortunately, unlike source code, binary...
Permissive Information-Flow Analysis for Large Language Models Shoaib Ahmed Siddiqui, Radhika Gaonkar, Boris Köpf et al. Action editor: Jonathan Ullman. https://t.co/fyL2vaRrfN
#taint #security #confidential
Large Language Models (LLMs) are rapidly becoming commodity components of larger software systems. This poses natural security and privacy problems: poisoned data retrieved from one component can...
An Asymptotically Optimal Algorithm for the Convex Hull Membership Problem Gang Qiao, Ambuj Tewari. Action editor: Ilan Shomorony. https://t.co/VMgy2tZPBT
#optimal #exploration #bandit
We study the convex hull membership (CHM) problem in the pure exploration setting where one aims to efficiently and accurately determine if a given point lies in the convex hull of means of a...
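As background for the noiseless core of the decision problem: a point lies in the convex hull of a finite planar set exactly when it sits on the inner side of every hull edge. A small 2D sketch of that exact check (the paper's pure-exploration setting works from noisy samples of the means, so this is only the geometric subroutine, not its algorithm):

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o); positive means a left turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    # Andrew's monotone chain; returns hull vertices in counter-clockwise order
    pts = sorted(set(points))
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def in_convex_hull(p, points):
    # p is inside (or on the boundary) iff it lies left of every CCW hull
    # edge; assumes the input points are not all collinear
    hull = convex_hull(points)
    n = len(hull)
    return all(cross(hull[i], hull[(i + 1) % n], p) >= 0 for i in range(n))
```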
PixelWorld: Towards Perceiving Everything as Pixels Zhiheng Lyu, Xueguang Ma, Wenhu Chen. Action editor: Stephen James. https://t.co/yVFz1hg5Zp
#visual #semantics #semantic
Recent agentic language models increasingly accept raw camera pixels rather than tokenized text, underscoring the need for a unified perception paradigm. We explore this idea through Perceive...
On Convolutions, Intrinsic Dimension, and Diffusion Models Kin Kwan Leung, Rasa Hosseinzadeh, Gabriel Loaiza-Ganem. Action editor: Qing Qu. https://t.co/eH8FZ1EGzz
#dimensional #intrinsic #generative
The manifold hypothesis asserts that data of interest in high-dimensional ambient spaces, such as image data, lies on unknown low-dimensional submanifolds. Diffusion models (DMs) -- which operate...
Equivalent Linear Mappings of Large Language Models James Robert Golden. Action editor: Shay Cohen. https://t.co/wgS9QtmhGT
#decoders #representations #transform
Despite significant progress in transformer interpretability, an understanding of the computational mechanisms of large language models (LLMs) remains a fundamental challenge. Many approaches...
Taxonomy, Opportunities, and Challenges of Representation Engineering for Large Language Models Jan Wehner, Sahar Abdelnabi, Daniel Tan, David Krueger, Mario Fritz. Action editor: Jake Snell. https://t.co/vBXOBl6jt1
#representation #representations
Representation Engineering (RepE) is a novel paradigm for controlling the behavior of LLMs. Unlike traditional approaches that modify inputs or fine-tune the model, RepE directly manipulates the...
Where are we with calibration under dataset shift in image classification? Mélanie Roschewitz, Raghav Mehta, Fabio De Sousa Ribeiro, Ben Glocker. Action editor: Lei Feng. https://t.co/LZvI7mXa6h
#calibration #calibrated #classification
We conduct an extensive study on the state of calibration under real-world dataset shift for image classification. Our work provides important insights on the choice of post-hoc and in-training...
Diversity-Enhanced and Classification-Aware Prompt Learning for Few-Shot Learning via Stable Diff... Gaoqin Chang, Jun Shu, Xiang Yuan, Deyu Meng. Action editor: Brian Kulis. https://t.co/ZH5oMMTaGe
#generative #images #classification
Recent text-to-image generative models have exhibited an impressive ability to generate fairly realistic images from text prompts. In this work, we explore leveraging off-the-shelf...
Understanding Self-supervised Contrastive Learning through Supervised Objectives Byeongchan Lee. Action editor: Han Bao. https://t.co/B76Eb208WY
#supervised #representation #contrastive
Self-supervised representation learning has achieved impressive empirical success, yet its theoretical understanding remains limited. In this work, we provide a theoretical perspective by...
Is isotropy a good proxy for generalization in time series forecasting with transformers? Rashed Shelim, Shengzhe Xu, Walid Saad, Naren Ramakrishnan. Action editor: Jacek Cyranka. https://t.co/qnvdX98bn9
#softmax #embeddings #representations
Vector representations of contextual embeddings learned by transformer-based models have been shown to be effective even for downstream tasks in numerical domains such as time series...
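As a concrete handle on "isotropy": one common proxy is the average pairwise cosine similarity of a sample of embeddings, which is near zero when the vectors spread uniformly over directions and near one when they collapse into a narrow cone (anisotropy). A quick sketch of that proxy (one of several isotropy measures in the literature; the paper may use a different one, such as a partition-function score):

```python
import math

def mean_cosine(embeddings):
    # average cosine similarity over all unordered pairs of embedding vectors
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    n = len(embeddings)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(cos(embeddings[i], embeddings[j]) for i, j in pairs) / len(pairs)
```

A set of vectors crowded along one axis scores close to 1, while vectors pointing in balanced directions score near (or below) 0.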