Accepted papers at TMLR

@TmlrPub

Followers: 4K · Following: 1 · Media: 0 · Statuses: 3K

Joined March 2022
10 hours
Rec-R1: Bridging Generative Large Language Models and User-Centric Recommendation Systems via Rei... Jiacheng Lin, Tian Wang, Kun Qian. Action editor: Nino Vieillard. https://t.co/itLcpXsRmg #rec #forgetting #reinforcement
openreview.net: We propose Rec-R1, a general reinforcement learning framework that bridges large language models (LLMs) with recommendation systems through closed-loop optimization. Unlike prompting and supervised...
14 hours
Optimizing Time Series Forecasting Architectures: A Hierarchical Neural Architecture Search Approach Difan Deng, Marius Lindauer. Action editor: Yu Cheng. https://t.co/zV2a11gNSc #forecasting #architectures #architecture
openreview.net: The rapid development of time series forecasting research has brought many deep learning-based modules to this field. However, despite the increasing number of new forecasting architectures, it is...
18 hours
Learning to Be Cautious Montaser Mohammedalamen, Dustin Morrill, Alexander Sieusahai, Yash Satsangi, Michael Bowling. Action editor: Dileep Kalathil. https://t.co/jgcTiOUrsC #reinforcement #caution #cautious
openreview.net: A key challenge in the field of reinforcement learning is to develop agents that behave cautiously in novel situations. It is generally impossible to anticipate all situations that an autonomous...
1 day
Improving Single-round Active Adaptation: A Prediction Variability Perspective Xiaoyang Wang, Yibo Jacky Zhang, Olawale Elijah Salaudeen et al. Action editor: Soma Biswas. https://t.co/tNUHGz1DxE #annotating #annotation #adaptation
openreview.net: Machine learning models trained with offline data often suffer from distribution shifts in online environments and require fast adaptation to online data. The high volume of online data further...
1 day
Adapting Chat Language Models Using Only Target Unlabeled Language Data Atsuki Yamaguchi, Terufumi Morishita, Aline Villavicencio, Nikolaos Aletras. Action editor: Ruoyu Sun. https://t.co/4TaDtyKoWt #elchat #chat #conversation
openreview.net: Vocabulary expansion (VE) is the de-facto approach to language adaptation of large language models (LLMs) by adding new tokens and continuing pre-training on target data. While this is effective...
1 day
FORTRESS: Fast, Tuning-Free Retrieval Ensemble for Scalable LLM Safety Chi-Wei Chang, Richard Tzong-Han Tsai. Action editor: Huazheng Wang. https://t.co/i6cdqhTYxi #adversarial #threat #threats
openreview.net: The rapid adoption of Large Language Models in user-facing applications has magnified security risks, as adversarial prompts continue to circumvent built-in safeguards with increasing...
2 days
Pre-Training Representations of Binary Code Using Contrastive Learning Yifan Zhang, Chen Huang, Yueke Zhang, Huajie Shao, Kevin Leach, Yu Huang. Action editor: Chang Xu. https://t.co/VgyBGr6Hml #binary #code #binaries
openreview.net: Binary code analysis and comprehension is critical to applications in reverse engineering and computer security tasks where source code is not available. Unfortunately, unlike source code, binary...
2 days
Permissive Information-Flow Analysis for Large Language Models Shoaib Ahmed Siddiqui, Radhika Gaonkar, Boris Köpf et al. Action editor: Jonathan Ullman. https://t.co/fyL2vaRrfN #taint #security #confidential
openreview.net: Large Language Models (LLMs) are rapidly becoming commodity components of larger software systems. This poses natural security and privacy problems: poisoned data retrieved from one component can...
3 days
Taxonomy, Opportunities, and Challenges of Representation Engineering for Large Language Models Jan Wehner, Sahar Abdelnabi, Daniel Tan, David Krueger, Mario Fritz. Action editor: Jake Snell. https://t.co/vBXOBl6jt1 #representation #representations
openreview.net: Representation Engineering (RepE) is a novel paradigm for controlling the behavior of LLMs. Unlike traditional approaches that modify inputs or fine-tune the model, RepE directly manipulates the...
3 days
Where are we with calibration under dataset shift in image classification? Mélanie Roschewitz, Raghav Mehta, Fabio De Sousa Ribeiro, Ben Glocker. Action editor: Lei Feng. https://t.co/LZvI7mXa6h #calibration #calibrated #classification
openreview.net: We conduct an extensive study on the state of calibration under real-world dataset shift for image classification. Our work provides important insights on the choice of post-hoc and in-training...
3 days
Diversity-Enhanced and Classification-Aware Prompt Learning for Few-Shot Learning via Stable Diff... Gaoqin Chang, Jun Shu, Xiang Yuan, Deyu Meng. Action editor: Brian Kulis. https://t.co/ZH5oMMTaGe #generative #images #classification
openreview.net: Recent text-to-image generative models have exhibited an impressive ability to generate fairly realistic images from some text prompts. In this work, we explore to leverage off-the-shelf...
3 days
Is isotropy a good proxy for generalization in time series forecasting with transformers? Rashed Shelim, Shengzhe Xu, Walid Saad, Naren Ramakrishnan. Action editor: Jacek Cyranka. https://t.co/qnvdX98bn9 #softmax #embeddings #representations
openreview.net: Vector representations of contextual embeddings learned by transformer-based models have been shown to be effective even for downstream tasks in numerical domains such as time series...