FunAI Profile
FunAI @FunAILab
Followers: 209 · Following: 11 · Media: 1 · Statuses: 9

Research lab led by @y_m_asano at @utn_nuremberg. We conduct fundamental AI research and develop core technology for future Foundation Models.

Nuremberg · Joined July 2024
Yuki (@y_m_asano) · 2 months
@FunAILab + CVMP Lab of @EddyIlg retreat: ☑. From mountains to hackathon to good food, we've had some intense but good days with lots of new ideas 🎉.
0 replies · 1 repost · 9 likes
Yuki (@y_m_asano) · 2 months
Now finally accepted at @emnlpmeeting! I think the technique and its high-level ideas, i) allowing bidirectional attention over the prompt and ii) (maybe) processing the input query differently from answer generation, will stick around.
Yuki (@y_m_asano) · 1 year
Today we introduce Bidirectional Instruction Tuning (Bitune). It's a new way of adapting LLMs for the instruction-answering stage: the model processes the instruction/question with bidirectional attention, while answer generation remains causal.
1 reply · 4 reposts · 50 likes
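A minimal sketch of the attention pattern the Bitune tweet above describes, assuming a standard decoder-only transformer. The function name and the boolean-mask convention (True = may attend) are illustrative choices, not the paper's code:

```python
import torch

def bitune_style_mask(prompt_len: int, answer_len: int) -> torch.Tensor:
    """Attention mask: bidirectional over the prompt, causal for the answer."""
    total = prompt_len + answer_len
    # Start from a standard causal (lower-triangular) mask.
    mask = torch.tril(torch.ones(total, total, dtype=torch.bool))
    # Let every prompt token attend to every other prompt token.
    mask[:prompt_len, :prompt_len] = True
    return mask

# Rows 0-3 (prompt) see all prompt tokens; rows 4-6 (answer) remain causal.
print(bitune_style_mask(prompt_len=4, answer_len=3).int())
```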
Yuki (@y_m_asano) · 3 months
Today we release Franca, a new vision Foundation Model that matches and sometimes outperforms DINOv2. The data, the training code and the model weights (with intermediate checkpoints) are open-source, allowing everyone to build on this. Methodologically, we introduce two new
Shashank (@shawshank_v) · 3 months
Can open-data models beat DINOv2? Today we release Franca, a fully open-sourced vision foundation model. Franca with a ViT-G backbone matches (and often beats) proprietary models like SigLIPv2, CLIP and DINOv2 on various benchmarks, setting a new standard for open-source research 🧵
3 replies · 24 reposts · 174 likes
Ryousuke Yamada (@FragileGoodwill) · 3 months
Hello FunAI Lab at UTN 👋 I'm excited to start a new chapter of my research journey here in Nuremberg as a visiting postdoc. Looking forward to inspiring collaborations and impactful research ahead with @y_m_asano and the amazing students 😀
0 replies · 7 reposts · 21 likes
Yuki (@y_m_asano) · 11 months
LoRA et al. enable personalised model generation and serving, which is crucial as finetuned models still outperform general ones in many tasks. However, serving a base model with many LoRAs is very inefficient! Now, there's a better way: enter Prompt Generation Networks,
1 reply · 9 reposts · 93 likes
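A hedged sketch of the general idea named in the (truncated) tweet above: rather than loading a separate LoRA adapter per user or task, a small network generates soft-prompt embeddings that are prepended to the frozen base model's input, so one shared model serves everyone. The class and parameter names (PromptGenerator, n_prompt_tokens) are assumptions for illustration, not the paper's API:

```python
import torch
import torch.nn as nn

class PromptGenerator(nn.Module):
    """Small network that maps an input summary to soft-prompt embeddings."""
    def __init__(self, d_model: int, n_prompt_tokens: int = 8, d_hidden: int = 256):
        super().__init__()
        self.n = n_prompt_tokens
        self.net = nn.Sequential(
            nn.Linear(d_model, d_hidden),
            nn.GELU(),
            nn.Linear(d_hidden, n_prompt_tokens * d_model),
        )

    def forward(self, pooled_input: torch.Tensor) -> torch.Tensor:
        # pooled_input: (batch, d_model) summary of the input sequence.
        b, d = pooled_input.shape
        return self.net(pooled_input).view(b, self.n, d)

# Usage: prepend generated prompts to the frozen model's token embeddings.
d_model = 512
gen = PromptGenerator(d_model)
token_embeds = torch.randn(2, 16, d_model)          # (batch, seq, d_model)
prompts = gen(token_embeds.mean(dim=1))             # (batch, 8, d_model)
inputs = torch.cat([prompts, token_embeds], dim=1)  # fed to the frozen base model
```

Only the generator is trained per task, which keeps serving cheap: the large base model stays frozen and shared across all users.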
Adina Yakup (@AdinaYakup) · 1 year
Is the community trying to surprise us today? 🤯 Because these benchmark-related papers from different research labs all dropped on the Daily Papers page at once! 🎉📑 https://t.co/pizTMDvIGc ✨ LOKI: A Comprehensive Synthetic Data Detection Benchmark using Large Multimodal
0 replies · 6 reposts · 24 likes
Yuki (@y_m_asano) · 1 year
Today, we're introducing TVBench! 📹💬 Video-language evaluation is crucial, but are we doing it right? We find that current benchmarks fall short in testing temporal understanding. 🧵👇
2 replies · 13 reposts · 68 likes
FunAI (@FunAILab) · 1 year
First paper with our FunAI Lab affiliation :)
Yuki (@y_m_asano) · 1 year
Ever wondered if better LLMs actually have a better understanding of the visual world? 🤔 As it turns out, they do! We find that an LLM's MMLU performance correlates positively with zero-shot performance in a CLIP-like setup when using that LLM to encode the text. 🧵👇
0 replies · 0 reposts · 3 likes
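A minimal sketch of the CLIP-like zero-shot evaluation the tweet above alludes to, assuming image features and LLM-derived text features have already been projected into a shared embedding space; the function name, pooling choice and temperature value are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def zero_shot_logits(image_feats: torch.Tensor,
                     class_text_feats: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """image_feats: (B, D); class_text_feats: (C, D) from the LLM text tower."""
    img = F.normalize(image_feats, dim=-1)
    txt = F.normalize(class_text_feats, dim=-1)
    # Cosine similarity between each image and each class prompt embedding.
    return img @ txt.T / temperature  # (B, C)

# class_text_feats could e.g. be mean-pooled last-layer LLM states over
# prompts like "a photo of a {class}", projected into the shared space.
logits = zero_shot_logits(torch.randn(4, 256), torch.randn(10, 256))
pred = logits.argmax(dim=-1)  # predicted class index per image
```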
FunAI (@FunAILab) · 1 year
0 replies · 0 reposts · 23 likes