Abstract: Annotations play a vital role in highlighting critical aspects of visualizations, aiding in data externalization and exploration, collaborative sensemaking, and visual storytelling. However, ...
InterPLM: Discovering Interpretable Features in Protein Language Models via Sparse Autoencoders InterPLM is a toolkit for extracting, analyzing, and visualizing interpretable features from protein ...
Abstract: Large-scale pre-trained vision-language models (e.g., CLIP) have shown incredible generalization performance in downstream tasks such as video-text retrieval (VTR). Traditional approaches ...