| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | update_date | classification_label | is_new_dataset | confidence_score | classification_date | model_version |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2505.24687 | Shengyuan Liu | Shengyuan Liu, Wenting Chen, Boyun Zheng, Wentao Pan, Xiang Li, Yixuan Yuan | TumorGen: Boundary-Aware Tumor-Mask Synthesis with Rectified Flow Matching | 10 pages, 4 figures | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tumor data synthesis offers a promising solution to the shortage of annotated medical datasets. However, current approaches either limit tumor diversity by using predefined masks or employ computationally expensive two-stage processes with multiple denoising steps, causing computational inefficiency. Additionally, these methods typically rely on binary masks that fail to capture the gradual transitions characteristic of tumor boundaries. We present TumorGen, a novel Boundary-Aware Tumor-Mask Synthesis with Rectified Flow Matching for efficient 3D tumor synthesis with three key components: a Boundary-Aware Pseudo Mask Generation module that replaces strict binary masks with flexible bounding boxes; a Spatial-Constraint Vector Field Estimator that simultaneously synthesizes tumor latents and masks using rectified flow matching to ensure computational efficiency; and a VAE-guided mask refiner that enhances boundary realism. TumorGen significantly improves computational efficiency by requiring fewer sampling steps while maintaining pathological accuracy through coarse and fine-grained spatial constraints. Experimental results demonstrate TumorGen's superior performance over existing tumor synthesis methods in both efficiency and realism, offering a valuable contribution to AI-driven cancer diagnostics. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.711725 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24689 | Sander Land | Sander Land, Catherine Arnett | BPE Stays on SCRIPT: Structured Encoding for Robust Multilingual Pretokenization | 9 pages, 2 figures. For associated code, see https://github.com/sanderland/script_bpe | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Byte Pair Encoding (BPE) tokenizers, widely used in Large Language Models, face challenges in multilingual settings, including penalization of non-Western scripts and the creation of tokens with partial UTF-8 sequences. Pretokenization, often reliant on complex regular expressions, can also introduce fragility and unexpected edge cases. We propose SCRIPT (Script Category Representation in PreTokenization), a novel encoding scheme that bypasses UTF-8 byte conversion by using initial tokens based on Unicode script and category properties. This approach enables a simple, rule-based pretokenization strategy that respects script boundaries, offering a robust alternative to pretokenization strategies based on regular expressions. We also introduce and validate a constrained BPE merging strategy that enforces character integrity, applicable to both SCRIPT-BPE and byte-based BPE. Our experiments demonstrate that SCRIPT-BPE achieves competitive compression while eliminating encoding-based penalties for non-Latin-script languages. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.709918 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24690 | Simone Alberto Peirone | Simone Alberto Peirone and Francesca Pistilli and Antonio Alliegro and Tatiana Tommasi and Giuseppe Averta | Learning reusable concepts across different egocentric video understanding tasks | Extended abstract derived from arXiv:2502.02487. Presented at the Second Joint Egocentric Vision (EgoVis) Workshop (CVPR 2025) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Our comprehension of video streams depicting human activities is naturally multifaceted: in just a few moments, we can grasp what is happening, identify the relevance and interactions of objects in the scene, and forecast what will happen soon, everything all at once. To endow autonomous systems with such holistic perception, learning how to correlate concepts, abstract knowledge across diverse tasks, and leverage tasks synergies when learning novel skills is essential. In this paper, we introduce Hier-EgoPack, a unified framework able to create a collection of task perspectives that can be carried across downstream tasks and used as a potential source of additional insights, as a backpack of skills that a robot can carry around and use when needed. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.706948 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24692 | Derek Everett | Derek Everett, Fred Lu, Edward Raff, Fernando Camacho, James Holt | Quick-Draw Bandits: Quickly Optimizing in Nonstationary Environments with Extremely Many Arms | KDD 2025, Research Track | null | 10.1145/3711896.3737097 | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Canonical algorithms for multi-armed bandits typically assume a stationary reward environment where the size of the action space (number of arms) is small. More recently developed methods typically relax only one of these assumptions: existing non-stationary bandit policies are designed for a small number of arms, while Lipschitz, linear, and Gaussian process bandit policies are designed to handle a large (or infinite) number of arms in stationary reward environments under constraints on the reward function. In this manuscript, we propose a novel policy to learn reward environments over a continuous space using Gaussian interpolation. We show that our method efficiently learns continuous Lipschitz reward functions with $\mathcal{O}^*(\sqrt{T})$ cumulative regret. Furthermore, our method naturally extends to non-stationary problems with a simple modification. We finally demonstrate that our method is computationally favorable (100-10000x faster) and experimentally outperforms sliding Gaussian process policies on datasets with non-stationarity and an extremely large number of arms. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.712301 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24693 | Julio Silva-Rodr\'iguez | Julio Silva-Rodr\'iguez and Ismail Ben Ayed and Jose Dolz | Conformal Prediction for Zero-Shot Models | CVPR 2025. Code: https://github.com/jusiro/CLIP-Conformal | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Vision-language models pre-trained at large scale have shown unprecedented adaptability and generalization to downstream tasks. Although its discriminative potential has been widely explored, its reliability and uncertainty are still overlooked. In this work, we investigate the capabilities of CLIP models under the split conformal prediction paradigm, which provides theoretical guarantees to black-box models based on a small, labeled calibration set. In contrast to the main body of literature on conformal predictors in vision classifiers, foundation models exhibit a particular characteristic: they are pre-trained on a one-time basis on an inaccessible source domain, different from the transferred task. This domain drift negatively affects the efficiency of the conformal sets and poses additional challenges. To alleviate this issue, we propose Conf-OT, a transfer learning setting that operates transductive over the combined calibration and query sets. Solving an optimal transport problem, the proposed method bridges the domain gap between pre-training and adaptation without requiring additional data splits but still maintaining coverage guarantees. We comprehensively explore this conformal prediction strategy on a broad span of 15 datasets and three non-conformity scores. Conf-OT provides consistent relative improvements of up to 20% on set efficiency while being 15 times faster than popular transductive approaches. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.711678 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24697 | Aaron Conrardy | Aaron Conrardy, Alfredo Capozucca, Jordi Cabot | Towards a unified user modeling language for engineering human centered AI systems | Accepted at the Third Workshop on Engineering Interactive Systems Embedding AI Technologies (EISEAIT workshop at EICS 2025) | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | In today's digital society, personalization has become a crucial aspect of software applications, significantly impacting user experience and engagement. A new wave of intelligent user interfaces, such as AI-based conversational agents, has the potential to enable such personalization beyond what other types of interfaces could offer in the past. Personalization requires the ability to specify a complete user profile, covering as many dimensions as possible, such as potential accessibility constraints, interaction preferences, and even hobbies. In this sense, this paper presents the concepts of a unified user modeling language, aimed to combine previous approaches in a single proposal. Additionally, a proof of concept has been developed that leverages user profiles modeled using our language to automatically adapt a conversational agent. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.709283 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24701 | Tejul Pandit | Tejul Pandit, Meet Raval, and Dhvani Upadhyay | Multi-Domain ABSA Conversation Dataset Generation via LLMs for Real-World Evaluation and Model Comparison | 11 pages, 3 figures, 5 tables, 6th International Conference on Natural Language Computing and AI (NLCAI 2025), ISBN : 978-1-923107-59-5, Computer Science & Information Technology (CS & IT), ISSN : 2231 - 5403, Volume 15, Number 10, May 2025 | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Aspect-Based Sentiment Analysis (ABSA) offers granular insights into opinions but often suffers from the scarcity of diverse, labeled datasets that reflect real-world conversational nuances. This paper presents an approach for generating synthetic ABSA data using Large Language Models (LLMs) to address this gap. We detail the generation process aimed at producing data with consistent topic and sentiment distributions across multiple domains using GPT-4o. The quality and utility of the generated data were evaluated by assessing the performance of three state-of-the-art LLMs (Gemini 1.5 Pro, Claude 3.5 Sonnet, and DeepSeek-R1) on topic and sentiment classification tasks. Our results demonstrate the effectiveness of the synthetic data, revealing distinct performance trade-offs among the models: DeepSeekR1 showed higher precision, Gemini 1.5 Pro and Claude 3.5 Sonnet exhibited strong recall, and Gemini 1.5 Pro offered significantly faster inference. We conclude that LLM-based synthetic data generation is a viable and flexible method for creating valuable ABSA resources, facilitating research and model evaluation without reliance on limited or inaccessible real-world labeled data. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.704983 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24703 | Dennis Jacob | Dennis Jacob, Chong Xiang, Prateek Mittal | PatchDEMUX: A Certifiably Robust Framework for Multi-label Classifiers Against Adversarial Patches | CVPR 2025 | null | null | null | cs.CR cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning techniques have enabled vast improvements in computer vision technologies. Nevertheless, these models are vulnerable to adversarial patch attacks which catastrophically impair performance. The physically realizable nature of these attacks calls for certifiable defenses, which feature provable guarantees on robustness. While certifiable defenses have been successfully applied to single-label classification, limited work has been done for multi-label classification. In this work, we present PatchDEMUX, a certifiably robust framework for multi-label classifiers against adversarial patches. Our approach is a generalizable method which can extend any existing certifiable defense for single-label classification; this is done by considering the multi-label classification task as a series of isolated binary classification problems to provably guarantee robustness. Furthermore, in the scenario where an attacker is limited to a single patch we propose an additional certification procedure that can provide tighter robustness bounds. Using the current state-of-the-art (SOTA) single-label certifiable defense PatchCleanser as a backbone, we find that PatchDEMUX can achieve non-trivial robustness on the MS-COCO and PASCAL VOC datasets while maintaining high clean performance | 2025-06-02T00:00:00 | no_new_dataset | false | 0.712214 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24704 | Hideaki Kim | Hideaki Kim, Tomoharu Iwata, Akinori Fujino | K$^2$IE: Kernel Method-based Kernel Intensity Estimators for Inhomogeneous Poisson Processes | Accepted to ICML 2025 | null | null | null | stat.ML cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Kernel method-based intensity estimators, formulated within reproducing kernel Hilbert spaces (RKHSs), and classical kernel intensity estimators (KIEs) have been among the most easy-to-implement and feasible methods for estimating the intensity functions of inhomogeneous Poisson processes. While both approaches share the term "kernel", they are founded on distinct theoretical principles, each with its own strengths and limitations. In this paper, we propose a novel regularized kernel method for Poisson processes based on the least squares loss and show that the resulting intensity estimator involves a specialized variant of the representer theorem: it has the dual coefficient of unity and coincides with classical KIEs. This result provides new theoretical insights into the connection between classical KIEs and kernel method-based intensity estimators, while enabling us to develop an efficient KIE by leveraging advanced techniques from RKHS theory. We refer to the proposed model as the kernel method-based kernel intensity estimator (K$^2$IE). Through experiments on synthetic datasets, we show that K$^2$IE achieves comparable predictive performance while significantly surpassing the state-of-the-art kernel method-based estimator in computational efficiency. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.710724 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24709 | Soichiro Nishimori | Soichiro Nishimori, Yu-Jie Zhang, Thanawat Lodkaew, and Masashi Sugiyama | On Symmetric Losses for Robust Policy Optimization with Noisy Preferences | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Optimizing policies based on human preferences is key to aligning language models with human intent. This work focuses on reward modeling, a core component in reinforcement learning from human feedback (RLHF), and offline preference optimization, such as direct preference optimization. Conventional approaches typically assume accurate annotations. However, real-world preference data often contains noise due to human errors or biases. We propose a principled framework for robust policy optimization under noisy preferences, viewing reward modeling as a classification problem. This allows us to leverage symmetric losses, known for their robustness to label noise in classification, leading to our Symmetric Preference Optimization (SymPO) method. We prove that symmetric losses enable successful policy optimization even under noisy labels, as the resulting reward remains rank-preserving -- a property sufficient for policy improvement. Experiments on synthetic and real-world tasks demonstrate the effectiveness of SymPO. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.710004 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24710 | Wei Chen | Wei Chen, Jiahao Zhang, Haipeng Zhu, Boyan Xu, Zhifeng Hao, Keli Zhang, Junjian Ye, Ruichu Cai | Causal-aware Large Language Models: Enhancing Decision-Making Through Learning, Adapting and Acting | Accepted by IJCAI 2025 | null | null | null | cs.LG cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have shown great potential in decision-making due to the vast amount of knowledge stored within the models. However, these pre-trained models are prone to lack reasoning abilities and are difficult to adapt to new environments, further hindering their application to complex real-world tasks. To address these challenges, inspired by the human cognitive process, we propose Causal-aware LLMs, which integrate the structural causal model (SCM) into the decision-making process to model, update, and utilize structured knowledge of the environment in a ``learning-adapting-acting" paradigm. Specifically, in the learning stage, we first utilize an LLM to extract the environment-specific causal entities and their causal relations to initialize a structured causal model of the environment. Subsequently,in the adapting stage, we update the structured causal model through external feedback about the environment, via an idea of causal intervention. Finally, in the acting stage, Causal-aware LLMs exploit structured causal knowledge for more efficient policy-making through the reinforcement learning agent. The above processes are performed iteratively to learn causal knowledge, ultimately enabling the causal-aware LLMs to achieve a more accurate understanding of the environment and make more efficient decisions. Experimental results across 22 diverse tasks within the open-world game ``Crafter" validate the effectiveness of our proposed method. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.71257 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24712 | Guido Ivetta | Guido Ivetta (1 and 2), Marcos J. Gomez (1 and 2), Sof\'ia Martinelli (1), Pietro Palombini (1), M. Emilia Echeveste (1 and 2), Nair Carolina Mazzeo (2), Beatriz Busaniche (2), Luciana Benotti (1 and 2) ((1) Universidad Nacional de C\'ordoba, Argentina, (2) Fundaci\'on V\'ia Libre) | HESEIA: A community-based dataset for evaluating social biases in large language models, co-designed in real school settings in Latin America | null | null | null | null | cs.CL cs.CY | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Most resources for evaluating social biases in Large Language Models are developed without co-design from the communities affected by these biases, and rarely involve participatory approaches. We introduce HESEIA, a dataset of 46,499 sentences created in a professional development course. The course involved 370 high-school teachers and 5,370 students from 189 Latin-American schools. Unlike existing benchmarks, HESEIA captures intersectional biases across multiple demographic axes and school subjects. It reflects local contexts through the lived experience and pedagogical expertise of educators. Teachers used minimal pairs to create sentences that express stereotypes relevant to their school subjects and communities. We show the dataset diversity in term of demographic axes represented and also in terms of the knowledge areas included. We demonstrate that the dataset contains more stereotypes unrecognized by current LLMs than previous datasets. HESEIA is available to support bias assessments grounded in educational communities. | 2025-06-02T00:00:00 | new_dataset | true | 0.709577 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24713 | Badr M. Abdullah | Badr M. Abdullah, Matthew Baas, Bernd M\"obius, Dietrich Klakow | Voice Conversion Improves Cross-Domain Robustness for Spoken Arabic Dialect Identification | Accepted in Interspeech 2025 | null | null | null | cs.CL cs.SD eess.AS | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Arabic dialect identification (ADI) systems are essential for large-scale data collection pipelines that enable the development of inclusive speech technologies for Arabic language varieties. However, the reliability of current ADI systems is limited by poor generalization to out-of-domain speech. In this paper, we present an effective approach based on voice conversion for training ADI models that achieves state-of-the-art performance and significantly improves robustness in cross-domain scenarios. Evaluated on a newly collected real-world test set spanning four different domains, our approach yields consistent improvements of up to +34.1% in accuracy across domains. Furthermore, we present an analysis of our approach and demonstrate that voice conversion helps mitigate the speaker bias in the ADI dataset. We release our robust ADI model and cross-domain evaluation dataset to support the development of inclusive speech technologies for Arabic. | 2025-06-02T00:00:00 | new_dataset | true | 0.711175 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24714 | Junyu Luo | Junyu Luo, Zhizhuo Kou, Liming Yang, Xiao Luo, Jinsheng Huang, Zhiping Xiao, Jingshu Peng, Chengzhong Liu, Jiaming Ji, Xuanzhe Liu, Sirui Han, Ming Zhang, Yike Guo | FinMME: Benchmark Dataset for Financial Multi-Modal Reasoning Evaluation | ACL 2025 Main Conference | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Multimodal Large Language Models (MLLMs) have experienced rapid development in recent years. However, in the financial domain, there is a notable lack of effective and specialized multimodal evaluation datasets. To advance the development of MLLMs in the finance domain, we introduce FinMME, encompassing more than 11,000 high-quality financial research samples across 18 financial domains and 6 asset classes, featuring 10 major chart types and 21 subtypes. We ensure data quality through 20 annotators and carefully designed validation mechanisms. Additionally, we develop FinScore, an evaluation system incorporating hallucination penalties and multi-dimensional capability assessment to provide an unbiased evaluation. Extensive experimental results demonstrate that even state-of-the-art models like GPT-4o exhibit unsatisfactory performance on FinMME, highlighting its challenging nature. The benchmark exhibits high robustness with prediction variations under different prompts remaining below 1%, demonstrating superior reliability compared to existing datasets. Our dataset and evaluation protocol are available at https://huggingface.co/datasets/luojunyu/FinMME and https://github.com/luo-junyu/FinMME. | 2025-06-02T00:00:00 | new_dataset | true | 0.715607 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24715 | Prabhu Teja Sivaprasad | Fabio Fehr, Prabhu Teja Sivaprasad, Luca Franceschi, Giovanni Zappella | CoRet: Improved Retriever for Code Editing | ACL 2025 | null | null | null | cs.LG cs.AI cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this paper, we introduce CoRet, a dense retrieval model designed for code-editing tasks that integrates code semantics, repository structure, and call graph dependencies. The model focuses on retrieving relevant portions of a code repository based on natural language queries such as requests to implement new features or fix bugs. These retrieved code chunks can then be presented to a user or to a second code-editing model or agent. To train CoRet, we propose a loss function explicitly designed for repository-level retrieval. On SWE-bench and Long Code Arena's bug localisation datasets, we show that our model substantially improves retrieval recall by at least 15 percentage points over existing models, and ablate the design choices to show their importance in achieving these results. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.712118 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24716 | Christopher Buss | Christopher Buss, Mahdis Safari, Arash Termehchy, Stefan Lee, David Maier | Towards Scalable Schema Mapping using Large Language Models | null | null | null | null | cs.DB cs.AI | http://creativecommons.org/licenses/by/4.0/ | The growing need to integrate information from a large number of diverse sources poses significant scalability challenges for data integration systems. These systems often rely on manually written schema mappings, which are complex, source-specific, and costly to maintain as sources evolve. While recent advances suggest that large language models (LLMs) can assist in automating schema matching by leveraging both structural and natural language cues, key challenges remain. In this paper, we identify three core issues with using LLMs for schema mapping: (1) inconsistent outputs due to sensitivity to input phrasing and structure, which we propose methods to address through sampling and aggregation techniques; (2) the need for more expressive mappings (e.g., GLaV), which strain the limited context windows of LLMs; and (3) the computational cost of repeated LLM calls, which we propose to mitigate through strategies like data type prefiltering. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.713058 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24717 | Benjamin Holzschuh | Benjamin Holzschuh, Qiang Liu, Georg Kohl and Nils Thuerey | PDE-Transformer: Efficient and Versatile Transformers for Physics Simulations | ICML 2025. Code available at https://github.com/tum-pbs/pde-transformer | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce PDE-Transformer, an improved transformer-based architecture for surrogate modeling of physics simulations on regular grids. We combine recent architectural improvements of diffusion transformers with adjustments specific for large-scale simulations to yield a more scalable and versatile general-purpose transformer architecture, which can be used as the backbone for building large-scale foundation models in physical sciences. We demonstrate that our proposed architecture outperforms state-of-the-art transformer architectures for computer vision on a large dataset of 16 different types of PDEs. We propose to embed different physical channels individually as spatio-temporal tokens, which interact via channel-wise self-attention. This helps to maintain a consistent information density of tokens when learning multiple types of PDEs simultaneously. We demonstrate that our pre-trained models achieve improved performance on several challenging downstream tasks compared to training from scratch and also beat other foundation model architectures for physics simulations. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.710279 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24721 | Nick Rossenbach | Nick Rossenbach, Benedikt Hilmes, Leon Brackmann, Moritz Gunz, Ralf Schl\"uter | Running Conventional Automatic Speech Recognition on Memristor Hardware: A Simulated Approach | Accepted for the Blue Sky track at Interspeech 2025 | null | null | null | cs.LG cs.AR cs.ET | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Memristor-based hardware offers new possibilities for energy-efficient machine learning (ML) by providing analog in-memory matrix multiplication. Current hardware prototypes cannot fit large neural networks, and related literature covers only small ML models for tasks like MNIST or single word recognition. Simulation can be used to explore how hardware properties affect larger models, but existing software assumes simplified hardware. We propose a PyTorch-based library based on "Synaptogen" to simulate neural network execution with accurately captured memristor hardware properties. For the first time, we show how an ML system with millions of parameters would behave on memristor hardware, using a Conformer trained on the speech recognition task TED-LIUMv2 as example. With adjusted quantization-aware training, we limit the relative degradation in word error rate to 25% when using a 3-bit weight precision to execute linear operations via simulated analog computation. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.709616 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24724 | Zhipeng Wang | Xihan Xiong, Zhipeng Wang, Qin Wang, Endong Liu, Pascal Berrang, William Knottenbelt | Talking Transactions: Decentralized Communication through Ethereum Input Data Messages (IDMs) | null | null | null | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | Can you imagine, blockchain transactions can talk! In this paper, we study how they talk and what they talk about. We focus on the input data field of Ethereum transactions, which is designed to allow external callers to interact with smart contracts. In practice, this field also enables users to embed natural language messages into transactions. Users can leverage these Input Data Messages (IDMs) for peer-to-peer communication. This means that, beyond Ethereum's well-known role as a financial infrastructure, it also serves as a decentralized communication medium. We present the first large-scale analysis of Ethereum IDMs from the genesis block to February 2024 (3134 days). We filter IDMs to extract 867,140 transactions with informative IDMs and use LLMs for language detection. We find that English (95.4%) and Chinese (4.4%) dominate the use of natural languages in IDMs. Interestingly, English IDMs center on security and scam warnings (24%) with predominantly negative emotions, while Chinese IDMs emphasize emotional expression and social connection (44%) with a more positive tone. We also observe that longer English IDMs often transfer high ETH values for protocol-level purposes, while longer Chinese IDMs tend to involve symbolic transfer amounts for emotional intent. Moreover, we find that the IDM participants tend to form small, loosely connected communities (59.99%). Our findings highlight culturally and functionally divergent use cases of the IDM channel across user communities. We further examine the security relevance of IDMs in on-chain attacks. Many victims use them to appeal to attackers for fund recovery. IDMs containing negotiations or reward offers are linked to higher reply rates. We also analyze IDMs' regulatory implications. Their misuse for abuse, threats, and sexual solicitation reveals the urgent need for content moderation and regulation in decentralized systems. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.707087 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24725 | Ali W. Elshaari | Jun Gao, Jin Chang, Bruno Lopez Rodriguez, Iman Esmaeil Zadeh, Val Zwiller, Ali W. Elshaari | From Pixels to Camera: Scaling Superconducting Nanowire Single-Photon Detectors for Imaging at the Quantum-Limit | null | null | null | null | quant-ph cond-mat.supr-con physics.app-ph physics.optics | http://creativecommons.org/licenses/by/4.0/ | Superconducting nanowire single-photon detectors (SNSPDs) have emerged as essential devices that push the boundaries of photon detection with unprecedented sensitivity, ultrahigh timing precision, and broad spectral response. Recent advancements in materials engineering, superconducting electronics integration, and cryogenic system design are enabling the evolution of SNSPDs from single-pixel detectors toward scalable arrays and large-format single-photon time tagging cameras. This perspective article surveys the rapidly evolving technological landscape underpinning this transition, focusing on innovative superconducting materials, advanced multiplexed read-out schemes, and emerging cryo-compatible electronics. We highlight how these developments are set to profoundly impact diverse applications, including quantum communication networks, deep-tissue biomedical imaging, single-molecule spectroscopy, remote sensing with unprecedented resolution, and the detection of elusive dark matter signals. By critically discussing both current challenges and promising solutions, we aim to articulate a clear, coherent vision for the next generation of SNSPD-based quantum imaging systems. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.71324 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24726 | Melisa Russak | Shelly Bensal, Umar Jamil, Christopher Bryant, Melisa Russak, Kiran Kamble, Dmytro Mozolevskyi, Muayad Ali, Waseem AlShikh | Reflect, Retry, Reward: Self-Improving LLMs via Reinforcement Learning | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | We explore a method for improving the performance of large language models through self-reflection and reinforcement learning. By incentivizing the model to generate better self-reflections when it answers incorrectly, we demonstrate that a model's ability to solve complex, verifiable tasks can be enhanced even when generating synthetic data is infeasible and only binary feedback is available. Our framework operates in two stages: first, upon failing a given task, the model generates a self-reflective commentary analyzing its previous attempt; second, the model is given another attempt at the task with the self-reflection in context. If the subsequent attempt succeeds, the tokens generated during the self-reflection phase are rewarded. Our experimental results show substantial performance gains across a variety of model architectures, as high as 34.7% improvement at math equation writing and 18.1% improvement at function calling. Notably, smaller fine-tuned models (1.5 billion to 7 billion parameters) outperform models in the same family that are 10 times larger. Our novel paradigm is thus an exciting pathway to more useful and reliable language models that can self-improve on challenging tasks with limited external feedback. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.71251 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24727 | Xiaochen Zhang | Xiaochen Zhang and Haoyi Xiong | Knockoff-Guided Compressive Sensing: A Statistical Machine Learning Framework for Support-Assured Signal Recovery | null | null | null | null | stat.ML cs.LG eess.SP | http://creativecommons.org/publicdomain/zero/1.0/ | This paper introduces a novel Knockoff-guided compressive sensing framework, referred to as \TheName{}, which enhances signal recovery by leveraging precise false discovery rate (FDR) control during the support identification phase. Unlike LASSO, which jointly performs support selection and signal estimation without explicit error control, our method guarantees FDR control in finite samples, enabling more reliable identification of the true signal support. By separating and controlling the support recovery process through statistical Knockoff filters, our framework achieves more accurate signal reconstruction, especially in challenging scenarios where traditional methods fail. We establish theoretical guarantees demonstrating how FDR control directly ensures recovery performance under weaker conditions than traditional $\ell_1$-based compressive sensing methods, while maintaining accurate signal reconstruction. Extensive numerical experiments demonstrate that our proposed Knockoff-based method consistently outperforms LASSO-based and other state-of-the-art compressive sensing techniques. In simulation studies, our method improves F1-score by up to 3.9x over baseline methods, attributed to principled false discovery rate (FDR) control and enhanced support recovery. The method also consistently yields lower reconstruction and relative errors. We further validate the framework on real-world datasets, where it achieves top downstream predictive performance across both regression and classification tasks, often narrowing or even surpassing the performance gap relative to uncompressed signals. 
These results establish \TheName{} as a robust and practical alternative to existing approaches, offering both theoretical guarantees and strong empirical performance through statistically grounded support selection. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.71148 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24728 | Dongzi Jin | Dongzi Jin, Yong Xiao, Yingyu Li | Robust Federated Learning against Model Perturbation in Edge Networks | Accepted by IEEE ICC 2025 | null | null | null | cs.LG cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Federated Learning (FL) is a promising paradigm for realizing edge intelligence, allowing collaborative learning among distributed edge devices by sharing models instead of raw data. However, the shared models are often assumed to be ideal, which would be inevitably violated in practice due to various perturbations, leading to significant performance degradation. To overcome this challenge, we propose a novel method, termed Sharpness-Aware Minimization-based Robust Federated Learning (SMRFL), which aims to improve model robustness against perturbations by exploring the geometrical property of the model landscape. Specifically, SMRFL solves a min-max optimization problem that promotes model convergence towards a flat minimum by minimizing the maximum loss within a neighborhood of the model parameters. In this way, model sensitivity to perturbations is reduced, and robustness is enhanced since models in the neighborhood of the flat minimum also enjoy low loss values. The theoretical result proves that SMRFL can converge at the same rate as FL without perturbations. Extensive experimental results show that SMRFL significantly enhances robustness against perturbations compared to three baseline methods on two real-world datasets under three perturbation scenarios. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.710632 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24729 | Magamed Taimeskhanov | Magamed Taimeskhanov, Damien Garreau | Feature Attribution from First Principles | 30 pages, 3 figures | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Feature attribution methods are a popular approach to explain the behavior of machine learning models. They assign importance scores to each input feature, quantifying their influence on the model's prediction. However, evaluating these methods empirically remains a significant challenge. To bypass this shortcoming, several prior works have proposed axiomatic frameworks that any feature attribution method should satisfy. In this work, we argue that such axioms are often too restrictive, and propose in response a new feature attribution framework, built from the ground up. Rather than imposing axioms, we start by defining attributions for the simplest possible models, i.e., indicator functions, and use these as building blocks for more complex models. We then show that one recovers several existing attribution methods, depending on the choice of atomic attribution. Subsequently, we derive closed-form expressions for attribution of deep ReLU networks, and take a step toward the optimization of evaluation metrics with respect to feature attributions. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.710865 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24731 | Alan Sun | Alan Sun | Circuit Stability Characterizes Language Model Generalization | 16 pages, 10 figures | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Extensively evaluating the capabilities of (large) language models is difficult. Rapid development of state-of-the-art models induces benchmark saturation, while creating more challenging datasets is labor-intensive. Inspired by the recent developments in mechanistic interpretability, we introduce circuit stability as a new way to assess model performance. Circuit stability refers to a model's ability to apply a consistent reasoning process-its circuit-across various inputs. We mathematically formalize circuit stability and circuit equivalence. Then, through three case studies, we empirically show that circuit stability and the lack thereof can characterize and predict different aspects of generalization. Our proposed methods offer a step towards rigorously relating the generality of models to their interpretability. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.708485 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset
2505.24733 | Jiaxu Zhang | Jiaxu Zhang, Xianfang Zeng, Xin Chen, Wei Zuo, Gang Yu, Guosheng Lin, Zhigang Tu | DreamDance: Animating Character Art via Inpainting Stable Gaussian Worlds | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents DreamDance, a novel character art animation framework capable of producing stable, consistent character and scene motion conditioned on precise camera trajectories. To achieve this, we re-formulate the animation task as two inpainting-based steps: Camera-aware Scene Inpainting and Pose-aware Video Inpainting. The first step leverages a pre-trained image inpainting model to generate multi-view scene images from the reference art and optimizes a stable large-scale Gaussian field, which enables coarse background video rendering with camera trajectories. However, the rendered video is rough and only conveys scene motion. To resolve this, the second step trains a pose-aware video inpainting model that injects the dynamic character into the scene video while enhancing background quality. Specifically, this model is a DiT-based video generation model with a gating strategy that adaptively integrates the character's appearance and pose information into the base background video. Through extensive experiments, we demonstrate the effectiveness and generalizability of DreamDance, producing high-quality and consistent character animations with remarkable camera dynamics. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.711625 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24735 | Yu Hin (Gary) Au | Yu Hin Au and Levent Tun\c{c}el | A Computational Search for Minimal Obstruction Graphs for the Lov\'{a}sz--Schrijver SDP Hierarchy | null | null | null | null | math.CO cs.DM math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the lift-and-project relaxations of the stable set polytope of graphs generated by $\text{LS}_+$, the SDP lift-and-project operator devised by Lov\'{a}sz and Schrijver. In particular, we focus on searching for $\ell$-minimal graphs, which are graphs on $3\ell$ vertices whose stable set polytope has rank $\ell$ with respect to $\text{LS}_+$. These are the graphs which are the most challenging for the $\text{LS}_+$ operator according to one of the main complexity measures (smallest graphs with largest $\text{LS}_+$-rank). We introduce the notion of $\text{LS}_+$ certificate packages, which is a framework that allows for efficient and reliable verification of membership of points in $\text{LS}_+$-relaxations. Using this framework, we present numerical certificates which (combined with other results) show that there are at least $49$ $3$-minimal graphs, as well as over $4000$ $4$-minimal graphs. This marks a significant leap from the $14$ $3$-minimal and $588$ $4$-minimal graphs known before this work, with many of the newly-discovered graphs containing novel structures which help enrich and recalibrate our understanding of $\ell$-minimal graphs. Some of this computational work leads to interesting conjectures. We also find all of the smallest vertex-transitive graphs with $\text{LS}_+$-rank $\ell$ for every $\ell \leq 4$. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.706275 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset
2505.24736 | Julio Cesar Cavalcanti | Julio Cesar Cavalcanti, Gabriel Skantze | "Dyadosyncrasy", Idiosyncrasy and Demographic Factors in Turn-Taking | Accepted to Interspeech 2025 | null | null | null | eess.AS cs.CL | http://creativecommons.org/licenses/by/4.0/ | Turn-taking in dialogue follows universal constraints but also varies significantly. This study examines how demographic (sex, age, education) and individual factors shape turn-taking using a large dataset of US English conversations (Fisher). We analyze Transition Floor Offset (TFO) and find notable interspeaker variation. Sex and age have small but significant effects: female speakers and older individuals exhibit slightly shorter offsets, while education shows no effect. Lighter topics correlate with shorter TFOs. However, individual differences have a greater impact, driven by a strong idiosyncratic and an even stronger "dyadosyncratic" component - speakers in a dyad resemble each other more than they resemble themselves in different dyads. This suggests that the dyadic relationship and joint activity are the strongest determinants of TFO, outweighing demographic influences. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.704007 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset
2505.24737 | Erchi Wang | Erchi Wang, Yuqing Zhu, Yu-Xiang Wang | Adapting to Linear Separable Subsets with Large-Margin in Differentially Private Learning | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper studies the problem of differentially private empirical risk minimization (DP-ERM) for binary linear classification. We obtain an efficient $(\varepsilon,\delta)$-DP algorithm with an empirical zero-one risk bound of $\tilde{O}\left(\frac{1}{\gamma^2\varepsilon n} + \frac{|S_{\mathrm{out}}|}{\gamma n}\right)$ where $n$ is the number of data points, $S_{\mathrm{out}}$ is an arbitrary subset of data one can remove and $\gamma$ is the margin of linear separation of the remaining data points (after $S_{\mathrm{out}}$ is removed). Here, $\tilde{O}(\cdot)$ hides only logarithmic terms. In the agnostic case, we improve the existing results when the number of outliers is small. Our algorithm is highly adaptive because it does not require knowing the margin parameter $\gamma$ or outlier subset $S_{\mathrm{out}}$. We also derive a utility bound for the advanced private hyperparameter tuning algorithm. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.711735 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24740 | Kalina Dimitrova | Kalina Dimitrova, Venelin Kozhuharov, Ruslan Nastaev and Peicho Petkov | Cluster Reconstruction in Electromagnetic Calorimeters Using Machine Learning Methods | null | null | null | null | physics.ins-det | http://creativecommons.org/licenses/by/4.0/ | Machine-learning-based methods can be developed for the reconstruction of clusters in segmented detectors for high energy physics experiments. Convolutional neural networks with autoencoder architecture trained on labeled data from a simulated dataset reconstruct events by providing information about the hit point and energy of each particle that has entered the detector. The correct reconstruction of the position and the energy of the incident particles is crucial for accurate event reconstruction. The presented method shows the ability to reconstruct the impact point within the same segment as the true position and determines the particle energy with good precision. It can be applied in a wide range of cases of event reconstruction where the good separation of overlapping signals plays a key role in the data analysis. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.712568 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset
2505.24741 | Alfonso M. Ganan-Calvo | Alfonso M. Ganan-Calvo and Miguel A. Herrada and Jens Eggers | Cone-jet Stokes solutions in strong viscous flows: the vanishing flow rate limit | 38 pages, 15 figures, 1 appendix | null | null | null | physics.flu-dyn | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Steady tip streaming in the vanishing flow rate limit has been evidenced both experimentally and numerically in the literature. However, local conical Stokes flow solutions supporting these results at vanishingly small scales around the emitting tip have remained elusive. This work presents approximate local conical solutions in liquid-liquid flow focusing and tip streaming, in general, as the limit of a macroscopic vanishing issued flow rate. This provides mathematical foundations for the existence of an asymptotically vanishing scale at the tip of an intermediate conical flow geometry with angle $\alpha$. For a sufficiently small inner-to-outer liquid viscosity ratio $\lambda$, these solutions exhibit a universal power-law relationship between this ratio and the cone angle as $\alpha=k \lambda^{1/2}$, where the prefactor $k$, of the order of unity, depends on the geometric details of the macroscopic flow. This confirms the existing proposals that anticipate the use of flow focusing and tip streaming technologies for tight control of microscopic scales, down to those where diffuse liquid-liquid interfaces become manifested. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.712639 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset
2505.24742 | Andres Munoz-Arcentales Ph. D. | Irene Plaza-Ortiz, Andres Munoz-Arcentales, Joaqu\'in Salvach\'ua, Carlos Aparicio, Gabriel Huecas, Enrique Barra | Authentication and authorization in Data Spaces: A relationship-based access control approach for policy specification based on ODRL | Accepted: OPAL 2025: ODRL And Beyond: Practical Applications And Challenges For Policy-Base Access And Usage Control., June 01--02, 2025, Portoro\v{z}, Slovenia | null | null | null | cs.CR cs.ET | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Data has become a crucial resource in the digital economy, fostering initiatives for secure and sovereign data sharing frameworks such as Data Spaces. However, these distributed environments require fine-grained access control mechanisms that balance openness with sovereignty and security. This paper proposes an extension of the Open Digital Rights Language (ODRL) standard, the ODRL Data Spaces (ODS) profile, aimed at supporting authorization and complementing existing authentication mechanisms throughout the data lifecycle. Additionally, a policy execution engine is introduced to translate ODRL policies into executable formats, enabling effective enforcement. The approach is validated through a use case involving OpenFGA, demonstrating its applicability to relationship-based access control scenarios. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.711695 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24746 | Jiazhong Cen | Jiazhong Cen, Xudong Zhou, Jiemin Fang, Changsong Wen, Lingxi Xie, Xiaopeng Zhang, Wei Shen, Qi Tian | Tackling View-Dependent Semantics in 3D Language Gaussian Splatting | ICML 2025 camera ready. Project Page: https://jumpat.github.io/laga-page/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in 3D Gaussian Splatting (3D-GS) enable high-quality 3D scene reconstruction from RGB images. Many studies extend this paradigm for language-driven open-vocabulary scene understanding. However, most of them simply project 2D semantic features onto 3D Gaussians and overlook a fundamental gap between 2D and 3D understanding: a 3D object may exhibit various semantics from different viewpoints--a phenomenon we term view-dependent semantics. To address this challenge, we propose LaGa (Language Gaussians), which establishes cross-view semantic connections by decomposing the 3D scene into objects. Then, it constructs view-aggregated semantic representations by clustering semantic descriptors and reweighting them based on multi-view semantics. Extensive experiments demonstrate that LaGa effectively captures key information from view-dependent semantics, enabling a more comprehensive understanding of 3D scenes. Notably, under the same settings, LaGa achieves a significant improvement of +18.7% mIoU over the previous SOTA on the LERF-OVS dataset. Our code is available at: https://github.com/SJTU-DeepVisionLab/LaGa. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.712592 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24751 | Sangmin Kim | Sangmin Kim and Hae-Won Park | EL-AGHF: Extended Lagrangian Affine Geometric Heat Flow | 6 pages, 4 figures | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | We propose a constrained Affine Geometric Heat Flow (AGHF) method that evolves so as to suppress the dynamics gaps associated with inadmissible control directions. AGHF provides a unified framework applicable to a wide range of motion planning problems, including both holonomic and non-holonomic systems. However, to generate admissible trajectories, it requires assigning infinite penalties to inadmissible control directions. This design choice, while theoretically valid, often leads to high computational cost or numerical instability when the penalty becomes excessively large. To overcome this limitation, we extend AGHF via an Augmented Lagrangian approach by introducing a dual trajectory related to dynamics gaps in inadmissible control directions. This method solves the constrained variational problem as an extended parabolic partial differential equation defined over both the state and dual trajectories, ensuring the admissibility of the resulting trajectory. We demonstrate the effectiveness of our algorithm through simulation examples. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.7127 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset
2505.24754 | Yingchaojie Feng | Yingchaojie Feng, Yiqun Sun, Yandong Sun, Minfeng Zhu, Qiang Huang, Anthony K. H. Tung, Wei Chen | Don't Reinvent the Wheel: Efficient Instruction-Following Text Embedding based on Guided Space Transformation | Accepted to ACL 2025 | null | null | null | cs.CL cs.AI cs.IR | http://creativecommons.org/licenses/by/4.0/ | In this work, we investigate an important task named instruction-following text embedding, which generates dynamic text embeddings that adapt to user instructions, highlighting specific attributes of text. Despite recent advancements, existing approaches suffer from significant computational overhead, as they require re-encoding the entire corpus for each new instruction. To address this challenge, we propose GSTransform, a novel instruction-following text embedding framework based on Guided Space Transformation. Our key observation is that instruction-relevant information is inherently encoded in generic embeddings but remains underutilized. Instead of repeatedly encoding the corpus for each instruction, GSTransform is a lightweight transformation mechanism that adapts pre-computed embeddings in real time to align with user instructions, guided by a small amount of text data with instruction-focused label annotation. We conduct extensive experiments on three instruction-awareness downstream tasks across nine real-world datasets, demonstrating that GSTransform improves instruction-following text embedding quality over state-of-the-art methods while achieving dramatic speedups of 6~300x in real-time processing on large-scale datasets. The source code is available at https://github.com/YingchaojieFeng/GSTransform. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.711039 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24756 | Diego Clerissi | Dario Olianas, Diego Clerissi, Maurizio Leotta, Filippo Ricca | TESTQUEST: A Web Gamification Tool to Improve Locators and Page Objects Quality | 4 pages, 3 figures, submitted to 51st Euromicro Conference Series on Software Engineering and Advanced Applications (SEAA) 2025 | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | Web applications play a crucial role in our daily lives, making it essential to employ testing methods that ensure their quality. Typically, Web testing automation frameworks rely on locators to interact with the graphical user interface, acting as connection points to the elements on a Web page. Nevertheless, locators are widely recognized as a major vulnerability in Web testing, as they are highly sensitive to the frequent changes in Web page structures caused by rapid software evolution. The adoption of the Page Object pattern to separate test logic from structural layout - supporting code reuse and maintainability - has generally led to more robust test cases. However, their implementation is a manually intensive task, and even automated support may require manual realignment efforts. Although gamification strategies have recently been integrated into the Web testing process to boost user engagement, using tasks and rewards aligned with testing activities, they have not yet been employed to enhance the robustness of locators and support the implementation of Page Objects.
In this paper, we introduce TESTQUEST, a tool designed to improve test robustness by applying gamification to locators and Page Objects, boosting user engagement while guiding them toward the adoption of best practices. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.709317 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24763 | Steve Blandino | Steve Blandino, Nada Golmie, Anirudha Sahoo, Thao Nguyen, Tanguy Ropitault, David Griffith and Amala Sonny | Detecting Airborne Objects with 5G NR Radars | null | null | null | null | eess.SP cs.NI | http://creativecommons.org/licenses/by/4.0/ | The integration of sensing capabilities into 5G New Radio (5G NR) networks offers an opportunity to enable the detection of airborne objects without the need for dedicated radars. This paper investigates the feasibility of using standardized Positioning Reference Signals (PRS) to detect UAVs in Urban Micro (UMi) and Urban Macro (UMa) propagation environments. A full 5G NR radar processing chain is implemented, including clutter suppression, angle and range estimation, and 3D position reconstruction. Simulation results show that performance strongly depends on the propagation environment. 5G NR radars exhibit the highest missed detection rate, up to 16%, in UMi, due to severe clutter. Positioning error increases with target distance, resulting in larger errors in UMa scenarios and at higher UAV altitudes. In particular, the system achieves a position error within 4m in the UMi environment and within 8m in UMa. The simulation platform has been released as open-source software to support reproducible research in integrated sensing and communication (ISAC) systems. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.709798 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24768 | Haoyu Li | Haoyu Li, Xuhong Li, Yiming Dong, Kun Liu | From Macro to Micro: Probing Dataset Diversity in Language Model Fine-Tuning | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dataset diversity plays a pivotal role for the successful training of many machine learning models, particularly in the supervised fine-tuning (SFT) stage of large language model (LLM) development. Despite increasing recognition of its importance, systematic analyses of dataset diversity still remain underexplored. To address this gap, this work presents a systematic taxonomy of existing diversity-control strategies, which primarily focus on the instruction component, operating at either macroscopic (entire instruction semantics) or mesoscopic levels (instruction units), and furthermore introduces a novel analysis of microscopic diversity within the response component, specifically analyzing the statistical distribution of tokens in SFT training samples. In the experimental evaluation, we construct fixed-size datasets (e.g., 10,000 samples each) from a corpus of 117,000 open-source SFT samples, incorporating six distinct diversity-control strategies spanning macro-, meso-, and microscopic levels applied to both instructions and responses. We then fine-tune LLMs on these datasets to assess the six diversity-control strategies. Results reveal that while macroscopic and mesoscopic strategies lead to higher performance with increasing diversity, the microscopic strategy in responses exhibits both a stronger correlation between model performance and the degree of diversity and superior performance with maximum diversity across all strategies. These findings offer actionable insights for constructing high-performance SFT datasets. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.675912 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24776 | Zachary Bastiani | Zachary Bastiani, Robert M. Kirby, Jacob Hochhalter, Shandian Zhe | Diffusion-Based Symbolic Regression | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Diffusion has emerged as a powerful framework for generative modeling, achieving remarkable success in applications such as image and audio synthesis. Enlightened by this progress, we propose a novel diffusion-based approach for symbolic regression. We construct a random mask-based diffusion and denoising process to generate diverse and high-quality equations. We integrate this generative processes with a token-wise Group Relative Policy Optimization (GRPO) method to conduct efficient reinforcement learning on the given measurement dataset. In addition, we introduce a long short-term risk-seeking policy to expand the pool of top-performing candidates, further enhancing performance. Extensive experiments and ablation studies have demonstrated the effectiveness of our approach. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.71144 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24780 | RunZe He | Run-Ze He, Jun-Jian Su, Su-Juan Qin, Zheng-Ping Jin, Fei Gao | QGAN-based data augmentation for hybrid quantum-classical neural networks | null | null | null | null | cs.LG quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quantum neural networks converge faster and achieve higher accuracy than classical models. However, data augmentation in quantum machine learning remains underexplored. To tackle data scarcity, we integrate quantum generative adversarial networks (QGANs) with hybrid quantum-classical neural networks (HQCNNs) to develop an augmentation framework. We propose two strategies: a general approach to enhance data processing and classification across HQCNNs, and a customized strategy that dynamically generates samples tailored to the HQCNN's performance on specific data categories, improving its ability to learn from complex datasets. Simulation experiments on the MNIST dataset demonstrate that QGAN outperforms traditional data augmentation methods and classical GANs. Compared to the baseline DCGAN, QGAN achieves comparable performance with half the parameters, balancing efficiency and effectiveness. This suggests that QGANs can simplify models and generate high-quality data, enhancing HQCNN accuracy and performance. These findings pave the way for applying quantum data augmentation techniques in machine learning. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.712859 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset
2505.24781 | Karim Abou-Moustafa | Karim Abou-Moustafa | Efficient Estimation of Regularized Tyler's M-Estimator Using Approximate LOOCV | An extended version of a short article that appeared in 2023 IEEE Workshop on Information Theory, Saint-Malo, France | null | null | null | stat.ML cs.CE cs.CV cs.LG eess.SP | http://creativecommons.org/licenses/by/4.0/ | We consider the problem of estimating a regularization parameter, or a shrinkage coefficient $\alpha \in (0,1)$ for Regularized Tyler's M-estimator (RTME). In particular, we propose to estimate an optimal shrinkage coefficient by setting $\alpha$ as the solution to a suitably chosen objective function; namely the leave-one-out cross-validated (LOOCV) log-likelihood loss. Since LOOCV is computationally prohibitive even for moderate sample size $n$, we propose a computationally efficient approximation for the LOOCV log-likelihood loss that eliminates the need for invoking the RTME procedure $n$ times for each sample left out during the LOOCV procedure. This approximation yields an $O(n)$ reduction in the running time complexity for the LOOCV procedure, which results in a significant speedup for computing the LOOCV estimate. We demonstrate the efficiency and accuracy of the proposed approach on synthetic high-dimensional data sampled from heavy-tailed elliptical distributions, as well as on real high-dimensional datasets for object recognition, face recognition, and handwritten digit recognition. Our experiments show that the proposed approach is efficient and consistently more accurate than other methods in the literature for shrinkage coefficient estimation. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.709143 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset
2505.24784 | Conor Heins | Conor Heins, Toon Van de Maele, Alexander Tschantz, Hampus Linander, Dimitrije Markovic, Tommaso Salvatori, Corrado Pezzato, Ozan Catal, Ran Wei, Magnus Koudahl, Marco Perin, Karl Friston, Tim Verbelen, Christopher Buckley | AXIOM: Learning to Play Games in Minutes with Expanding Object-Centric Models | 10 pages main text, 4 figures, 2 tables; 25 pages supplementary material, 8 figures | null | null | null | cs.AI cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Current deep reinforcement learning (DRL) approaches achieve state-of-the-art performance in various domains, but struggle with data efficiency compared to human learning, which leverages core priors about objects and their interactions. Active inference offers a principled framework for integrating sensory information with prior knowledge to learn a world model and quantify the uncertainty of its own beliefs and predictions. However, active inference models are usually crafted for a single task with bespoke knowledge, so they lack the domain flexibility typical of DRL approaches. To bridge this gap, we propose a novel architecture that integrates a minimal yet expressive set of core priors about object-centric dynamics and interactions to accelerate learning in low-data regimes. The resulting approach, which we call AXIOM, combines the usual data efficiency and interpretability of Bayesian approaches with the across-task generalization usually associated with DRL. AXIOM represents scenes as compositions of objects, whose dynamics are modeled as piecewise linear trajectories that capture sparse object-object interactions. The structure of the generative model is expanded online by growing and learning mixture models from single events and periodically refined through Bayesian model reduction to induce generalization. AXIOM masters various games within only 10,000 interaction steps, with both a small number of parameters compared to DRL, and without the computational expense of gradient-based optimization. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.709853 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24786 | Eran Beeri Bamani | Eran Bamani Beeri, Eden Nissinman, Avishai Sintov | DiG-Net: Enhancing Quality of Life through Hyper-Range Dynamic Gesture Recognition in Assistive Robotics | arXiv admin note: substantial text overlap with arXiv:2411.18413 | null | null | null | cs.RO cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Dynamic hand gestures play a pivotal role in assistive human-robot interaction (HRI), facilitating intuitive, non-verbal communication, particularly for individuals with mobility constraints or those operating robots remotely. Current gesture recognition methods are mostly limited to short-range interactions, reducing their utility in scenarios demanding robust assistive communication from afar. In this paper, we introduce a novel approach designed specifically for assistive robotics, enabling dynamic gesture recognition at extended distances of up to 30 meters, thereby significantly improving accessibility and quality of life. Our proposed Distance-aware Gesture Network (DiG-Net) effectively combines Depth-Conditioned Deformable Alignment (DADA) blocks with Spatio-Temporal Graph modules, enabling robust processing and classification of gesture sequences captured under challenging conditions, including significant physical attenuation, reduced resolution, and dynamic gesture variations commonly experienced in real-world assistive environments. We further introduce the Radiometric Spatio-Temporal Depth Attenuation Loss (RSTDAL), shown to enhance learning and strengthen model robustness across varying distances. Our model demonstrates significant performance improvement over state-of-the-art gesture recognition frameworks, achieving a recognition accuracy of 97.3% on a diverse dataset with challenging hyper-range gestures. By effectively interpreting gestures from considerable distances, DiG-Net significantly enhances the usability of assistive robots in home healthcare, industrial safety, and remote assistance scenarios, enabling seamless and intuitive interactions for users regardless of physical limitations. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.711634 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24787 | Yucheng Zhou | Yucheng Zhou, Jiahao Yuan, Qianning Wang | Draw ALL Your Imagine: A Holistic Benchmark and Agent Framework for Complex Instruction-based Image Generation | null | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in text-to-image (T2I) generation have enabled models to produce high-quality images from textual descriptions. However, these models often struggle with complex instructions involving multiple objects, attributes, and spatial relationships. Existing benchmarks for evaluating T2I models primarily focus on general text-image alignment and fail to capture the nuanced requirements of complex, multi-faceted prompts. Given this gap, we introduce LongBench-T2I, a comprehensive benchmark specifically designed to evaluate T2I models under complex instructions. LongBench-T2I consists of 500 intricately designed prompts spanning nine diverse visual evaluation dimensions, enabling a thorough assessment of a model's ability to follow complex instructions. Beyond benchmarking, we propose an agent framework (Plan2Gen) that facilitates complex instruction-driven image generation without requiring additional model training. This framework integrates seamlessly with existing T2I models, using large language models to interpret and decompose complex prompts, thereby guiding the generation process more effectively. As existing evaluation metrics, such as CLIPScore, fail to adequately capture the nuances of complex instructions, we introduce an evaluation toolkit that automates the quality assessment of generated images using a set of multi-dimensional metrics. The data and code are released at https://github.com/yczhou001/LongBench-T2I. | 2025-06-02T00:00:00 | new_dataset | true | 0.714777 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24788 | Houjun Liu | Houjun Liu, John Bauer, Christopher D. Manning | Drop Dropout on Single-Epoch Language Model Pretraining | Accepted to ACL Findings; 5 pages, 2 figures, 4 pages of appendix | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Originally, dropout was seen as a breakthrough regularization technique that reduced overfitting and improved performance in almost all applications of deep learning. Yet, single-epoch pretraining tasks common to modern LLMs yield minimal overfitting, leading to dropout not being used for large LLMs. Nevertheless, no thorough empirical investigation has been done on the role of dropout in LM pretraining. Through experiments in single-epoch pretraining of both masked (BERT) and autoregressive (Pythia 160M and 1.4B) LMs with varying levels of dropout, we find that downstream performance in language modeling, morpho-syntax (BLiMP), question answering (SQuAD), and natural-language inference (MNLI) improves when dropout is not applied during pretraining. We additionally find that the recently-introduced "early dropout" also degrades performance over applying no dropout at all. We further investigate the models' editability, and find that models trained without dropout are more successful in gradient-based model editing (MEND) and equivalent in representation-based model editing (ReFT). Therefore, we advocate to drop dropout during single-epoch pretraining. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.710884 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24791 | Jiaru Zhang | Jiaru Zhang, Juanwu Lu, Ziran Wang, Ruqi Zhang | Inference Acceleration of Autoregressive Normalizing Flows by Selective Jacobi Decoding | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Normalizing flows are promising generative models with advantages such as theoretical rigor, analytical log-likelihood computation, and end-to-end training. However, the architectural constraints to ensure invertibility and tractable Jacobian computation limit their expressive power and practical usability. Recent advancements utilize autoregressive modeling, significantly enhancing expressive power and generation quality. However, such sequential modeling inherently restricts parallel computation during inference, leading to slow generation that impedes practical deployment. In this paper, we first identify that strict sequential dependency in inference is unnecessary to generate high-quality samples. We observe that patches in sequential modeling can also be approximated without strictly conditioning on all preceding patches. Moreover, the models tend to exhibit low dependency redundancy in the initial layer and higher redundancy in subsequent layers. Leveraging these observations, we propose a selective Jacobi decoding (SeJD) strategy that accelerates autoregressive inference through parallel iterative optimization. Theoretical analyses demonstrate the method's superlinear convergence rate and guarantee that the number of iterations required is no greater than the original sequential approach. Empirical evaluations across multiple datasets validate the generality and effectiveness of our acceleration technique. Experiments demonstrate substantial speed improvements up to 4.7 times faster inference while keeping the generation quality and fidelity. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.712143 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24792 | Xinliu Zhong | Xinliu Zhong, Leo Hwa Liang, Angela S. Koh, Yeo Si Yong | Lightweight Relational Embedding in Task-Interpolated Few-Shot Networks for Enhanced Gastrointestinal Disease Classification | 6 pages, 15 figures | 2024 IEEE Conference on Artificial Intelligence (CAI), 2024, 839-844 | 10.1109/CAI59869.2024.00157 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Traditional diagnostic methods like colonoscopy are invasive yet critical tools necessary for accurately diagnosing colorectal cancer (CRC). Detection of CRC at early stages is crucial for increasing patient survival rates. However, colonoscopy is dependent on obtaining adequate and high-quality endoscopic images. Prolonged invasive procedures are inherently risky for patients, while suboptimal or insufficient images hamper diagnostic accuracy. These images, typically derived from video frames, often exhibit similar patterns, posing challenges in discrimination. To overcome these challenges, we propose a novel Deep Learning network built on a Few-Shot Learning architecture, which includes a tailored feature extractor, task interpolation, relational embedding, and a bi-level routing attention mechanism. The Few-Shot Learning paradigm enables our model to rapidly adapt to unseen fine-grained endoscopic image patterns, and the task interpolation augments the insufficient images artificially from varied instrument viewpoints. Our relational embedding approach discerns critical intra-image features and captures inter-image transitions between consecutive endoscopic frames, overcoming the limitations of Convolutional Neural Networks (CNNs). The integration of a light-weight attention mechanism ensures a concentrated analysis of pertinent image regions. By training on diverse datasets, the model's generalizability and robustness are notably improved for handling endoscopic images. Evaluated on the Kvasir dataset, our model demonstrated superior performance, achieving an accuracy of 90.1\%, precision of 0.845, recall of 0.942, and an F1 score of 0.891. This surpasses current state-of-the-art methods, presenting a promising solution to the challenges of invasive colonoscopy by optimizing CRC detection through advanced image analysis. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.711128 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24800 | Sergey Dyakov A. | Sergey A. Dyakov, Ilia A. Smagin, Natalia S. Salakhova, Oleg Blokhin, Denis G. Baranov, Ilia M. Fradkin, and Nikolay A. Gippius | Strong coupling of chiral light with chiral matter: a macroscopic study | 12 pages, 4 figures | null | null | null | physics.optics cond-mat.other | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Maximizing the interaction between chiral light and chiral matter is pivotal for the advancement of technologies enabling optical detection that distinguishes between different handedness in chiral organic molecules. One strategy involves developing a resonator that sustains photonic modes with non-zero electromagnetic handedness, which interact differently with chiral molecules of opposite enantiomers. When chiral molecules are positioned in resonator hotspots, they can alter the system's characteristics due to their inherent electric and magnetic transition dipole moments. In this study, we explore this interaction by incorporating the Lorentz pole into the macroscopic parameters of the chiral medium: dielectric permittivity, magnetic permeability, and chirality coefficient. The latter, also known as the Pasteur parameter, is a dimensionless macroscopic measure indicating the medium's chirality, interlinking electric and magnetic fields in the constitutive relations. We show that introducing the Lorentz pole into these macroscopic material parameters of the chiral medium results in chiral strong coupling between light and matter, with the strength of coupling determined by both the medium's chirality and the photonic mode's chirality. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.713985 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24801 | Dorian Christoph Quelle | Dorian Quelle, Frederic Denker, Prashant Garg, Alexandre Bovet | Why Academics Are Leaving Twitter for Bluesky | null | null | null | null | cs.SI cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyse the migration of 300,000 academic users from Twitter/X to Bluesky between 2023 and early 2025, combining rich bibliometric data, longitudinal social-media activity, and a novel cross-platform identity-matching pipeline. We show that 18% of scholars in our sample transitioned, with transition rates varying sharply by discipline, political expression, and Twitter engagement but not by traditional academic metrics. Using time-varying Cox models and a matched-pairs design, we isolate genuine peer influence from homophily. We uncover a striking asymmetry whereby information sources drive migration far more powerfully than audience, with this influence decaying exponentially within a week. We further develop an ego-level contagion classifier, revealing that simple contagion drives two-thirds of all exits, shock-driven bursts account for 16%, and complex contagion plays a marginal role. Finally, we show that scholars who rebuild a higher fraction of their former Twitter networks on Bluesky remain significantly more active and engaged. Our findings provide new insights into theories of network externalities, directional influence, and platform migration, highlighting information sources' central role in overcoming switching costs. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.708169 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24802 | John Stephan | Marc Gonz\'alez, Rachid Guerraoui, Rafael Pinot, Geovani Rizk, John Stephan, Fran\c{c}ois Ta\"iani | ByzFL: Research Framework for Robust Federated Learning | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present ByzFL, an open-source Python library for developing and benchmarking robust federated learning (FL) algorithms. ByzFL provides a unified and extensible framework that includes implementations of state-of-the-art robust aggregators, a suite of configurable attacks, and tools for simulating a variety of FL scenarios, including heterogeneous data distributions, multiple training algorithms, and adversarial threat models. The library enables systematic experimentation via a single JSON-based configuration file and includes built-in utilities for result visualization. Compatible with PyTorch tensors and NumPy arrays, ByzFL is designed to facilitate reproducible research and rapid prototyping of robust FL solutions. ByzFL is available at https://byzfl.epfl.ch/, with source code hosted on GitHub: https://github.com/LPD-EPFL/byzfl. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.708566 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24804 | Xuhui Zhang | Chunjie Wang and Xuhui Zhang and Wenchao Liu and Jinke Ren and Huijun Xing and Shuqiang Wang and Yanyan Shen | Coordinated Beamforming for RIS-Empowered ISAC Systems over Secure Low-Altitude Networks | This manuscript has been submitted to the IEEE | null | null | null | eess.SP cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Emerging as a cornerstone for next-generation wireless networks, integrated sensing and communication (ISAC) systems demand innovative solutions to balance spectral efficiency and sensing accuracy. In this paper, we propose a coordinated beamforming framework for a reconfigurable intelligent surface (RIS)-empowered ISAC system, where the active precoding at the dual-functional base station (DFBS) and the passive beamforming at the RIS are jointly optimized to provide communication services for legitimate unmanned aerial vehicles (UAVs) while sensing the unauthorized UAVs. The sum-rate of all legitimate UAVs are maximized, while satisfying the radar sensing signal-to-noise ratio requirements, the transmit power constraints, and the reflection coefficients of the RIS. To address the inherent non-convexity from coupled variables, we propose a low-complexity algorithm integrating fractional programming with alternating optimization, featuring convergence guarantees. Numerical results demonstrate that the proposed algorithm achieves higher data rate compared to disjoint optimization benchmarks. This underscores RIS's pivotal role in harmonizing communication and target sensing functionalities for low-altitude networks. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.711438 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24805 | Hernan Haimovich | Hernan Haimovich, Shenyu Liu, Antonio Russo, Jose L. Mancilla-Aguilar | Input-Power-to-State Stability of Time-Varying Systems | Submitted to Automatica | null | null | null | eess.SY cs.SY math.OC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | When the state of a system may remain bounded even if both the input amplitude and energy are unbounded, then the state bounds given by the standard input-to-state stability (ISS) and integral-ISS (iISS) properties may provide no useful information. This paper considers an ISS-related concept suitable in such a case: input-power-to-state stability (IPSS). Necessary and sufficient conditions for IPSS are developed for time-varying systems under very mild assumptions on the dynamics. More precisely, it is shown that (a) the existence of a dissipation-form ISS-Lyapunov function implies IPSS, but not necessarily that of an implication-form one, (b) iISS with exponential class-$\KL$ function implies IPSS, and (c) ISS and stronger assumptions on the dynamics imply the existence of a dissipation-form ISS-Lyapunov function and hence IPSS. The latter result is based on a converse Lyapunov theorem for time-varying systems whose dynamics (i.e. state derivative) is not necessarily continuous with respect to time. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.711487 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24806 | Ahmadreza Montazerolghaem | Somaye Imanpour, Ahmadreza Montazerolghaem, Saeed Afshari | Optimizing Server Load Distribution in Multimedia IoT Environments through LSTM-Based Predictive Algorithms | null | null | null | null | cs.NI | http://creativecommons.org/licenses/by/4.0/ | The Internet of Multimedia Things (IoMT) represents a significant advancement in the evolution of IoT technologies, focusing on the transmission and management of multimedia streams. As the volume of data continues to surge and the number of connected devices grows exponentially, internet traffic has reached unprecedented levels, resulting in challenges such as server overloads and deteriorating service quality. Traditional computer network architectures were not designed to accommodate this rapid increase in demand, leading to the necessity for innovative solutions. In response, Software-Defined Networks (SDNs) have emerged as a promising framework, offering enhanced management capabilities by decoupling the control layer from the data layer. This study explores the load balancing of servers within software-defined multimedia IoT networks. The Long Short-Term Memory (LSTM) prediction algorithm is employed to accurately estimate server loads and fuzzy systems are integrated to optimize load distribution across servers. The findings from the simulations indicate that the proposed approach enhances the optimization and management of IoT networks, resulting in improved service quality, reduced operational costs, and increased productivity. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.711591 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24807 | Maksim Kulichenko | Vishikh Athavale, Nikita Fedik, William Colglazier, Anders M. N. Niklasson, Maksim Kulichenko, Sergei Tretiak | PySEQM 2.0: Accelerated Semiempirical Excited State Calculations on Graphical Processing Units | null | null | null | LA-UR-25-25101 | physics.chem-ph physics.comp-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We report the implementation of electronic excited states for semi-empirical quantum chemical methods at the configuration interaction singles (CIS) and time-dependent Hartree-Fock (TDHF) level of theory in the PySEQM software. Built on PyTorch, this implementation leverages GPU acceleration to significantly speed up molecular property calculations. Benchmark tests demonstrate that our approach can compute excited states for molecules with nearly a thousand atoms in under a minute. Additionally, the implementation also includes a machine learning interface to enable parameters re-optimization and neural network training for future machine learning applications for excited state dynamics. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.709252 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24808 | Wenhao Ding | Wenhao Ding, Sushant Veer, Yuxiao Chen, Yulong Cao, Chaowei Xiao, Marco Pavone | RealDrive: Retrieval-Augmented Driving with Diffusion Models | null | null | null | null | cs.RO cs.AI | http://creativecommons.org/licenses/by/4.0/ | Learning-based planners generate natural human-like driving behaviors by learning to reason about nuanced interactions from data, overcoming the rigid behaviors that arise from rule-based planners. Nonetheless, data-driven approaches often struggle with rare, safety-critical scenarios and offer limited controllability over the generated trajectories. To address these challenges, we propose RealDrive, a Retrieval-Augmented Generation (RAG) framework that initializes a diffusion-based planning policy by retrieving the most relevant expert demonstrations from the training dataset. By interpolating between current observations and retrieved examples through a denoising process, our approach enables fine-grained control and safe behavior across diverse scenarios, leveraging the strong prior provided by the retrieved scenario. Another key insight we produce is that a task-relevant retrieval model trained with planning-based objectives results in superior planning performance in our framework compared to a task-agnostic retriever. Experimental results demonstrate improved generalization to long-tail events and enhanced trajectory diversity compared to standard learning-based planners -- we observe a 40% reduction in collision rate on the Waymo Open Motion dataset with RAG. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.712054 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24812 | Marcelo Fiore | Marcelo Fiore and Sanjiv Ranchod | Substructural Abstract Syntax with Variable Binding and Single-Variable Substitution | To appear in the Fortieth Annual ACM/IEEE Symposium on Logic in Computer Science (LICS'25) | null | null | null | cs.LO math.CT | http://creativecommons.org/licenses/by/4.0/ | We develop a unified categorical theory of substructural abstract syntax with variable binding and single-variable (capture-avoiding) substitution. This is done for the gamut of context structural rules given by exchange (linear theory) with weakening (affine theory) or with contraction (relevant theory) and with both (cartesian theory). Specifically, in all four scenarios, we uniformly: define abstract syntax with variable binding as free algebras for binding-signature endofunctors over variables; provide finitary algebraic axiomatisations of the laws of substitution; construct single-variable substitution operations by generalised structural recursion; and prove their correctness, establishing their universal abstract character as initial substitution algebras. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.708634 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24813 | Marc Barthelemy | Yannick Feld, Marc Barthelemy | Critical demand in a stochastic model of flows in supply networks | Letter (5 pages with 4 figures)+Supplementary Material | Phys. Rev. Lett. 134, 217401 (Published 30 May, 2025) | null | null | physics.soc-ph cond-mat.dis-nn cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Supply networks are essential for modern production, yet their critical properties remain understudied. We present a stochastic model with random production capacities to analyze material flow to a root node, focusing on topology and buffer stocks. The critical demand, where unsatisfied demand diverges, is examined mostly through numerical simulations. Without stocks, minimal production dictates behavior, making topology irrelevant. With stocks, memory effects arise, making topology crucial. Increased local connectivity is beneficial: firms should favor broad, short supply chains over long, narrow ones. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.710995 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24814 | Sarah Kostinski | Debendro Mookerjee and Sarah Kostinski | Closed-form survival probabilities for biased random walks at arbitrary step number | null | null | null | null | cond-mat.stat-mech physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a closed-form expression for the survival probability of a biased random walker to first reach a target site on a 1D lattice. The expression holds for any step number $N$ and is computationally faster than non-closed-form results in the literature. Because our result is exact even in the intermediate step number range, it serves as a tool to study convergence to the large $N$ limit. We also obtain a closed-form expression for the probability of last passage. In contrast to predictions of the large $N$ approximation, the new expression reveals a critical value of the bias beyond which the tail of the last-passage probability decays monotonically. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.711256 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24815 | V Varagapriya | V Varagapriya, Vikas Vikram Singh, Abdel Lisser | Convex Approximations of Random Constrained Markov Decision Processes | null | null | null | null | math.OC cs.SY eess.SY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Constrained Markov decision processes (CMDPs) are used as a decision-making framework to study the long-run performance of a stochastic system. It is well-known that a stationary optimal policy of a CMDP problem under discounted cost criterion can be obtained by solving a linear programming problem when running costs and transition probabilities are exactly known. In this paper, we consider a discounted cost CMDP problem where the running costs and transition probabilities are defined using random variables. Consequently, both the objective function and constraints become random. We use chance constraints to model these uncertainties and formulate the uncertain CMDP problem as a joint chance-constrained Markov decision process (JCCMDP). Under random running costs, we assume that the dependency among random constraint vectors is driven by a Gumbel-Hougaard copula. Using standard probability inequalities, we construct convex upper bound approximations of the JCCMDP problem under certain conditions on random running costs. In addition, we propose a linear programming problem whose optimal value gives a lower bound to the optimal value of the JCCMDP problem. When both running costs and transition probabilities are random, we define the latter variables as a sum of their means and random perturbations. Under mild conditions on the random perturbations and random running costs, we construct convex upper and lower bound approximations of the JCCMDP problem. We analyse the quality of the derived bounds through numerical experiments on a queueing control problem for random running costs. For the case when both running costs and transition probabilities are random, we choose randomly generated Markov decision problems called Garnets for numerical experiments. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.711597 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24816 | Jiangpeng He | Jiangpeng He and Zhihao Duan and Fengqing Zhu | CL-LoRA: Continual Low-Rank Adaptation for Rehearsal-Free Class-Incremental Learning | Accepted to CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Class-Incremental Learning (CIL) aims to learn new classes sequentially while retaining the knowledge of previously learned classes. Recently, pre-trained models (PTMs) combined with parameter-efficient fine-tuning (PEFT) have shown remarkable performance in rehearsal-free CIL without requiring exemplars from previous tasks. However, existing adapter-based methods, which incorporate lightweight learnable modules into PTMs for CIL, create new adapters for each new task, leading to both parameter redundancy and failure to leverage shared knowledge across tasks. In this work, we propose ContinuaL Low-Rank Adaptation (CL-LoRA), which introduces a novel dual-adapter architecture combining \textbf{task-shared adapters} to learn cross-task knowledge and \textbf{task-specific adapters} to capture unique features of each new task. Specifically, the shared adapters utilize random orthogonal matrices and leverage knowledge distillation with gradient reassignment to preserve essential shared knowledge. In addition, we introduce learnable block-wise weights for task-specific adapters, which mitigate inter-task interference while maintaining the model's plasticity. We demonstrate CL-LoRA consistently achieves promising performance under multiple benchmarks with reduced training and inference computation, establishing a more efficient and scalable paradigm for continual learning with pre-trained models. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.710664 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24817 | Eric Nelson | Eric C. Nelson, Kyle J. Charbonnet, Haytham H. Effarah, Trevor Reutershan, Kyle D. Chesnut, Christopher P. J. Barty | Focused axisymmetric spatially chirped beams | 12 pages, supplemental document (3 pages), submitted for peer review | null | null | null | physics.optics | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A characterization of the focused space-time structures of radially chirped beams is provided, detailing different tunable properties such as: variable on-axis centroid velocity, symmetric pulse front tilt, transverse intensity modulations, and polarization states. While the practical generation of ideal radially chirped beams and polarizations can be problematic, it is shown that the primary characteristics of these beams can be mimicked with simple arrays of axisymmetric, 1D spatially chirped beams. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.712346 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24818 | Dirk Witthaut | Aarathi Parameswaran, Iva Ba\v{c}i\'c, Andrea Benigni, Dirk Witthaut | Symmetry breaking in minimum dissipation networks | 12 pages, 11 figures | null | null | null | nlin.AO physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Both natural and man-made supply networks exhibit universal structural patterns, such as the formation of loops. These patterns can be understood in terms of optimization models, assuming that biological networks evolved to optimal states and technical networks are designed to function optimally. In this article, we analyze networks that minimize dissipation under a research constraint. We demonstrate spontaneous symmetry breaking in optimal network structures as a function of resource scaling. We show that fluctuations intricately impact the structure and can lead to a reentrant transition from a symmetry-broken state to a symmetric state and back again. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.712625 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24819 | Haozhan Tang | Haozhan Tang, Tianyi Zhang, Matthew Johnson-Roberson, Weiming Zhi | Bi-Manual Joint Camera Calibration and Scene Representation | null | null | null | null | cs.RO cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Robot manipulation, especially bimanual manipulation, often requires setting up multiple cameras on multiple robot manipulators. Before robot manipulators can generate motion or even build representations of their environments, the cameras rigidly mounted to the robot need to be calibrated. Camera calibration is a cumbersome process involving collecting a set of images, with each capturing a pre-determined marker. In this work, we introduce the Bi-Manual Joint Calibration and Representation Framework (Bi-JCR). Bi-JCR enables multiple robot manipulators, each with cameras mounted, to circumvent taking images of calibration markers. By leveraging 3D foundation models for dense, marker-free multi-view correspondence, Bi-JCR jointly estimates: (i) the extrinsic transformation from each camera to its end-effector, (ii) the inter-arm relative poses between manipulators, and (iii) a unified, scale-consistent 3D representation of the shared workspace, all from the same captured RGB image sets. The representation, jointly constructed from images captured by cameras on both manipulators, lives in a common coordinate frame and supports collision checking and semantic segmentation to facilitate downstream bimanual coordination tasks. We empirically evaluate the robustness of Bi-JCR on a variety of tabletop environments, and demonstrate its applicability on a variety of downstream tasks. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.711953 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24820 | Yu Xi | Yu Xi, Xiaoyu Gu, Haoyu Li, Jun Song, Bo Zheng, Kai Yu | Masked Self-distilled Transducer-based Keyword Spotting with Semi-autoregressive Decoding | null | null | null | null | cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | RNN-T-based keyword spotting (KWS) with autoregressive decoding~(AR) has gained attention due to its streaming architecture and superior performance. However, the simplicity of the prediction network in RNN-T poses an overfitting issue, especially under challenging scenarios, resulting in degraded performance. In this paper, we propose a masked self-distillation (MSD) training strategy that avoids RNN-Ts overly relying on prediction networks to alleviate overfitting. Such training enables masked non-autoregressive (NAR) decoding, which fully masks the RNN-T predictor output during KWS decoding. In addition, we propose a semi-autoregressive (SAR) decoding approach to integrate the advantages of AR and NAR decoding. Our experiments across multiple KWS datasets demonstrate that MSD training effectively alleviates overfitting. The SAR decoding method preserves the superior performance of AR decoding while benefits from the overfitting suppression of NAR decoding, achieving excellent results. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.711271 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24823 | Yinggan Xu | Yinggan Xu, Yue Liu, Zhiqiang Gao, Changnan Peng and Di Luo | PhySense: Principle-Based Physics Reasoning Benchmarking for Large Language Models | null | null | null | null | cs.LG cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have rapidly advanced and are increasingly capable of tackling complex scientific problems, including those in physics. Despite this progress, current LLMs often fail to emulate the concise, principle-based reasoning characteristic of human experts, instead generating lengthy and opaque solutions. This discrepancy highlights a crucial gap in their ability to apply core physical principles for efficient and interpretable problem solving. To systematically investigate this limitation, we introduce PhySense, a novel principle-based physics reasoning benchmark designed to be easily solvable by experts using guiding principles, yet deceptively difficult for LLMs without principle-first reasoning. Our evaluation across multiple state-of-the-art LLMs and prompt types reveals a consistent failure to align with expert-like reasoning paths, providing insights for developing AI systems with efficient, robust and interpretable principle-based scientific reasoning. | 2025-06-02T00:00:00 | new_dataset | true | 0.712242 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24824 | Marta L\'opez Rauhut | Marta L\'opez-Rauhut, Hongyu Zhou, Mathieu Aubry, Loic Landrieu | Segmenting France Across Four Centuries | 20 pages, 8 figures, 3 tables | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Historical maps offer an invaluable perspective into territory evolution across past centuries--long before satellite or remote sensing technologies existed. Deep learning methods have shown promising results in segmenting historical maps, but publicly available datasets typically focus on a single map type or period, require extensive and costly annotations, and are not suited for nationwide, long-term analyses. In this paper, we introduce a new dataset of historical maps tailored for analyzing large-scale, long-term land use and land cover evolution with limited annotations. Spanning metropolitan France (548,305 km^2), our dataset contains three map collections from the 18th, 19th, and 20th centuries. We provide both comprehensive modern labels and 22,878 km^2 of manually annotated historical labels for the 18th and 19th century maps. Our dataset illustrates the complexity of the segmentation task, featuring stylistic inconsistencies, interpretive ambiguities, and significant landscape changes (e.g., marshlands disappearing in favor of forests). We assess the difficulty of these challenges by benchmarking three approaches: a fully-supervised model trained with historical labels, and two weakly-supervised models that rely only on modern annotations. The latter either use the modern labels directly or first perform image-to-image translation to address the stylistic gap between historical and contemporary maps. Finally, we discuss how these methods can support long-term environment monitoring, offering insights into centuries of landscape transformation. Our official project repository is publicly available at https://github.com/Archiel19/FRAx4.git. | 2025-06-02T00:00:00 | new_dataset | true | 0.710998 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24831 | Luis Enrique Correa Rocha Prof | Ruixue Jing, Ryota Kobayashi, Luis Enrique Correa Rocha | Optimising cryptocurrency portfolios through stable clustering of price correlation networks | Comments welcomed | null | null | null | physics.pop-ph physics.soc-ph q-fin.PM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The emerging cryptocurrency market presents unique challenges for investment due to its unregulated nature and inherent volatility. However, collective price movements can be explored to maximise profits with minimal risk using investment portfolios. In this paper, we develop a technical framework that utilises historical data on daily closing prices and integrates network analysis, price forecasting, and portfolio theory to identify cryptocurrencies for building profitable portfolios under uncertainty. Our method utilises the Louvain network community algorithm and consensus clustering to detect robust and temporally stable clusters of highly correlated cryptocurrencies, from which the chosen cryptocurrencies are selected. A price prediction step using the ARIMA model guarantees that the portfolio performs well for up to 14 days in the investment horizon. Empirical analysis over a 5-year period shows that despite the high volatility in the crypto market, hidden price patterns can be effectively utilised to generate consistently profitable, time-agnostic cryptocurrency portfolios. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.712298 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24833 | Duxing Hao | Duxing Hao, Chun-I Lu, Ziqi Sun, Yu-Chen Chang, Wen-Hao Chang, Ye-Ru Chen, Akiyoshi Park, Beining Rao, Siyuan Qiu, Yann-Wen Lan, Ting-Hua Lu, and Nai-Chang Yeh | Cryogenic scanning photocurrent spectroscopy for materials responses to structured optical fields | null | null | null | null | cond-mat.other physics.ins-det | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Circular dichroism spectroscopy is known to provide important insights into the interplay of different degrees of freedom in quantum materials, and yet spectroscopic study of the optoelectronic responses of quantum materials to structured optical fields, such as light with finite spin and orbital angular momentum, has not yet been widely explored, particularly at cryogenic temperature. Here we demonstrate the design and application of a novel instrument that integrates scanning spectroscopic photocurrent measurements with structured light of controlled spin and orbital angular momentum. For structured photons with wavelengths between 500 nm to 700 nm, this instrument can perform spatially resolved photocurrent measurements of two-dimensional materials or thin crystals under magnetic fields up to $\pm$ 14 Tesla, at temperatures from 300 K down to 3 K, with either spin angular momentum $\pm \hbar$ or orbital angular momentum $\pm \ell \hbar$ (where $\ell$=1,2,3... is the topological charge), and over a (35 $\times$ 25) $\mu m^2$ area with ~ 1 $\mu m$ spatial resolution. These capabilities of the instrument are exemplified by magneto-photocurrent spectroscopic measurements of monolayer 2H-$MoS_2$ field-effect transistors, which not only reveal the excitonic spectra but also demonstrate monotonically increasing photocurrents with increasing |$\ell $| as well as excitonic Zeeman splitting and an enhanced Land\'e g-factor due to the enhanced formation of intervalley dark excitons under magnetic field. These studies thus demonstrate the versatility of the scanning photocurrent spectrometry for investigating excitonic physics, optical selection rules, and optoelectronic responses of novel quantum materials and engineered quantum devices to structured light. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.710208 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24837 | Qizao Wang | Yinglian Zhu, Haiyang Yu, Qizao Wang, Wei Lu, Xiangyang Xue, Bin Li | Zero-Shot Chinese Character Recognition with Hierarchical Multi-Granularity Image-Text Aligning | The first three authors contributed equally | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Chinese Character Recognition (CCR) is a fundamental technology for intelligent document processing. Unlike Latin characters, Chinese characters exhibit unique spatial structures and compositional rules, allowing for the use of fine-grained semantic information in representation. However, existing approaches are usually based on auto-regressive as well as edit distance post-process and typically rely on a single-level character representation. In this paper, we propose a Hierarchical Multi-Granularity Image-Text Aligning (Hi-GITA) framework based on a contrastive paradigm. To leverage the abundant fine-grained semantic information of Chinese characters, we propose multi-granularity encoders on both image and text sides. Specifically, the Image Multi-Granularity Encoder extracts hierarchical image representations from character images, capturing semantic cues from localized strokes to holistic structures. The Text Multi-Granularity Encoder extracts stroke and radical sequence representations at different levels of granularity. To better capture the relationships between strokes and radicals, we introduce Multi-Granularity Fusion Modules on the image and text sides, respectively. Furthermore, to effectively bridge the two modalities, we further introduce a Fine-Grained Decoupled Image-Text Contrastive loss, which aligns image and text representations across multiple granularities. Extensive experiments demonstrate that our proposed Hi-GITA significantly outperforms existing zero-shot CCR methods. For instance, it brings about 20% accuracy improvement in handwritten character and radical zero-shot settings. Code and models will be released soon. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.711496 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24839 | Shilpa Sarkar | Shilpa Sarkar | Novel methodology to obtain transonic solutions for dissipative flows around compact objects | 18 pages, 10 figures, 1 table, Accepted in Physical Review D after minor revision | null | null | null | astro-ph.HE hep-th physics.comp-ph physics.plasm-ph | http://creativecommons.org/licenses/by/4.0/ | A novel methodology to obtain global transonic solutions around compact objects is reported here. A unified methodology to obtain accretion as well as wind solutions around these objects has been presented. Flows around compact objects are dissipative, and the conservation equations are therefore stiff. In such conditions, obtaining of sonic point(s) and hence, the transonic solution is not trivial. The conserved equations of motion fail to integrate in the presence of realistic viscosity, thereby making it difficult to obtain a global solution. This inhibits one from getting an actual picture of an astrophysical flow. The current work addresses this long-standing issue of obtaining solutions for both accretion and wind. The methodology developed utilises the inner boundary conditions and takes recourse to implicit-explicit (ImEx) integration schemes, to obtain general global transonic accretion and wind solutions. This is the first time such an attempt has been made. Current work considers the different cooling processes like bremsstrahlung, synchrotron and their inverse-Comptonizations, which are found to affect the thermodynamics of the flow. This methodology could successfully generate all topologies of global solutions, multiple sonic point regime, as well as shocks. A broad parameter space study has been done in this work. In an upcoming part II of the paper, a detailed discussion on the spectra and luminosity of the accretion and wind solutions has been presented. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.713103 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24840 | Yuan Qing | Yuwen Tan, Yuan Qing, Boqing Gong | Vision LLMs Are Bad at Hierarchical Visual Understanding, and LLMs Are the Bottleneck | 28 pages, 13 figures | null | null | null | cs.CV cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper reveals that many state-of-the-art large language models (LLMs) lack hierarchical knowledge about our visual world, unaware of even well-established biology taxonomies. This shortcoming makes LLMs a bottleneck for vision LLMs' hierarchical visual understanding (e.g., recognizing Anemone Fish but not Vertebrate). We arrive at these findings using about one million four-choice visual question answering (VQA) tasks constructed from six taxonomies and four image datasets. Interestingly, finetuning a vision LLM using our VQA tasks reaffirms LLMs' bottleneck effect to some extent because the VQA tasks improve the LLM's hierarchical consistency more than the vision LLM's. We conjecture that one cannot make vision LLMs understand visual concepts fully hierarchical until LLMs possess corresponding taxonomy knowledge. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.708804 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24844 | Wanyun Xie | Wanyun Xie, Francesco Tonin, Volkan Cevher | Chameleon: A Flexible Data-mixing Framework for Language Model Pretraining and Finetuning | ICML 2025 | null | null | null | cs.LG cs.CL | http://creativecommons.org/licenses/by/4.0/ | Training data mixtures greatly impact the generalization performance of large language models. Existing domain reweighting methods often rely on costly weight computations and require retraining when new data is introduced. To this end, we introduce a flexible and efficient data mixing framework, Chameleon, that employs leverage scores to quantify domain importance within a learned embedding space. We first construct a domain affinity matrix over domain embeddings. The induced leverage scores determine a mixture that upweights domains sharing common representations in embedding space. This formulation allows direct transfer to new data by computing the new domain embeddings. In experiments, we demonstrate improvements over three key scenarios: (i) our computed weights improve performance on pretraining domains with a fraction of the compute of existing methods; (ii) Chameleon can adapt to data changes without proxy retraining, boosting few-shot reasoning accuracies when transferred to new data; (iii) our method enables efficient domain reweighting in finetuning, consistently improving test perplexity on all finetuning domains over uniform mixture. Our code is available at https://github.com/LIONS-EPFL/Chameleon. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.711774 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24849 | Mauro Pastore | Jean Barbier, Francesco Camilli, Minh-Toan Nguyen, Mauro Pastore, Rudy Skerk | Statistical mechanics of extensive-width Bayesian neural networks near interpolation | 9 pages + appendices, 12 figures. This submission supersedes arXiv:2501.18530 | null | null | null | stat.ML cond-mat.dis-nn cond-mat.stat-mech cs.IT cs.LG math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For three decades statistical mechanics has been providing a framework to analyse neural networks. However, the theoretically tractable models, e.g., perceptrons, random features models and kernel machines, or multi-index models and committee machines with few neurons, remained simple compared to those used in applications. In this paper we help reducing the gap between practical networks and their theoretical understanding through a statistical physics analysis of the supervised learning of a two-layer fully connected network with generic weight distribution and activation function, whose hidden layer is large but remains proportional to the inputs dimension. This makes it more realistic than infinitely wide networks where no feature learning occurs, but also more expressive than narrow ones or with fixed inner weights. We focus on the Bayes-optimal learning in the teacher-student scenario, i.e., with a dataset generated by another network with the same architecture. We operate around interpolation, where the number of trainable parameters and of data are comparable and feature learning emerges. Our analysis uncovers a rich phenomenology with various learning transitions as the number of data increases. In particular, the more strongly the features (i.e., hidden neurons of the target) contribute to the observed responses, the less data is needed to learn them. Moreover, when the data is scarce, the model only learns non-linear combinations of the teacher weights, rather than "specialising" by aligning its weights with the teacher's. Specialisation occurs only when enough data becomes available, but it can be hard to find for practical training algorithms, possibly due to statistical-to-computational gaps. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.712227 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24853 | Zhao Mandi | Zhao Mandi, Yifan Hou, Dieter Fox, Yashraj Narang, Ajay Mandlekar, Shuran Song | DexMachina: Functional Retargeting for Bimanual Dexterous Manipulation | null | null | null | null | cs.RO cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | We study the problem of functional retargeting: learning dexterous manipulation policies to track object states from human hand-object demonstrations. We focus on long-horizon, bimanual tasks with articulated objects, which is challenging due to large action space, spatiotemporal discontinuities, and embodiment gap between human and robot hands. We propose DexMachina, a novel curriculum-based algorithm: the key idea is to use virtual object controllers with decaying strength: an object is first driven automatically towards its target states, such that the policy can gradually learn to take over under motion and contact guidance. We release a simulation benchmark with a diverse set of tasks and dexterous hands, and show that DexMachina significantly outperforms baseline methods. Our algorithm and benchmark enable a functional comparison for hardware designs, and we present key findings informed by quantitative and qualitative results. With the recent surge in dexterous hand development, we hope this work will provide a useful platform for identifying desirable hardware capabilities and lower the barrier for contributing to future research. Videos and more at https://project-dexmachina.github.io/ | 2025-06-02T00:00:00 | new_dataset | true | 0.713744 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24857 | Heli Ben-Hamu | Heli Ben-Hamu, Itai Gat, Daniel Severo, Niklas Nolte, Brian Karrer | Accelerated Sampling from Masked Diffusion Models via Entropy Bounded Unmasking | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent masked diffusion models (MDMs) have shown competitive performance compared to autoregressive models (ARMs) for language modeling. While most literature has focused on performance enhancing sampling procedures, efficient sampling from MDMs has been scarcely explored. We make the observation that often a given sequence of partially masked tokens determines the values of multiple unknown tokens deterministically, meaning that a single prediction of a masked model holds additional information unused by standard sampling procedures. Based on this observation, we introduce EB-Sampler, a simple drop-in replacement for existing samplers, utilizing an Entropy Bounded unmasking procedure that dynamically unmasks multiple tokens in one function evaluation with predefined approximate error tolerance. We formulate the EB-Sampler as part of a broad family of adaptive samplers for which we provide an error analysis that motivates our algorithmic choices. EB-Sampler accelerates sampling from current state of the art MDMs by roughly 2-3x on standard coding and math reasoning benchmarks without loss in performance. We also validate the same procedure works well on smaller reasoning tasks including maze navigation and Sudoku, tasks ARMs often struggle with. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.709917 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24860 | Victoria Webster-Wood | Avery S. Williamson, Michael J. Bennington, Ravesh Sukhnandan, Mrinali Nakhre, Yuemin Mao, Victoria A. Webster-Wood | PB&J: Peanut Butter and Joints for Damped Articulation | to be published in Living Machines 2025 Proceedings | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many bioinspired robots mimic the rigid articulated joint structure of the human hand for grasping tasks, but experience high-frequency mechanical perturbations that can destabilize the system and negatively affect precision without a high-frequency controller. Despite having bandwidth-limited controllers that experience time delays between sensing and actuation, biological systems can respond successfully to and mitigate these high-frequency perturbations. Human joints include damping and stiffness that many rigid articulated bioinspired hand robots lack. To enable researchers to explore the effects of joint viscoelasticity in joint control, we developed a human-hand-inspired grasping robot with viscoelastic structures that utilizes accessible and bioderived materials to reduce the economic and environmental impact of prototyping novel robotic systems. We demonstrate that an elastic element at the finger joints is necessary to achieve concurrent flexion, which enables secure grasping of spherical objects. To significantly damp the manufactured finger joints, we modeled, manufactured, and characterized rotary dampers using peanut butter as an organic analog joint working fluid. Finally, we demonstrated that a real-time position-based controller could be used to successfully catch a lightweight falling ball. We developed this open-source, low-cost grasping platform that abstracts the morphological and mechanical properties of the human hand to enable researchers to explore questions about biomechanics in robots that would otherwise be difficult to test in simulation or modeling. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.70082 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24863 | Junyu Zhang | Junyu Zhang, Runpei Dong, Han Wang, Xuying Ning, Haoran Geng, Peihao Li, Xialin He, Yutong Bai, Jitendra Malik, Saurabh Gupta, Huan Zhang | AlphaOne: Reasoning Models Thinking Slow and Fast at Test Time | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents AlphaOne ($\alpha$1), a universal framework for modulating reasoning progress in large reasoning models (LRMs) at test time. $\alpha$1 first introduces $\alpha$ moment, which represents the scaled thinking phase with a universal parameter $\alpha$. Within this scaled pre-$\alpha$ moment phase, it dynamically schedules slow thinking transitions by modeling the insertion of reasoning transition tokens as a Bernoulli stochastic process. After the $\alpha$ moment, $\alpha$1 deterministically terminates slow thinking with the end-of-thinking token, thereby fostering fast reasoning and efficient answer generation. This approach unifies and generalizes existing monotonic scaling methods by enabling flexible and dense slow-to-fast reasoning modulation. Extensive empirical studies on various challenging benchmarks across mathematical, coding, and scientific domains demonstrate $\alpha$1's superior reasoning capability and efficiency. Project page: https://alphaone-project.github.io/ | 2025-06-02T00:00:00 | no_new_dataset | false | 0.710496 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24864 | Shizhe Diao | Mingjie Liu, Shizhe Diao, Ximing Lu, Jian Hu, Xin Dong, Yejin Choi, Jan Kautz, Yi Dong | ProRL: Prolonged Reinforcement Learning Expands Reasoning Boundaries in Large Language Models | 26 pages, 17 figures | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in reasoning-centric language models have highlighted reinforcement learning (RL) as a promising method for aligning models with verifiable rewards. However, it remains contentious whether RL truly expands a model's reasoning capabilities or merely amplifies high-reward outputs already latent in the base model's distribution, and whether continually scaling up RL compute reliably leads to improved reasoning performance. In this work, we challenge prevailing assumptions by demonstrating that prolonged RL (ProRL) training can uncover novel reasoning strategies that are inaccessible to base models, even under extensive sampling. We introduce ProRL, a novel training methodology that incorporates KL divergence control, reference policy resetting, and a diverse suite of tasks. Our empirical analysis reveals that RL-trained models consistently outperform base models across a wide range of pass@k evaluations, including scenarios where base models fail entirely regardless of the number of attempts. We further show that reasoning boundary improvements correlate strongly with task competence of base model and training duration, suggesting that RL can explore and populate new regions of solution space over time. These findings offer new insights into the conditions under which RL meaningfully expands reasoning boundaries in language models and establish a foundation for future work on long-horizon RL for reasoning. We release model weights to support further research: https://huggingface.co/nvidia/Nemotron-Research-Reasoning-Qwen-1.5B | 2025-06-02T00:00:00 | no_new_dataset | false | 0.709828 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24867 | Mukul Ranjan | Ujjwal Upadhyay and Mukul Ranjan and Zhiqiang Shen and Mohamed Elhoseiny | Time Blindness: Why Video-Language Models Can't See What Humans Can? | Project page at https://timeblindness.github.io/ | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent advances in vision-language models (VLMs) have made impressive strides in understanding spatio-temporal relationships in videos. However, when spatial information is obscured, these models struggle to capture purely temporal patterns. We introduce $\textbf{SpookyBench}$, a benchmark where information is encoded solely in temporal sequences of noise-like frames, mirroring natural phenomena from biological signaling to covert communication. Interestingly, while humans can recognize shapes, text, and patterns in these sequences with over 98% accuracy, state-of-the-art VLMs achieve 0% accuracy. This performance gap highlights a critical limitation: an over-reliance on frame-level spatial features and an inability to extract meaning from temporal cues. Furthermore, when trained in data sets with low spatial signal-to-noise ratios (SNR), temporal understanding of models degrades more rapidly than human perception, especially in tasks requiring fine-grained temporal reasoning. Overcoming this limitation will require novel architectures or training paradigms that decouple spatial dependencies from temporal processing. Our systematic analysis shows that this issue persists across model scales and architectures. We release SpookyBench to catalyze research in temporal pattern recognition and bridge the gap between human and machine video understanding. Dataset and code has been made available on our project website: https://timeblindness.github.io/. | 2025-06-02T00:00:00 | new_dataset | true | 0.660718 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24873 | Bojia Zi | Bojia Zi, Weixuan Peng, Xianbiao Qi, Jianan Wang, Shihao Zhao, Rong Xiao, Kam-Fai Wong | MiniMax-Remover: Taming Bad Noise Helps Video Object Removal | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent advances in video diffusion models have driven rapid progress in video editing techniques. However, video object removal, a critical subtask of video editing, remains challenging due to issues such as hallucinated objects and visual artifacts. Furthermore, existing methods often rely on computationally expensive sampling procedures and classifier-free guidance (CFG), resulting in slow inference. To address these limitations, we propose MiniMax-Remover, a novel two-stage video object removal approach. Motivated by the observation that text condition is not best suited for this task, we simplify the pretrained video generation model by removing textual input and cross-attention layers, resulting in a more lightweight and efficient model architecture in the first stage. In the second stage, we distilled our remover on successful videos produced by the stage-1 model and curated by human annotators, using a minimax optimization strategy to further improve editing quality and inference speed. Specifically, the inner maximization identifies adversarial input noise ("bad noise") that makes failure removals, while the outer minimization step trains the model to generate high-quality removal results even under such challenging conditions. As a result, our method achieves a state-of-the-art video object removal results with as few as 6 sampling steps and doesn't rely on CFG, significantly improving inference efficiency. Extensive experiments demonstrate the effectiveness and superiority of MiniMax-Remover compared to existing methods. Codes and Videos are available at: https://minimax-remover.github.io. 
| 2025-06-02T00:00:00 | no_new_dataset | false | 0.713024 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24874 | Adam Stein | Adam Stein, Aaditya Naik, Neelay Velingker, Mayur Naik, Eric Wong | The Road to Generalizable Neuro-Symbolic Learning Should be Paved with Foundation Models | 19 pages, 11 figures | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Neuro-symbolic learning was proposed to address challenges with training neural networks for complex reasoning tasks with the added benefits of interpretability, reliability, and efficiency. Neuro-symbolic learning methods traditionally train neural models in conjunction with symbolic programs, but they face significant challenges that limit them to simplistic problems. On the other hand, purely-neural foundation models now reach state-of-the-art performance through prompting rather than training, but they are often unreliable and lack interpretability. Supplementing foundation models with symbolic programs, which we call neuro-symbolic prompting, provides a way to use these models for complex reasoning tasks. Doing so raises the question: What role does specialized model training as part of neuro-symbolic learning have in the age of foundation models? To explore this question, we highlight three pitfalls of traditional neuro-symbolic learning with respect to the compute, data, and programs leading to generalization problems. This position paper argues that foundation models enable generalizable neuro-symbolic solutions, offering a path towards achieving the original goals of neuro-symbolic learning without the downsides of training from scratch. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.713314 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24876 | Tajamul Ashraf | Tajamul Ashraf, Amal Saqib, Hanan Ghani, Muhra AlMahri, Yuhao Li, Noor Ahsan, Umair Nawaz, Jean Lahoud, Hisham Cholakkal, Mubarak Shah, Philip Torr, Fahad Shahbaz Khan, Rao Muhammad Anwer, Salman Khan | Agent-X: Evaluating Deep Multimodal Reasoning in Vision-Centric Agentic Tasks | null | null | null | null | cs.CV cs.CL | http://creativecommons.org/licenses/by/4.0/ | Deep reasoning is fundamental for solving complex tasks, especially in vision-centric scenarios that demand sequential, multimodal understanding. However, existing benchmarks typically evaluate agents with fully synthetic, single-turn queries, limited visual modalities, and lack a framework to assess reasoning quality over multiple steps as required in real-world settings. To address this, we introduce Agent-X, a large-scale benchmark for evaluating vision-centric agents multi-step and deep reasoning capabilities in real-world, multimodal settings. Agent- X features 828 agentic tasks with authentic visual contexts, including images, multi-image comparisons, videos, and instructional text. These tasks span six major agentic environments: general visual reasoning, web browsing, security and surveillance, autonomous driving, sports, and math reasoning. Our benchmark requires agents to integrate tool use with explicit, stepwise decision-making in these diverse settings. In addition, we propose a fine-grained, step-level evaluation framework that assesses the correctness and logical coherence of each reasoning step and the effectiveness of tool usage throughout the task. Our results reveal that even the best-performing models, including GPT, Gemini, and Qwen families, struggle to solve multi-step vision tasks, achieving less than 50% full-chain success. These findings highlight key bottlenecks in current LMM reasoning and tool-use capabilities and identify future research directions in vision-centric agentic reasoning models. 
Our data and code are publicly available at https://github.com/mbzuai-oryx/Agent-X | 2025-06-02T00:00:00 | new_dataset | true | 0.711932 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24877 | Yangyi Huang | Yangyi Huang and Ye Yuan and Xueting Li and Jan Kautz and Umar Iqbal | AdaHuman: Animatable Detailed 3D Human Generation with Compositional Multiview Diffusion | Website: https://nvlabs.github.io/AdaHuman | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing methods for image-to-3D avatar generation struggle to produce highly detailed, animation-ready avatars suitable for real-world applications. We introduce AdaHuman, a novel framework that generates high-fidelity animatable 3D avatars from a single in-the-wild image. AdaHuman incorporates two key innovations: (1) A pose-conditioned 3D joint diffusion model that synthesizes consistent multi-view images in arbitrary poses alongside corresponding 3D Gaussian Splats (3DGS) reconstruction at each diffusion step; (2) A compositional 3DGS refinement module that enhances the details of local body parts through image-to-image refinement and seamlessly integrates them using a novel crop-aware camera ray map, producing a cohesive detailed 3D avatar. These components allow AdaHuman to generate highly realistic standardized A-pose avatars with minimal self-occlusion, enabling rigging and animation with any input motion. Extensive evaluation on public benchmarks and in-the-wild images demonstrates that AdaHuman significantly outperforms state-of-the-art methods in both avatar reconstruction and reposing. Code and models will be publicly available for research purposes. | 2025-06-02T00:00:00 | no_new_dataset | false | 0.712166 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2505.24878 | Yaxin Luo | Yaxin Luo, Zhaoyi Li, Jiacheng Liu, Jiacheng Cui, Xiaohan Zhao, Zhiqiang Shen | Open CaptchaWorld: A Comprehensive Web-based Platform for Testing and Benchmarking Multimodal LLM Agents | Code at: https://github.com/MetaAgentX/OpenCaptchaWorld | null | null | null | cs.AI cs.CL cs.CV cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | CAPTCHAs have been a critical bottleneck for deploying web agents in real-world applications, often blocking them from completing end-to-end automation tasks. While modern multimodal LLM agents have demonstrated impressive performance in static perception tasks, their ability to handle interactive, multi-step reasoning challenges like CAPTCHAs is largely untested. To address this gap, we introduce Open CaptchaWorld, the first web-based benchmark and platform specifically designed to evaluate the visual reasoning and interaction capabilities of MLLM-powered agents through diverse and dynamic CAPTCHA puzzles. Our benchmark spans 20 modern CAPTCHA types, totaling 225 CAPTCHAs, annotated with a new metric we propose: CAPTCHA Reasoning Depth, which quantifies the number of cognitive and motor steps required to solve each puzzle. Experimental results show that humans consistently achieve near-perfect scores, state-of-the-art MLLM agents struggle significantly, with success rates at most 40.0% by Browser-Use Openai-o3, far below human-level performance, 93.3%. This highlights Open CaptchaWorld as a vital benchmark for diagnosing the limits of current multimodal agents and guiding the development of more robust multimodal reasoning systems. Code and Data are available at this https URL. | 2025-06-02T00:00:00 | new_dataset | true | 0.702896 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
1810.06981 | Christian Baumgarten | C. Baumgarten | How to (Un-) Quantum Mechanics | 35 Pages, 1 Figure | null | null | null | physics.gen-ph quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When compared to quantum mechanics, classical mechanics is often depicted in a specific metaphysical flavour: spatio-temporal realism or a Newtonian "background" is presented as an intrinsic fundamental classical presumption. However, the Hamiltonian formulation of classical analytical mechanics is based on abstract generalized coordinates and momenta: It is a mathematical rather than a philosophical framework. If the metaphysical assumptions ascribed to classical mechanics are dropped, then there exists a presentation in which little of the purported difference between quantum and classical mechanics remains. This presentation allows to derive the mathematics of relativistic quantum mechanics on the basis of a purely classical Hamiltonian phase space picture. It is shown that a spatio-temporal description is not a condition for but a consequence of objectivity. It requires no postulates. This is achieved by evading spatial notions and assuming nothing but time translation invariance. | 2025-05-30T00:00:00 | no_new_dataset | false | 0.710126 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
1911.11868 | Greg Bodwin | Greg Bodwin and Santosh Vempala | A Unified View of Graph Regularity via Matrix Decompositions | null | null | null | null | cs.DS math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We prove algorithmic weak and \Szemeredi{} regularity lemmas for several classes of sparse graphs in the literature, for which only weak regularity lemmas were previously known. These include core-dense graphs, low threshold rank graphs, and (a version of) $L^p$ upper regular graphs. More precisely, we define \emph{cut pseudorandom graphs}, we prove our regularity lemmas for these graphs, and then we show that cut pseudorandomness captures all of the above graph classes as special cases.
The core of our approach is an abstracted matrix decomposition, roughly following Frieze and Kannan [Combinatorica '99] and \Lovasz{} and Szegedy [Geom.\ Func.\ Anal.\ '07], which can be computed by a simple algorithm by Charikar [AAC0 '00]. This gives rise to the class of cut pseudorandom graphs, and using work of Oveis Gharan and Trevisan [TOC '15], it also implies new PTASes for MAX-CUT, MAX-BISECTION, MIN-BISECTION for a significantly expanded class of input graphs. (It is NP Hard to get PTASes for these graphs in general.) | 2025-05-30T00:00:00 | no_new_dataset | false | 0.707962 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2001.07495 | Daniel Nissani | Daniel N. Nissani (Nissensohn) | Unsupervisedly Learned Representations: Should the Quest be Over? | published at The 6th International Conference on Machine Learning, Optimization and Data Science - LOD 2020 | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | After four decades of research there still exists a Classification accuracy gap of about 20% between our best Unsupervisedly Learned Representations methods and the accuracy rates achieved by intelligent animals. It thus may well be that we are looking in the wrong direction. A possible solution to this puzzle is presented. We demonstrate that Reinforcement Learning can learn representations which achieve the same accuracy as that of animals. Our main modest contribution lies in the observations that: a. when applied to a real world environment Reinforcement Learning does not require labels, and thus may be legitimately considered as Unsupervised Learning, and b. in contrast, when Reinforcement Learning is applied in a simulated environment it does inherently require labels and should thus be generally be considered as Supervised Learning. The corollary of these observations is that further search for Unsupervised Learning competitive paradigms which may be trained in simulated environments may be futile. | 2025-05-30T00:00:00 | no_new_dataset | false | 0.709158 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2007.13121 | Sahil Singla | Danny Segev and Sahil Singla | Efficient Approximation Schemes for Stochastic Probing and Selection-Stopping Problems | 38 pages; the preliminary version appeared in EC 2021 | null | null | null | cs.DS cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a general framework to design {efficient} polynomial time approximation schemes (EPTAS) for fundamental stochastic combinatorial optimization problems. Given an error parameter $\epsilon>0$, such algorithmic schemes attain a $(1-\epsilon)$-approximation in $t(\epsilon)\cdot poly(|{\cal I}|)$ time, where $t(\cdot)$ is a function that depends only on $\epsilon$ and $|{\cal I}|$ denotes the input length. Technically speaking, our approach relies on presenting tailor-made reductions to a newly-introduced multi-dimensional Santa Claus problem. Even though the single-dimensional version of this problem is already known to be APX-Hard, we prove that an EPTAS can be designed for a constant number of machines and dimensions, which hold for each of our applications.
To demonstrate the versatility of our framework, we first study selection-stopping settings to derive an EPTAS for the Free-Order Prophets problem [Agrawal et al., EC~'20] and for its cost-driven generalization, Pandora's Box with Commitment [Fu et al., ICALP~'18]. These results constitute the first approximation schemes in the non-adaptive setting and improve on known \emph{inefficient} polynomial time approximation schemes (PTAS) for their adaptive variants. Next, turning our attention to stochastic probing problems, we obtain an EPTAS for the adaptive ProbeMax problem as well as for its non-adaptive counterpart; in both cases, state-of-the-art approximability results have been inefficient PTASes [Chen et al., NIPS~'16; Fu et al., ICALP~'18]. | 2025-05-30T00:00:00 | no_new_dataset | false | 0.709779 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2111.00814 | Tamas Spisak | Tamas Spisak | Statistical quantification of confounding bias in predictive modelling | 20 pages, 7 figures. The manuscript is associated with the the python package `mlconfound`: https://mlconfound.readthedocs.io See manuscript repository, including fully reproducible analysis code, here: https://github.com/pni-lab/mlconfound-manuscript | GigaScience, Volume 11, 2022, giac082 | 10.1093/gigascience/giac082 | null | cs.LG q-bio.QM stat.ML | http://creativecommons.org/licenses/by/4.0/ | The lack of non-parametric statistical tests for confounding bias significantly hampers the development of robust, valid and generalizable predictive models in many fields of research. Here I propose the partial and full confounder tests, which, for a given confounder variable, probe the null hypotheses of unconfounded and fully confounded models, respectively. The tests provide a strict control for Type I errors and high statistical power, even for non-normally and non-linearly dependent predictions, often seen in machine learning. Applying the proposed tests on models trained on functional brain connectivity data from the Human Connectome Project and the Autism Brain Imaging Data Exchange dataset reveals confounders that were previously unreported or found to be hard to correct for with state-of-the-art confound mitigation approaches. The tests, implemented in the package mlconfound (https://mlconfound.readthedocs.io), can aid the assessment and improvement of the generalizability and neurobiological validity of predictive models and, thereby, foster the development of clinically useful machine learning biomarkers. | 2025-05-30T00:00:00 | no_new_dataset | false | 0.711734 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2111.12835 | Cody Christopher PhD | Cody James Christopher, Kristen Moore, David Liebowitz | SchemaDB: Structures in Relational Datasets | Draft | Comm.Comp.Inf.Sci (CCIS). 1741 (2022) AusDM. 233-243 | 10.1007/978-981-19-8746-5_17 | null | cs.DB cs.LG | http://creativecommons.org/licenses/by/4.0/ | In this paper we introduce the SchemaDB data-set; a collection of relational database schemata in both sql and graph formats. Databases are not commonly shared publicly for reasons of privacy and security, so schemata are not available for study. Consequently, an understanding of database structures in the wild is lacking, and most examples found publicly belong to common development frameworks or are derived from textbooks or engine benchmark designs. SchemaDB contains 2,500 samples of relational schemata found in public repositories which we have standardised to MySQL syntax. We provide our gathering and transformation methodology, summary statistics, and structural analysis, and discuss potential downstream research tasks in several domains. | 2025-05-30T00:00:00 | new_dataset | true | 0.67 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2203.02252 | Qiang Zou | Qiang Zou | Parametric/direct CAD integration | 12 pages; 3 figures | Computer-Aided Design 157 (2023): 103465 | 10.1016/j.cad.2022.103465 | null | cs.GR cs.CG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In the history of computer-aided design (CAD), feature-based parametric modeling and boundary representation-based direct modeling are two of the most important CAD paradigms, developed respectively in the late 1980s and the late 2000s. They have complementary advantages and limitations, thereby offering huge potential for improvement towards an integrated CAD modeling scheme. Some believe that their integration will be the key characteristic of next generation CAD software. This paper provides a brief review on current parametric/direct integration approaches. Their basic ideas, advantages, and disadvantages will be discussed. The main result reads that existing integration approaches are far from being completed if seamless parametric/direct integration is desired. It is hoped that, by outlining what has already been made possible and what still remains problematic, more researchers will be attracted to work on this very important research topic of parametric/direct integration.
This paper serves as a complement to the CAD paper titled ``Variational Direct Modeling: A Framework Towards Integration of Parametric Modeling and Direct Modeling in CAD." Cite this work as follows: Qiang Zou, Hsi-Yung Feng, and Shuming Gao. Variational Direct Modeling: A Framework Towards Integration of Parametric Modeling and Direct Modeling in CAD. Computer-Aided Design 157 (2023): 103465. | 2025-05-30T00:00:00 | no_new_dataset | false | 0.711932 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2204.07971 | Jelena Stratijev | Milo\v{s} Stojakovi\'c and Jelena Stratijev | On strong avoiding games | 23 pages | Discrete Mathematics 346 (2023) 113270 | 10.1016/j.disc.2022.113270 | null | cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given an increasing graph property $\cal F$, the strong Avoider-Avoider $\cal F$ game is played on the edge set of a complete graph. Two players, Red and Blue, take turns in claiming previously unclaimed edges with Red going first, and the player whose graph possesses $\cal F$ first loses the game. If the property $\cal F$ is "containing a fixed graph $H$", we refer to the game as the $H$ game.
We prove that Blue has a winning strategy in two strong Avoider-Avoider games, $P_4$ game and ${\cal CC}_{>3}$ game, where ${\cal CC}_{>3}$ is the property of having at least one connected component on more than three vertices.
We also study a variant, the strong CAvoider-CAvoider games, with additional requirement that the graph of each of the players must stay connected throughout the game. We prove that Blue has a winning strategy in the strong CAvoider-CAvoider games $S_3$ and $P_4$, as well as in the $Cycle$ game, where the players aim at avoiding all cycles. | 2025-05-30T00:00:00 | no_new_dataset | false | 0.709546 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2207.00378 | Micha{\l} Siemaszko | Micha{\l} Siemaszko, Adam Buraczewski, Bertrand Le Saux, Magdalena Stobi\'nska | Rapid training of quantum recurrent neural networks | null | Quantum Mach. Intell. 5, 31 (2023) | 10.1007/s42484-023-00117-0 | 5 (31) | quant-ph cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Time series prediction is essential for human activities in diverse areas. A common approach to this task is to harness Recurrent Neural Networks (RNNs). However, while their predictions are quite accurate, their learning process is complex and, thus, time and energy consuming. Here, we propose to extend the concept of RRNs by including continuous-variable quantum resources in it, and to use a quantum-enhanced RNN to overcome these obstacles. The design of the Continuous-Variable Quantum RNN (CV-QRNN) is rooted in the continuous-variable quantum computing paradigm. By performing extensive numerical simulations, we demonstrate that the quantum network is capable of learning-time dependence of several types of temporal data, and that it converges to the optimal weights in fewer epochs than a classical network. Furthermore, for a small number of trainable parameters, it can achieve lower losses than its classical counterpart. CV-QRNN can be implemented using commercially available quantum-photonic hardware. | 2025-05-30T00:00:00 | no_new_dataset | false | 0.712435 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2209.06725 | Takase Shimizu | Takase Shimizu, Jun-ichiro Ohe, Akira Endo, Taketomo Nakamura, and Shingo Katsumoto | Half-mirror for electrons on quantum Hall copropagating edge channels | null | Phys. Rev. Applied 19, 034085 (2023) | 10.1103/PhysRevApplied.19.034085 | null | cond-mat.mes-hall physics.app-ph quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A half-mirror that divides a spin-polarized electron into two parallel copropagating spin-resolved quantum Hall edge channels one half each is presented in this study. The partition process was coherent, as confirmed by observing the Aharonov-Bohm oscillation at a high visibility of up to 60% in a Mach-Zehnder interferometer, which comprised two such half-mirrors. The device characteristics were highly stable, making the device promising in the application of quantum information processing. The beam-splitting process is theoretically modelled, and the numerical simulation successfully reproduces the experimental observation. The partition of the electron accompanied by the spin rotation is explained by the angular momentum transfer from the orbital to the spin via spin-orbit interactions. | 2025-05-30T00:00:00 | no_new_dataset | false | 0.712092 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2210.15709 | Gunnar K\"onig | Gunnar K\"onig, Timo Freiesleben, Moritz Grosse-Wentrup | Improvement-Focused Causal Recourse (ICR) | under review | null | 10.1609/aaai.v37i10.26398 | null | stat.ML cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Algorithmic recourse recommendations, such as Karimi et al.'s (2021) causal recourse (CR), inform stakeholders of how to act to revert unfavourable decisions. However, some actions lead to acceptance (i.e., revert the model's decision) but do not lead to improvement (i.e., may not revert the underlying real-world state). To recommend such actions is to recommend fooling the predictor. We introduce a novel method, Improvement-Focused Causal Recourse (ICR), which involves a conceptual shift: Firstly, we require ICR recommendations to guide towards improvement. Secondly, we do not tailor the recommendations to be accepted by a specific predictor. Instead, we leverage causal knowledge to design decision systems that predict accurately pre- and post-recourse. As a result, improvement guarantees translate into acceptance guarantees. We demonstrate that given correct causal knowledge, ICR, in contrast to existing approaches, guides towards both acceptance and improvement. | 2025-05-30T00:00:00 | no_new_dataset | false | 0.712262 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2301.08028 | Jacob Beck | Jacob Beck, Risto Vuorio, Evan Zheran Liu, Zheng Xiong, Luisa Zintgraf, Chelsea Finn, Shimon Whiteson | A Tutorial on Meta-Reinforcement Learning | Published in Foundations and Trends in Machine Learning as "A Tutorial on Meta-Reinforcement Learning". For the earlier version titled "A Survey of Meta-Reinforcement Learning", see v3 in the submission history at arXiv:2301.08028v3 | Foundations and Trends in Machine Learning: Vol. 18, No. 2-3, pp 224-384 (2025) | 10.1561/2200000080 | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | While deep reinforcement learning (RL) has fueled multiple high-profile successes in machine learning, it is held back from more widespread adoption by its often poor data efficiency and the limited generality of the policies it produces. A promising approach for alleviating these limitations is to cast the development of better RL algorithms as a machine learning problem itself in a process called meta-RL. Meta-RL is most commonly studied in a problem setting where, given a distribution of tasks, the goal is to learn a policy that is capable of adapting to any new task from the task distribution with as little data as possible. In this survey, we describe the meta-RL problem setting in detail as well as its major variations. We discuss how, at a high level, meta-RL research can be clustered based on the presence of a task distribution and the learning budget available for each individual task. Using these clusters, we then survey meta-RL algorithms and applications. We conclude by presenting the open problems on the path to making meta-RL part of the standard toolbox for a deep RL practitioner. | 2025-05-30T00:00:00 | no_new_dataset | false | 0.711847 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2302.04363 | Alexander Jung | S. Abdurakhmanova, Y. SarcheshmehPour and A. Jung | Plug In and Learn: Federated Intelligence over a Smart Grid of Models | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | We present a model-agnostic federated learning method that mirrors the operation of a smart power grid: diverse local models, like energy prosumers, train independently on their own data while exchanging lightweight signals to coordinate with statistically similar peers. This coordination is governed by a graph-based regularizer that encourages connected models to produce similar predictions on a shared, public unlabeled dataset. The resulting method is a flexible instance of regularized empirical risk minimization and supports a wide variety of local models - both parametric and non-parametric - provided they can be trained via regularized loss minimization. Such training is readily supported by standard ML libraries including scikit-learn, Keras, and PyTorch. | 2025-05-30T00:00:00 | no_new_dataset | false | 0.710025 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2303.09117 | Weixing Chen | Weixing Chen, Yang Liu, Ce Wang, Jiarui Zhu, Guanbin Li, Cheng-Lin Liu and Liang Lin | Cross-Modal Causal Intervention for Medical Report Generation | Accepted by IEEE TIP 2025, 16 pages, 11 figures, 7 tables | IEEE Transactions on Image Processing 34 (2025) 2970-2985 | 10.1109/TIP.2025.3568746 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Radiology Report Generation (RRG) is essential for computer-aided diagnosis and medication guidance, which can relieve the heavy burden of radiologists by automatically generating the corresponding radiology reports according to the given radiology image. However, generating accurate lesion descriptions remains challenging due to spurious correlations from visual-linguistic biases and inherent limitations of radiological imaging, such as low resolution and noise interference. To address these issues, we propose a two-stage framework named CrossModal Causal Representation Learning (CMCRL), consisting of the Radiological Cross-modal Alignment and Reconstruction Enhanced (RadCARE) pre-training and the Visual-Linguistic Causal Intervention (VLCI) fine-tuning. In the pre-training stage, RadCARE introduces a degradation-aware masked image restoration strategy tailored for radiological images, which reconstructs high-resolution patches from low-resolution inputs to mitigate noise and detail loss. Combined with a multiway architecture and four adaptive training strategies (e.g., text postfix generation with degraded images and text prefixes), RadCARE establishes robust cross-modal correlations even with incomplete data. In the VLCI phase, we deploy causal front-door intervention through two modules: the Visual Deconfounding Module (VDM) disentangles local-global features without fine-grained annotations, while the Linguistic Deconfounding Module (LDM) eliminates context bias without external terminology databases. 
Experiments on IU-Xray and MIMIC-CXR show that our CMCRL pipeline significantly outperforms state-of-the-art methods, with ablation studies confirming the necessity of both stages. Code and models are available at https://github.com/WissingChen/CMCRL. | 2025-05-30T00:00:00 | no_new_dataset | false | 0.712222 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2305.03112 | Lechao Cheng | Lechao Cheng, Zerun Liu, Jingxuan He, Chaowei Fang, Dingwen Zhang, Meng Wang | Calibrating Undisciplined Over-Smoothing in Transformer for Weakly Supervised Semantic Segmentation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Weakly supervised semantic segmentation (WSSS) has recently attracted considerable attention because it requires fewer annotations than fully supervised approaches, making it especially promising for large-scale image segmentation tasks. Although many vision transformer-based methods leverage self-attention affinity matrices to refine Class Activation Maps (CAMs), they often treat each layer's affinity equally and thus introduce considerable background noise at deeper layers, where attention tends to converge excessively on certain tokens (i.e., over-smoothing). We observe that this deep-level attention naturally converges on a subset of tokens, yet unregulated query-key affinity can generate unpredictable activation patterns (undisciplined over-smoothing), adversely affecting CAM accuracy. To address these limitations, we propose an Adaptive Re-Activation Mechanism (AReAM), which exploits shallow-level affinity to guide deeper-layer convergence in an entropy-aware manner, thereby suppressing background noise and re-activating crucial semantic regions in the CAMs. Experiments on two commonly used datasets demonstrate that AReAM substantially improves segmentation performance compared with existing WSSS methods, reducing noise while sharpening focus on relevant semantic regions. Overall, this work underscores the importance of controlling deep-level attention to mitigate undisciplined over-smoothing, introduces an entropy-aware mechanism that harmonizes shallow and deep-level affinities, and provides a refined approach to enhance transformer-based WSSS accuracy by re-activating CAMs. 
| 2025-05-30T00:00:00 | no_new_dataset | false | 0.711815 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
2306.17159 | Jean-Philip Piquemal | C\'esar Feniou, Muhammad Hassan, Baptiste Claudon, Axel Courtat, Olivier Adjoua, Yvon Maday, Jean-Philip Piquemal | Greedy Gradient-free Adaptive Variational Quantum Algorithms on a Noisy Intermediate Scale Quantum Computer | null | Scientific Reports, 2025, 15, 18689 | 10.1038/s41598-025-99962-1 | null | quant-ph physics.chem-ph | http://creativecommons.org/licenses/by/4.0/ | Hybrid quantum-classical adaptive Variational Quantum Eigensolvers (VQE) hold the potential to outperform classical computing for simulating many-body quantum systems. However, practical implementations on current quantum processing units (QPUs) are challenging due to the noisy evaluation of a polynomially scaling number of observables, undertaken for operator selection and high-dimensional cost function optimization. We introduce an adaptive algorithm using analytic, gradient-free optimization, called Greedy Gradient-free Adaptive VQE (GGA-VQE). In addition to demonstrating the algorithm's improved resilience to statistical sampling noise in the computation of simple molecular ground states, we execute GGA-VQE on a 25-qubit error-mitigated QPU by computing the ground state of a 25-body Ising model. Although hardware noise on the QPU produces inaccurate energies, our implementation outputs a parameterized quantum circuit yielding a favorable ground-state approximation. We demonstrate this by retrieving the parameterized operators calculated on the QPU and evaluating the resulting ansatz wave-function via noiseless emulation (i.e., hybrid observable measurement). | 2025-05-30T00:00:00 | no_new_dataset | false | 0.709817 | 2026-03-01T00:45:07.287856 | davanstrien/ModernBERT-base-is-new-arxiv-dataset |
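Each row above pairs an arXiv record with a binary new-dataset classification (`is_new_dataset`), a `confidence_score`, and the classifier's `model_version`. A minimal sketch of how rows with this schema might be filtered once loaded — in practice one would load the full split via the `datasets` library, but the repo id is not shown here, so the sample below uses a small in-memory list with values copied from the rows above:

```python
# Rows mimicking the preview's schema: title, is_new_dataset, confidence_score.
# The three sample records are copied from rows shown in the preview above.
rows = [
    {"title": "Time Blindness: Why Video-Language Models Can't See What Humans Can?",
     "is_new_dataset": True, "confidence_score": 0.660718},
    {"title": "MiniMax-Remover: Taming Bad Noise Helps Video Object Removal",
     "is_new_dataset": False, "confidence_score": 0.713024},
    {"title": "SchemaDB: Structures in Relational Datasets",
     "is_new_dataset": True, "confidence_score": 0.67},
]

# Keep only papers flagged as introducing a new dataset, above a confidence floor.
CONFIDENCE_FLOOR = 0.65
new_dataset_papers = [
    r for r in rows
    if r["is_new_dataset"] and r["confidence_score"] >= CONFIDENCE_FLOOR
]

for r in new_dataset_papers:
    print(r["title"])
```

The same filter expressed against a loaded `datasets.Dataset` object would use `ds.filter(lambda r: r["is_new_dataset"] and r["confidence_score"] >= 0.65)`; the threshold of 0.65 is an arbitrary illustrative choice, not part of the dataset.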