SentenceTransformer based on Qwen/Qwen3-Embedding-0.6B

This is a sentence-transformers model finetuned from Qwen/Qwen3-Embedding-0.6B. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Qwen/Qwen3-Embedding-0.6B
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
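
As a quick sanity check, these properties can be read back from the loaded model. A minimal sketch, assuming the model is published under the repository id atx-labs/tsdae-Qwen3-Embedding-0.6B-cms_cfr used elsewhere on this card:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("atx-labs/tsdae-Qwen3-Embedding-0.6B-cms_cfr")

print(model.get_sentence_embedding_dimension())  # 1024
print(model.max_seq_length)                      # 512
print(model.similarity_fn_name)                  # cosine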

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'Qwen3Model'})
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
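
The same two-module stack can also be assembled by hand, which makes the pooling configuration explicit. A minimal sketch, assuming the fine-tuned weights are loaded from the repository named above (variable names are illustrative):

from sentence_transformers import SentenceTransformer, models

repo_id = "atx-labs/tsdae-Qwen3-Embedding-0.6B-cms_cfr"

# (0) Transformer backbone, truncating inputs at 512 tokens
transformer = models.Transformer(repo_id, max_seq_length=512)

# (1) Pooling over the 1024-dimensional token embeddings, CLS-token mode as in the config above
pooling = models.Pooling(transformer.get_word_embedding_dimension(), pooling_mode="cls")

model = SentenceTransformer(modules=[transformer, pooling])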

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("atx-labs/tsdae-Qwen3-Embedding-0.6B-cms_cfr")
# Run inference
sentences = [
    '(Rev. 38, 10-31-03) Cost plans who choose this option will follow the following rules: 1\\ Enrollments will be effective the first day of the month after the month the cost plan receives an enrollment form The cost plan must be open to accept such enrollments. 2\\ November 15 through December 31 of every year: Enrollments received during this time period will be effective January 1 of the following year (except as noted below). (NOTE: Enrollments made between November 15 and November 30 may be effective December 1 or January 1 The cost plan must allow the individual to choose the effective date',
    '(Rev. 38, 10-31-03) Cost plans who choose this option will follow the following rules: 1\\ Enrollments will be effective the first day of the month after the month the cost plan receives an enrollment form The cost plan must be open to accept such enrollments. 2\\ November 15 through December 31 of every year: Enrollments received during this time period will be effective January 1 of the following year (except as noted below). (NOTE: Enrollments made between November 15 and November 30 may be effective December 1 or January 1 The cost plan must allow the individual to choose the effective date',
    '40.10.1 - Traveling Beneficiaries and Transfer of Title of Oxygen Equipment or Capped Rental Items (Rev. 1532, Issued: 06-11-08, Effective: 07-01-08, Implementation: 07-07-08) If a beneficiary has two residences in different areas and uses a local supplier in each area or if a beneficiary changes suppliers during or after the rental period, this does not result in a new rental episode The supplier that provides the item in the 36th month of a rental episode for oxygen equipment or the 13th month of a rental episode for capped rental DME is responsible for transferring title to the equipment to the beneficiary',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000,  1.0000, -0.0147],
#         [ 1.0000,  1.0000, -0.0147],
#         [-0.0147, -0.0147,  1.0000]])
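
The embeddings also work for simple semantic search over a small corpus. A minimal sketch with illustrative query and corpus strings (only the repository id comes from this card):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("atx-labs/tsdae-Qwen3-Embedding-0.6B-cms_cfr")

# Hypothetical corpus and query, for illustration only
corpus = [
    "Enrollments will be effective the first day of the month after the month the cost plan receives an enrollment form.",
    "The supplier that provides the item in the 36th month of a rental episode for oxygen equipment is responsible for transferring title to the equipment to the beneficiary.",
]
query = "Who must transfer title to oxygen equipment after the rental period?"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode(query)

# Rank the corpus by cosine similarity to the query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
print(hits[0])  # e.g. [{'corpus_id': 1, 'score': ...}, {'corpus_id': 0, 'score': ...}]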

Training Details

Training Dataset

Unnamed Dataset

  • Size: 60,906 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 1000 samples:
                sentence_0            sentence_1
    type        string                string
    details     min: 34 tokens        min: 34 tokens
                mean: 147.66 tokens   mean: 147.66 tokens
                max: 512 tokens       max: 512 tokens
  • Samples:
    In each of the samples below, sentence_0 and sentence_1 contain identical text:
    • sentence_0 = sentence_1: Call 1-800-MEDICARE (1-800-633-4227) for a copy of the LCD" Spanish Version - Las Determinaciones Locales de Cobertura (LCDs en inglés) le ayudan a decidir a Medicare lo que está cubierto Usted puede comparar su caso con la determinación y enviar información de su médico si piensa que puede cambiar nuestra decisión Para obtener una copia del LCD, llame al 1-800-MEDICARE (1-800-633-4227) MSN 15.20: "The following policies NCD 210.2.1 were used when we made this decision." Spanish Version - "Las siguientes políticas NCD 210.2.1 fueron utilizadas cuando se tomó esta decisión."
    • sentence_0 = sentence_1: or (2) If the primary completion date is on or after January 18, 2017, the responsible party must submit the clinical trial results information specified in § 11.48 Applicable clinical trials for which the studied product is not approved, licensed, or cleared by FDA. (b) Unless a waiver of the requirement to submit clinical trial results information is granted in accordance with § 11.54, clinical trial results information specified in § 11.48 must be submitted for any applicable clinical trial with a primary completion date on or after January 18, 2017 for which clinical trial registration information is required to be submitted and for which the studied product is not approved, licensed, or cleared by FDA.
    • sentence_0 = sentence_1: Document the specific type of transactions and transmission methods to be utilized and secure authorizations from the provider or other trading partner requesting to exchange electronic administrative transactions, and 2\ Designate the A/B MACs and CEDI with whom the provider or other trading partner agrees to engage in EDI and implements standard policies and practices to ensure the security and integrity of the information to be exchanged Under HIPAA, EDI applies to all covered entities transmitting the following administrative transactions: ASC X12 837 institutional claim and ASC X12 837 professional claim, ASC X12 835 remittance advice, ASC X12 270/271 eligibility, ASC X12 276/277 claim status and NCPDP claim (and others that are not used by Medicare at this time)
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
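
A minimal sketch of how a dataset with these two columns and this loss could be wired together; the column names, dataset size, and loss parameters come from the description above, while the example texts, the base-model starting point, and everything else are illustrative assumptions:

from datasets import Dataset
from sentence_transformers import SentenceTransformer, losses, util

# Assumption: fine-tuning starts from the base checkpoint
model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")

# Illustrative stand-in for the 60,906-pair training set; in this card's samples,
# sentence_0 and sentence_1 hold identical text
train_dataset = Dataset.from_dict({
    "sentence_0": ["example passage one", "example passage two"],
    "sentence_1": ["example passage one", "example passage two"],
})

# MultipleNegativesRankingLoss with the parameters listed above
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)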
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • multi_dataset_batch_sampler: round_robin
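
A minimal end-to-end training sketch using these non-default values; the model and data stand-ins mirror the illustrative sketch in the Training Dataset section, and the output directory and any hyperparameter not listed on this card are assumptions:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

# Illustrative model and data (see the Training Dataset sketch above)
model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")
train_dataset = Dataset.from_dict({
    "sentence_0": ["example passage one", "example passage two"],
    "sentence_1": ["example passage one", "example passage two"],
})
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="output",                        # assumed path
    num_train_epochs=3,                         # from the full list below
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()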

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch Step Training Loss
0.2626 500 0.2982
0.5252 1000 0.2878
0.7878 1500 0.2911
1.0504 2000 0.2956
1.3130 2500 0.3014
1.5756 3000 0.2941
1.8382 3500 0.285
2.1008 4000 0.2937
2.3634 4500 0.2902
2.6261 5000 0.2954
2.8887 5500 0.2856

Framework Versions

  • Python: 3.12.6
  • Sentence Transformers: 5.2.0
  • Transformers: 4.56.0
  • PyTorch: 2.8.0+cu129
  • Accelerate: 1.10.1
  • Datasets: 4.4.1
  • Tokenizers: 0.22.0
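
To approximate this environment, the versions above can be pinned at install time (a sketch; a matching CUDA build of PyTorch may need to come from the PyTorch index instead):

pip install sentence-transformers==5.2.0 transformers==4.56.0 accelerate==1.10.1 datasets==4.4.1 tokenizers==0.22.0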

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}