Dataset schema (column: type and observed range):
- repo: stringclasses (20 values)
- pull_number: float64 (116 to 189k)
- instance_id: stringlengths (17 to 34)
- issue_numbers: stringlengths (7 to 27)
- base_commit: stringlengths (40 to 40)
- patch: stringlengths (294 to 136k)
- test_patch: stringlengths (405 to 47.1k)
- problem_statement: stringlengths (148 to 24k)
- hints_text: stringlengths (1 to 33.2k, nullable)
- created_at: stringdate (2016-08-20 07:52:07 to 2024-07-18 05:28:29)
- language: stringclasses (4 values)
- Dockerfile: stringlengths (486 to 3.42k)
- P2P: stringlengths (2 to 224k)
- F2P: stringlengths (14 to 9.06k)
- F2F: stringclasses (23 values)
- test_command: stringlengths (27 to 951)
- task_category: stringclasses (3 values)
- is_no_nodes: bool (2 classes)
- is_func_only: bool (2 classes)
- is_class_only: bool (2 classes)
- is_mixed: bool (2 classes)
- num_func_changes: int64 (0 to 238)
- num_class_changes: int64 (0 to 26)
- num_nodes: int64 (0 to 264)
- is_single_func: bool (2 classes)
- is_single_class: bool (2 classes)
- modified_nodes: stringlengths (2 to 42.2k)
huggingface/transformers | 21,969 | huggingface__transformers-21969 | ['21915'] | 0bb17295f04e565c94a79960ff7f7b6cd03acbfc | diff --git a/src/transformers/image_transforms.py b/src/transformers/image_transforms.py
--- a/src/transformers/image_transforms.py
+++ b/src/transformers/image_transforms.py
@@ -131,7 +131,8 @@ def to_pil_image(
The image to convert to the `PIL.Image` format.
do_rescale (`bool`, *optional*):
... | diff --git a/tests/test_image_transforms.py b/tests/test_image_transforms.py
--- a/tests/test_image_transforms.py
+++ b/tests/test_image_transforms.py
@@ -96,6 +96,11 @@ def test_to_pil_image_from_float(self, name, image_shape, dtype):
# make sure image is correctly rescaled
self.assertTrue(np.abs(np.... | Mask2Former ImageProcessor produces different results on Mac vs Windows.
### System Info
>>> transformers.__version__
'4.27.0.dev0'
>>> Python 3.10.6
Windows vs Mac
### Who can help?
@amyeroberts
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ... | Here is the image I used.

Also cc @alaradirik
Thanks for raising this issue @nickponline and for all the details!
Could you give details on how you're reading in the image e.g. through torchvision and th... | 2023-03-06 14:38:39+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/test_image_transforms.py:ImageTransformsTester:test_to_pil_image_from_torch', 'tests/test_image_transforms.py:ImageTransformsTester:test_center_to_corners_format', 'tests/test_image_transforms.py:ImageTransformsTester:test_id_to_rgb', 'tests/test_image_transforms.py:ImageTransformsTester:test_normalize', 'tests... | ['tests/test_image_transforms.py:ImageTransformsTester:test_to_pil_image_from_float_1_numpy_float_channels_first', 'tests/test_image_transforms.py:ImageTransformsTester:test_to_pil_image_from_float_0_numpy_float_channels_first', 'tests/test_image_transforms.py:ImageTransformsTester:test_to_pil_image_from_float_3_numpy_... | null | pytest -v --tb=short --show-capture=no /testbed/tests/test_image_transforms.py --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/image_transforms.py->module->function_definition:to_pil_image"] |
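The patch above touches `to_pil_image` in `image_transforms.py`, whose rescaling decision depends on the input's dtype and value range. Below is a minimal plain-Python sketch of that kind of heuristic; the names and logic are illustrative, not the actual patch, which operates on NumPy arrays.

```python
def needs_rescale(values, is_float_dtype):
    """Decide whether pixel values should be scaled up to 0-255 before
    PIL conversion. Float images whose values all lie in [0, 1] are
    treated as normalized; everything else is assumed to already be in
    the 0-255 range. Illustrative sketch, not the library code."""
    if not is_float_dtype:
        return False
    return all(0.0 <= v <= 1.0 for v in values)

print(needs_rescale([0.0, 0.5, 1.0], is_float_dtype=True))      # True
print(needs_rescale([0.0, 127.5, 255.0], is_float_dtype=True))  # False
```

Platform differences in how an image library loads pixels (float vs. integer dtype) are exactly where such a heuristic can diverge, which matches the Mac-vs-Windows symptom reported in the issue.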
huggingface/transformers | 22,158 | huggingface__transformers-22158 | ['22147'] | 3b22bfbc6afbf7aa65ce0f255e3c75a0dd7524d3 | diff --git a/src/transformers/image_transforms.py b/src/transformers/image_transforms.py
--- a/src/transformers/image_transforms.py
+++ b/src/transformers/image_transforms.py
@@ -156,12 +156,20 @@ def to_pil_image(
# If there is a single channel, we squeeze it, as otherwise PIL can't handle it.
image = np.squ... | diff --git a/tests/test_image_transforms.py b/tests/test_image_transforms.py
--- a/tests/test_image_transforms.py
+++ b/tests/test_image_transforms.py
@@ -101,6 +101,27 @@ def test_to_pil_image_from_float(self, name, image_shape, dtype):
with self.assertRaises(ValueError):
to_pil_image(image)
+ ... | OneFormerProcessor and MaskFormerImageProcessor will cause errors if segmentation_maps only have elements 0 and 1
### System Info
transformers-4.26.0 do not have this bug
but transformers-4.27.0.dev0 has.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts... | cc @amyeroberts @alaradirik | 2023-03-14 14:05:52+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/test_image_transforms.py:ImageTransformsTester:test_get_resize_output_image_size', 'tests/test_image_transforms.py:ImageTransformsTester:test_resize', 'tests/test_image_transforms.py:ImageTransformsTester:test_to_pil_image_5_numpy_uint_channels_first', 'tests/test_image_transforms.py:ImageTransformsTester:test_... | ['tests/test_image_transforms.py:ImageTransformsTester:test_to_pil_image_from_mask'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/test_image_transforms.py --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/image_transforms.py->module->function_definition:to_pil_image"] |
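This fix extends `to_pil_image` so binary segmentation maps survive the conversion. The subtlety, sketched below in plain Python (illustrative, not the actual patch), is that a mask holding only 0s and 1s must not be mistaken for a normalized float image and rescaled:

```python
def to_pil_values(values, is_float_dtype):
    """Rescale normalized float images to 0-255, but pass integer data,
    including binary {0, 1} masks, through unchanged. Sketch only."""
    if is_float_dtype and all(0.0 <= v <= 1.0 for v in values):
        return [int(round(v * 255)) for v in values]
    return [int(v) for v in values]

print(to_pil_values([0, 1, 1, 0], is_float_dtype=False))    # [0, 1, 1, 0]
print(to_pil_values([0.0, 0.5, 1.0], is_float_dtype=True))  # [0, 128, 255]
```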
huggingface/transformers | 22,458 | huggingface__transformers-22458 | ['22392'] | cd73b9a8c140fb74cd93187f5c3d380cfc308023 | diff --git a/src/transformers/image_transforms.py b/src/transformers/image_transforms.py
--- a/src/transformers/image_transforms.py
+++ b/src/transformers/image_transforms.py
@@ -118,6 +118,33 @@ def rescale(
return rescaled_image
+def _rescale_for_pil_conversion(image):
+ """
+ Detects whether or not th... | diff --git a/tests/test_image_transforms.py b/tests/test_image_transforms.py
--- a/tests/test_image_transforms.py
+++ b/tests/test_image_transforms.py
@@ -249,6 +249,14 @@ def test_resize(self):
# PIL size is in (width, height) order
self.assertEqual(resized_image.size, (40, 30))
+ # Check an... | Inconsistent Normalization for ViTImageProcessor when `do_resize` is False
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.31
- Python version: 3.10.9
- Huggingface_hub version: 0.13.2
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?):... | cc @amyeroberts
Hi @Interpause, thanks for raising this issue!
Indeed, this is a funny behaviour. This is happening because of the use of the PIL library to resize images and the rescaling behaviour that happens in `ToTensor`.
To explain in more detail, I'll refer to the input `im` and `im_pil` and `to_tens(im... | 2023-03-29 20:03:48+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/test_image_transforms.py:ImageTransformsTester:test_get_resize_output_image_size', 'tests/test_image_transforms.py:ImageTransformsTester:test_to_pil_image_5_numpy_uint_channels_first', 'tests/test_image_transforms.py:ImageTransformsTester:test_id_to_rgb', 'tests/test_image_transforms.py:ImageTransformsTester:te... | ['tests/test_image_transforms.py:ImageTransformsTester:test_resize'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/test_image_transforms.py | Bug Fix | true | false | false | false | 0 | 0 | 0 | false | false | ["src/transformers/image_transforms.py->module->function_definition:to_pil_image", "src/transformers/image_transforms.py->module->function_definition:resize", "src/transformers/image_transforms.py->module->function_definition:_rescale_for_pil_conversion"] |
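The patch introduces a `_rescale_for_pil_conversion` helper so that `resize` can undo, on the way out, any rescaling it applied on the way in, keeping output values on the same scale as the input. A rough plain-Python sketch of that round-trip (illustrative only):

```python
def resize_roundtrip(values, resize_fn):
    """If the image had to be rescaled to 0-255 for PIL, scale the
    resized result back down so the output range matches the input
    range. Note an all-integer {0, 1} mask is NOT treated as
    normalized here. Sketch, not the library code."""
    was_rescaled = all(0.0 <= v <= 1.0 for v in values) and any(
        isinstance(v, float) for v in values
    )
    if was_rescaled:
        values = [v * 255 for v in values]
    out = resize_fn(values)
    if was_rescaled:
        out = [v / 255 for v in out]
    return out

# An identity "resize" keeps a normalized image normalized.
print(resize_roundtrip([0.0, 0.5, 1.0], lambda v: v))  # [0.0, 0.5, 1.0]
```

This is why, after the fix, skipping `do_resize` no longer changes the effective normalization of the output.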
huggingface/transformers | 22,649 | huggingface__transformers-22649 | ['21685'] | ee8e80a060d65ab349743ffcb5842365eb0e5606 | diff --git a/src/transformers/models/opt/modeling_opt.py b/src/transformers/models/opt/modeling_opt.py
--- a/src/transformers/models/opt/modeling_opt.py
+++ b/src/transformers/models/opt/modeling_opt.py
@@ -631,19 +631,21 @@ def forward(
else:
raise ValueError("You have to specify either decoder_i... | diff --git a/tests/models/opt/test_modeling_opt.py b/tests/models/opt/test_modeling_opt.py
--- a/tests/models/opt/test_modeling_opt.py
+++ b/tests/models/opt/test_modeling_opt.py
@@ -182,6 +182,19 @@ def create_and_check_decoder_model_past_large_inputs(self, config, inputs_dict):
# test that outputs are equal ... | `modeling_opt.py` if `previous_key_values` given and `attention_mask==None` the model throws an error.
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-4.18.0-147.el8.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.16
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1 (False)
... | Hey! Thanks for submitting this issue!
Passing attention maks solves the problem, and usually we expect to pass attention masks when you are using the `past_key_values`(for example in generate). It is debatable whether the default behaviour should rely on the past_key_values.
Do you have a specific usage in mind? ... | 2023-04-07 09:02:52+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/opt/test_modeling_opt.py:OPTModelTest:test_inputs_embeds', 'tests/models/opt/test_modeling_opt.py:OPTModelTest:test_model_common_attributes', 'tests/models/opt/test_modeling_opt.py:OPTModelTest:test_training', 'tests/models/opt/test_modeling_opt.py:OPTModelTest:test_forward_signature', 'tests/models/opt/... | ['tests/models/opt/test_modeling_opt.py:OPTModelTest:test_decoder_model_past_with_large_inputs'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/models/opt/test_modeling_opt.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/models/opt/modeling_opt.py->module->class_definition:OPTDecoder->function_definition:forward"] |
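The fix makes `OPTDecoder.forward` build a usable default attention mask when `past_key_values` are supplied without one. The shape requirement, sketched in plain Python (illustrative, not the model code), is that the default all-ones mask must cover the cached positions as well as the new tokens:

```python
def default_attention_mask(batch_size, seq_len, past_len):
    """When no mask is given, build an all-ones mask spanning both the
    cached (past) positions and the new tokens, i.e. of length
    past_len + seq_len rather than seq_len alone. Sketch only."""
    return [[1] * (past_len + seq_len) for _ in range(batch_size)]

mask = default_attention_mask(batch_size=2, seq_len=1, past_len=4)
print(len(mask), len(mask[0]))  # 2 5
```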
huggingface/transformers | 22,920 | huggingface__transformers-22920 | ['22904'] | 1e1cb6f8e5af1c592ed7d6ca035b0e07297e52b8 | diff --git a/src/transformers/models/sam/image_processing_sam.py b/src/transformers/models/sam/image_processing_sam.py
--- a/src/transformers/models/sam/image_processing_sam.py
+++ b/src/transformers/models/sam/image_processing_sam.py
@@ -378,12 +378,13 @@ def post_process_masks(
Remove padding and upscale mas... | diff --git a/tests/models/sam/test_processor_sam.py b/tests/models/sam/test_processor_sam.py
--- a/tests/models/sam/test_processor_sam.py
+++ b/tests/models/sam/test_processor_sam.py
@@ -17,8 +17,8 @@
import numpy as np
-from transformers.testing_utils import require_torchvision, require_vision
-from transformers.... | SAM: Notebook example not working
### System Info
- `transformers` version: 4.29.0.dev0
- Platform: macOS-13.2-arm64-arm-64bit
- Python version: 3.10.6
- Huggingface_hub version: 0.13.4
- Safetensors version: 0.3.0
- PyTorch version (GPU?): 1.13.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax v... | I have similar issue when i run
```
img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
input_points = [[[450, 600]]] # 2D location of a window in the image
inputs = processor(raw_image, input_p... | 2023-04-21 13:38:26+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/sam/test_processor_sam.py:SamProcessorTest:test_image_processor', 'tests/models/sam/test_processor_sam.py:SamProcessorTest:test_save_load_pretrained_additional_features'] | ['tests/models/sam/test_processor_sam.py:SamProcessorTest:test_post_process_masks'] | null | pytest -v --tb=short --show-capture=no --junitxml=test-results.xml /testbed/tests/models/sam/test_processor_sam.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/models/sam/image_processing_sam.py->module->class_definition:SamImageProcessor->function_definition:post_process_masks"] |
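The patch adjusts `post_process_masks` so its size arguments work regardless of how they arrive. A stand-in sketch of that kind of normalization, tolerating both tensor-likes and plain lists (the names and behaviour here are assumptions for illustration, not the actual SAM processor code):

```python
class FakeTensor:
    """Minimal stand-in for a tensor exposing .tolist()."""
    def __init__(self, data):
        self._data = data

    def tolist(self):
        return self._data

def normalize_sizes(sizes):
    """Accept sizes either as a tensor-like with .tolist() or as a plain
    list of (height, width) pairs, returning tuples. Illustrative only."""
    if hasattr(sizes, "tolist"):
        sizes = sizes.tolist()
    return [tuple(hw) for hw in sizes]

print(normalize_sizes(FakeTensor([[1764, 2646]])))  # [(1764, 2646)]
print(normalize_sizes([[1764, 2646]]))              # [(1764, 2646)]
```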
huggingface/transformers | 23,126 | huggingface__transformers-23126 | ['20249'] | b61d5b47f640308068139561f673765b2af39874 | diff --git a/src/transformers/hf_argparser.py b/src/transformers/hf_argparser.py
--- a/src/transformers/hf_argparser.py
+++ b/src/transformers/hf_argparser.py
@@ -15,6 +15,7 @@
import dataclasses
import json
import sys
+import types
from argparse import ArgumentDefaultsHelpFormatter, ArgumentParser, ArgumentTypeErr... | diff --git a/tests/utils/test_hf_argparser.py b/tests/utils/test_hf_argparser.py
--- a/tests/utils/test_hf_argparser.py
+++ b/tests/utils/test_hf_argparser.py
@@ -15,6 +15,7 @@
import argparse
import json
import os
+import sys
import tempfile
import unittest
from argparse import Namespace
@@ -36,6 +37,10 @@
... | Support X | Y syntax on HfArgumentParser
### Feature request
[PEP-604](https://peps.python.org/pep-0604/) created the X | Y syntax on python 3.10, which is equivalent to Union[X, Y]. The use of this syntax is not supported by HfArgumentParser.
### Motivation
With this syntax I would like to use something lik... | Looks like adding support while not breaking previous Python version will be tricky, as `from types import UnionType` only work for Python 3.10 and above. We can look at a PR if you want to try a contribution, but I don't think we will add this ourselves until Python 3.10 is more widely supported (PyTorch and TensorFlo... | 2023-05-03 10:49:29+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_basic', 'tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_with_string_literal_annotation', 'tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_with_literal', 'tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_parse_dict_extra_key', ... | ['tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_with_optional'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/utils/test_hf_argparser.py -rA --json-report --json-report-file=test_output.json | Feature | false | true | false | false | 2 | 0 | 2 | false | false | ["src/transformers/hf_argparser.py->module->class_definition:HfArgumentParser->function_definition:_parse_dataclass_field", "src/transformers/hf_argparser.py->module->class_definition:HfArgumentParser->function_definition:_add_dataclass_arguments"] |
huggingface/transformers | 23,141 | huggingface__transformers-23141 | ['23140'] | 78b7debf56efb907c6af767882162050d4fbb294 | diff --git a/src/transformers/models/whisper/modeling_whisper.py b/src/transformers/models/whisper/modeling_whisper.py
--- a/src/transformers/models/whisper/modeling_whisper.py
+++ b/src/transformers/models/whisper/modeling_whisper.py
@@ -1562,6 +1562,7 @@ def generate(
generation_config.return_timestamps ... | diff --git a/tests/models/whisper/test_modeling_whisper.py b/tests/models/whisper/test_modeling_whisper.py
--- a/tests/models/whisper/test_modeling_whisper.py
+++ b/tests/models/whisper/test_modeling_whisper.py
@@ -414,6 +414,21 @@ def test_generate_fp16(self):
model.generate(input_features)
model.gen... | Whisper generation support for passing acronym to language arg
### System Info
- `transformers` version: 4.29.0.dev0
- Platform: macOS-13.0-arm64-arm-64bit
- Python version: 3.9.16
- Huggingface_hub version: 0.12.0
- Safetensors version: 0.2.8
- PyTorch version (GPU?): 1.13.1 (False)
- Tensorflow version (GPU?... | null | 2023-05-03 22:47:37+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_group_beam_search_generate', 'tests/models/whisper/test_modeling_whisper.py:WhisperEncoderModelTest:test_sample_generate', 'tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_headmasking', 'tests/models/whisper/test_modeling_whisper.... | ['tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_generate_language'] | null | pytest -v --tb=short --show-capture=no --json-report-file=test-results.json /testbed/tests/models/whisper/test_modeling_whisper.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/models/whisper/modeling_whisper.py->module->class_definition:WhisperForConditionalGeneration->function_definition:generate"] |
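The fix lets Whisper's `generate` accept either a full language name or its two-letter code for the `language` argument. A toy sketch of that lookup follows; the mapping shown is a tiny illustrative subset, the real tables live in `tokenization_whisper.py`:

```python
# Illustrative subset of the name-to-code mapping.
TO_LANGUAGE_CODE = {"english": "en", "french": "fr", "german": "de"}

def language_to_token(language: str) -> str:
    """Accept a full language name ("English") or its acronym ("en")
    and return a Whisper-style language token. Sketch of the fixed
    behaviour, not the actual implementation."""
    key = language.lower()
    if key in TO_LANGUAGE_CODE:              # full name
        code = TO_LANGUAGE_CODE[key]
    elif key in TO_LANGUAGE_CODE.values():   # already an acronym
        code = key
    else:
        raise ValueError(f"Unsupported language: {language!r}")
    return f"<|{code}|>"

print(language_to_token("English"))  # <|en|>
print(language_to_token("en"))       # <|en|>
```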
huggingface/transformers | 23,223 | huggingface__transformers-23223 | ['22175'] | 9088fcae82f4e23021e600966626188ce6fbe6df | diff --git a/src/transformers/feature_extraction_sequence_utils.py b/src/transformers/feature_extraction_sequence_utils.py
--- a/src/transformers/feature_extraction_sequence_utils.py
+++ b/src/transformers/feature_extraction_sequence_utils.py
@@ -140,7 +140,7 @@ def pad(
return_attention_mask if return_att... | diff --git a/tests/models/wav2vec2/test_feature_extraction_wav2vec2.py b/tests/models/wav2vec2/test_feature_extraction_wav2vec2.py
--- a/tests/models/wav2vec2/test_feature_extraction_wav2vec2.py
+++ b/tests/models/wav2vec2/test_feature_extraction_wav2vec2.py
@@ -123,6 +123,14 @@ def test_call(self):
for enc_se... | wav2vec processor batching logic is too restrictive
### System Info
transformers version at the time of writing is `4.26.1`
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (suc... | cc @sanchit-gandhi @ArthurZucker
Hey @LWprogramming! Thanks for the comprehensive issue description - I agree that the logic for checking if the input `is_batched` is broken when the input is a batched numpy array, e.g. the feature extractor **should** set `is_batched=True` when the numpy array is 2-d, but currently d... | 2023-05-09 03:36:11+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/wav2vec2/test_tokenization_wav2vec2.py:Wav2Vec2CTCTokenizerTest:test_maximum_encoding_length_pair_input', 'tests/models/wav2vec2/test_tokenization_wav2vec2.py:Wav2Vec2CTCTokenizerTest:test_training_new_tokenizer', 'tests/models/wav2vec2/test_tokenization_wav2vec2.py:Wav2Vec2CTCTokenizerTest:test_right_an... | ['tests/models/wav2vec2/test_feature_extraction_wav2vec2.py:Wav2Vec2FeatureExtractionTest:test_call', 'tests/models/wav2vec2/test_tokenization_wav2vec2.py:Wav2Vec2TokenizerTest:test_call'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/models/wav2vec2/test_feature_extraction_wav2vec2.py /testbed/tests/models/wav2vec2/test_tokenization_wav2vec2.py --junitxml=test-results.xml | Bug Fix | false | true | false | false | 3 | 0 | 3 | false | false | ["src/transformers/models/wav2vec2/feature_extraction_wav2vec2.py->module->class_definition:Wav2Vec2FeatureExtractor->function_definition:__call__", "src/transformers/models/wav2vec2/tokenization_wav2vec2.py->module->class_definition:Wav2Vec2Tokenizer->function_definition:__call__", "src/transformers/feature_extraction... |
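The underlying complaint is that the batching check treated only lists-of-lists as batches and missed 2-D numpy arrays. A plain-Python sketch of the broadened check, with a stand-in class emulating an array's `ndim` attribute (illustrative, not the library code):

```python
class FakeArray:
    """Stand-in exposing numpy's `ndim` attribute."""
    def __init__(self, ndim):
        self.ndim = ndim

def is_batched(speech) -> bool:
    # 2-D array-likes (e.g. an array of shape (batch, samples)) are a
    # batch; this is the case the old check missed.
    if getattr(speech, "ndim", None) == 2:
        return True
    # A list/tuple whose first element is itself a sequence or 1-D array.
    if isinstance(speech, (list, tuple)) and speech:
        first = speech[0]
        return isinstance(first, (list, tuple)) or getattr(first, "ndim", None) == 1
    return False

print(is_batched(FakeArray(2)))              # True
print(is_batched([[0.1, 0.2], [0.3, 0.4]]))  # True
print(is_batched([0.1, 0.2, 0.3]))           # False
```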
huggingface/transformers | 23,796 | huggingface__transformers-23796 | ['23764'] | de9255de27abfcae4a1f816b904915f0b1e23cd9 | diff --git a/src/transformers/models/whisper/tokenization_whisper.py b/src/transformers/models/whisper/tokenization_whisper.py
--- a/src/transformers/models/whisper/tokenization_whisper.py
+++ b/src/transformers/models/whisper/tokenization_whisper.py
@@ -721,7 +721,7 @@ def _decode_asr(self, model_outputs, *, return_ti... | diff --git a/tests/models/whisper/test_tokenization_whisper.py b/tests/models/whisper/test_tokenization_whisper.py
--- a/tests/models/whisper/test_tokenization_whisper.py
+++ b/tests/models/whisper/test_tokenization_whisper.py
@@ -213,6 +213,16 @@ def test_skip_special_tokens_skips_prompt_ids(self):
rust_t... | Whisper `get_prompt_ids` throws error when used with a 'FastTokenizer'
### System Info
- `transformers` version: 4.30.0.dev0
- Platform: macOS-13.0-arm64-arm-64bit
- Python version: 3.9.16
- Huggingface_hub version: 0.12.0
- Safetensors version: 0.2.8
- PyTorch version (GPU?): 1.13.1 (False)
- Tensorflow versi... | Related issue #17391 mentions that `add_prefix_space` can only be specified for fast tokenizers upon init, so it seems like just the manual `" " + text` replacement for this param would be the appropriate fix.
Hey! Thanks for reporting. Indeed I think you can easily fix this for a single model (in the fast tokenizer yo... | 2023-05-26 14:20:42+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/whisper/test_tokenization_whisper.py:WhisperTokenizerTest:test_padding_different_model_input_name', 'tests/models/whisper/test_tokenization_whisper.py:WhisperTokenizerTest:test_added_token_serializable', 'tests/models/whisper/test_tokenization_whisper.py:WhisperTokenizerTest:test_sentencepiece_tokenize_a... | ['tests/models/whisper/test_tokenization_whisper.py:WhisperTokenizerTest:test_fast_tokenizer_get_prompt_ids'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/models/whisper/test_tokenization_whisper.py --junitxml=test-results.xml | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["src/transformers/models/whisper/tokenization_whisper.py->module->class_definition:WhisperTokenizer->function_definition:get_prompt_ids", "src/transformers/models/whisper/tokenization_whisper_fast.py->module->class_definition:WhisperTokenizerFast->function_definition:get_prompt_ids"] |
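As the hints note, fast tokenizers only accept `add_prefix_space` at init time, so the fix prepends the space to the prompt text manually instead of passing the kwarg. A one-line sketch of that behaviour (illustrative name):

```python
def prompt_with_prefix_space(prompt: str) -> str:
    """Prepend the leading space a slow tokenizer would add via
    `add_prefix_space=True`; this works with fast tokenizers too."""
    return " " + prompt.strip()

print(repr(prompt_with_prefix_space("Mr. Quilter")))  # ' Mr. Quilter'
```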
huggingface/transformers | 24,238 | huggingface__transformers-24238 | ['24104'] | d7389cd20168052e5fc7abe0cf31cd1eb960fbc9 | diff --git a/src/transformers/generation/configuration_utils.py b/src/transformers/generation/configuration_utils.py
--- a/src/transformers/generation/configuration_utils.py
+++ b/src/transformers/generation/configuration_utils.py
@@ -288,7 +288,8 @@ def __init__(self, **kwargs):
# Additional attributes with... | diff --git a/tests/generation/test_configuration_utils.py b/tests/generation/test_configuration_utils.py
--- a/tests/generation/test_configuration_utils.py
+++ b/tests/generation/test_configuration_utils.py
@@ -93,6 +93,31 @@ def test_initialize_new_kwargs(self):
generation_config = GenerationConfig.from_model... | Error when overriding generation config: GenerationConfig() got multiple values for keyword argument 'num_beams'
### System Info
- `transformers` version: 4.30.0.dev0 (commit: 4aa13224a5bca560147a29c06b2e0597137caf3e)
- Platform: Linux-5.15.0-1013-oracle-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface... | Hey @Taytay 👋
Thank you for raising this issue! This is indeed a bug, I'll open a PR ASAP | 2023-06-13 11:16:39+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/generation/test_configuration_utils.py:GenerationConfigTest:test_save_load_config_1_foo_json', 'tests/generation/test_configuration_utils.py:GenerationConfigTest:test_update', 'tests/generation/test_configuration_utils.py:GenerationConfigTest:test_from_model_config', 'tests/generation/test_configuration_utils.p... | ['tests/generation/test_configuration_utils.py:GenerationConfigTest:test_kwarg_init'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/generation/test_configuration_utils.py --junitxml=test-results.xml | Bug Fix | false | false | false | true | 1 | 1 | 2 | false | false | ["src/transformers/generation/configuration_utils.py->module->class_definition:GenerationConfig->function_definition:from_dict", "src/transformers/generation/configuration_utils.py->module->class_definition:GenerationConfig->function_definition:__init__"] |
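The error comes from forwarding the same key through both a config dict and explicit kwargs, as in `cls(**config_dict, **kwargs)`. A minimal sketch of the merge-first fix (illustrative, not the actual `GenerationConfig` code):

```python
def build_config(config_dict, **kwargs):
    """Merging before the call avoids TypeError ("got multiple values
    for keyword argument") when a key such as num_beams appears in both
    sources; the explicit kwargs win, matching override semantics."""
    merged = {**config_dict, **kwargs}
    return merged

cfg = build_config({"num_beams": 1, "max_length": 20}, num_beams=5)
print(cfg)  # {'num_beams': 5, 'max_length': 20}
```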
huggingface/transformers | 25,636 | huggingface__transformers-25636 | ['25634'] | 021887682224daf29264f98c759a45e88c82e244 | diff --git a/src/transformers/models/gpt2/modeling_flax_gpt2.py b/src/transformers/models/gpt2/modeling_flax_gpt2.py
--- a/src/transformers/models/gpt2/modeling_flax_gpt2.py
+++ b/src/transformers/models/gpt2/modeling_flax_gpt2.py
@@ -753,7 +753,9 @@ def prepare_inputs_for_generation(self, input_ids, max_length, attent... | diff --git a/tests/models/gpt2/test_modeling_flax_gpt2.py b/tests/models/gpt2/test_modeling_flax_gpt2.py
--- a/tests/models/gpt2/test_modeling_flax_gpt2.py
+++ b/tests/models/gpt2/test_modeling_flax_gpt2.py
@@ -187,6 +187,26 @@ def check_use_cache_forward_with_attn_mask(self, model_class_name, config, input
di... | Problem caused by boolean attention mask in `pretrained_model.generate` of Flax GPT2
Hi!
I notice that the usage of a boolean attention mask in `pretrained_model.generate` of Flax GPT2 can cause an error. Here is a short, self-contained code block to showcase the problem; I also prepared a [colab notebook here](htt... | cc @sanchit-gandhi
Hey @liutianlin0121! Thanks for the comprehensive issue description! That's a good spot - we actually covert the `attention_mask` to `"i4"` dtype under-the-hood when we call the Flax module:
https://github.com/huggingface/transformers/blob/450a181d8b963b4e896be4aac701815aa554a6bb/src/transformers/m... | 2023-08-21 17:41:40+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/gpt2/test_modeling_flax_gpt2.py:FlaxGPT2ModelTest:test_model_outputs_equivalence', 'tests/models/gpt2/test_modeling_flax_gpt2.py:FlaxGPT2ModelTest:test_beam_search_generate_num_return_sequences', 'tests/models/gpt2/test_modeling_flax_gpt2.py:FlaxGPT2ModelTest:test_no_automatic_init', 'tests/models/gpt2/t... | ['tests/models/gpt2/test_modeling_flax_gpt2.py:FlaxGPT2ModelTest:test_bool_attention_mask_in_generation'] | null | pytest -v --tb=short /testbed/tests/models/gpt2/test_modeling_flax_gpt2.py -rA --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/models/gpt2/modeling_flax_gpt2.py->module->class_definition:FlaxGPT2LMHeadModel->function_definition:prepare_inputs_for_generation"] |
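Per the hints, the upstream fix casts the attention mask to `"i4"` inside the Flax module, since position ids are derived from the mask via a cumulative sum that assumes integer values. A plain-Python sketch of that derivation (illustrative only):

```python
def position_ids_from_mask(mask):
    """Cast a possibly boolean mask to integers, then compute
    cumsum(mask) - 1 to derive position ids. Sketch of why the
    integer cast matters, not the Flax implementation."""
    ints = [int(v) for v in mask]  # True/False -> 1/0
    pos, running = [], 0
    for v in ints:
        running += v
        pos.append(running - 1)
    return pos

print(position_ids_from_mask([True, True, True, False]))  # [0, 1, 2, 2]
```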
huggingface/transformers | 25,765 | huggingface__transformers-25765 | ['23331'] | d0354e5e86842b757cec1ecb7de314a1f2421c1e | diff --git a/src/transformers/models/mega/modeling_mega.py b/src/transformers/models/mega/modeling_mega.py
--- a/src/transformers/models/mega/modeling_mega.py
+++ b/src/transformers/models/mega/modeling_mega.py
@@ -1542,6 +1542,9 @@ def forward(
else:
raise ValueError("You have to specify either i... | diff --git a/tests/models/mega/test_modeling_mega.py b/tests/models/mega/test_modeling_mega.py
--- a/tests/models/mega/test_modeling_mega.py
+++ b/tests/models/mega/test_modeling_mega.py
@@ -313,6 +313,34 @@ def create_and_check_decoder_model_past_large_inputs(
# test that outputs are equal for slice
... | RuntimeError: The size of tensor a (16) must match the size of tensor b (16000) at non-singleton dimension 2
### System Info
- `transformers` version: 4.30.0.dev0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- Py... | Hi @Tylersuard, thanks for reporting this issue.
So that we can best try and help you, could you update the notebook so that it contains the minimal logic to replicate the error and can be run out-of-the-box? As it stands, there's many blocks with comments; references to loading / processing data we don't have acce... | 2023-08-25 17:48:04+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/mega/test_modeling_mega.py:MegaModelTest:test_for_token_classification', 'tests/models/mega/test_modeling_mega.py:MegaModelTest:test_head_pruning_save_load_from_pretrained', 'tests/models/mega/test_modeling_mega.py:MegaModelTest:test_model_as_decoder', 'tests/models/mega/test_modeling_mega.py:MegaModelTe... | ['tests/models/mega/test_modeling_mega.py:MegaModelTest:test_decoder_model_with_chunking'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/models/mega/test_modeling_mega.py -rA --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/models/mega/modeling_mega.py->module->class_definition:MegaModel->function_definition:forward"] |
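The shape error (16 vs 16000) points at chunk-wise attention: inside `MegaModel.forward`, tensors are processed per chunk, so any mask must match the chunk length rather than the full sequence length. A generic chunking sketch in plain Python (illustrative only, not the Mega implementation):

```python
def split_into_chunks(seq, chunk_size):
    """Split a sequence into fixed-size chunks; per-chunk tensors must
    then be paired with masks of length chunk_size, not len(seq)."""
    return [seq[i:i + chunk_size] for i in range(0, len(seq), chunk_size)]

chunks = split_into_chunks(list(range(64)), 16)
print(len(chunks), len(chunks[0]))  # 4 16
```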
huggingface/transformers | 25,884 | huggingface__transformers-25884 | ['25804'] | 716bb2e3910fd4872064c55b0d8bc3dad754d129 | diff --git a/src/transformers/pipelines/base.py b/src/transformers/pipelines/base.py
--- a/src/transformers/pipelines/base.py
+++ b/src/transformers/pipelines/base.py
@@ -872,6 +872,9 @@ def save_pretrained(self, save_directory: str, safe_serialization: bool = False)
if self.feature_extractor is not None:
... | diff --git a/tests/pipelines/test_pipelines_image_segmentation.py b/tests/pipelines/test_pipelines_image_segmentation.py
--- a/tests/pipelines/test_pipelines_image_segmentation.py
+++ b/tests/pipelines/test_pipelines_image_segmentation.py
@@ -13,6 +13,7 @@
# limitations under the License.
import hashlib
+import tem... | OSError: /home/datascience/huggingface does not appear to have a file named preprocessor_config.json. Checkout 'https://huggingface.co//home/datascience/huggingface/None' for available files.
### System Info
import transformers
transformers.__version__
'4.31.0'
### Who can help?
_No response_
### Inform... | Hey! Thanks for reporting! Yep I thing we should make sure the `image_processor`is also saved! Would you like to open a PR? 🤗 | 2023-08-31 07:29:21+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/pipelines/test_pipelines_image_segmentation.py:ImageSegmentationPipelineTests:test_small_model_pt_no_panoptic', 'tests/pipelines/test_pipelines_image_segmentation.py:ImageSegmentationPipelineTests:test_small_model_pt', 'tests/pipelines/test_pipelines_image_segmentation.py:ImageSegmentationPipelineTests:test_sma... | ['tests/pipelines/test_pipelines_image_segmentation.py:ImageSegmentationPipelineTests:test_save_load'] | null | pytest -v --tb=short /testbed/tests/pipelines/test_pipelines_image_segmentation.py -rA --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/pipelines/base.py->module->class_definition:Pipeline->function_definition:save_pretrained"] |
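The patched `save_pretrained` above boils down to one extra guarded call. A minimal sketch with mock components (the class names and file layout here are illustrative, not the real transformers internals):

```python
import os
import tempfile

class MockComponent:
    """Stands in for a tokenizer, feature extractor, or image processor."""
    def __init__(self, filename):
        self.filename = filename

    def save_pretrained(self, save_directory):
        # Each component persists its own config file.
        with open(os.path.join(save_directory, self.filename), "w") as f:
            f.write("{}")

class MockPipeline:
    def __init__(self, tokenizer=None, feature_extractor=None, image_processor=None):
        self.tokenizer = tokenizer
        self.feature_extractor = feature_extractor
        self.image_processor = image_processor

    def save_pretrained(self, save_directory):
        os.makedirs(save_directory, exist_ok=True)
        if self.tokenizer is not None:
            self.tokenizer.save_pretrained(save_directory)
        if self.feature_extractor is not None:
            self.feature_extractor.save_pretrained(save_directory)
        # The gist of the fix: the image processor must be saved too,
        # otherwise reloading fails with a missing preprocessor_config.json.
        if self.image_processor is not None:
            self.image_processor.save_pretrained(save_directory)

pipe = MockPipeline(image_processor=MockComponent("preprocessor_config.json"))
with tempfile.TemporaryDirectory() as d:
    pipe.save_pretrained(d)
    print(sorted(os.listdir(d)))  # ['preprocessor_config.json']
```

Before the fix, only the feature-extractor branch existed, so pipelines that carry an `image_processor` wrote no `preprocessor_config.json` and could not be reloaded from disk.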
huggingface/transformers | 26,164 | huggingface__transformers-26164 | ['25422'] | 7c63e6fc8c34dcf8b0121eaee776f41ccf3b1137 | diff --git a/src/transformers/models/whisper/modeling_whisper.py b/src/transformers/models/whisper/modeling_whisper.py
--- a/src/transformers/models/whisper/modeling_whisper.py
+++ b/src/transformers/models/whisper/modeling_whisper.py
@@ -1719,13 +1719,22 @@ def generate(
decoder_start_token_id, *text_prom... | diff --git a/tests/models/whisper/test_modeling_whisper.py b/tests/models/whisper/test_modeling_whisper.py
--- a/tests/models/whisper/test_modeling_whisper.py
+++ b/tests/models/whisper/test_modeling_whisper.py
@@ -1075,6 +1075,29 @@ def test_generate_with_prompt_ids_and_forced_decoder_ids(self):
for row in ou... | Whisper Prompting max_new_tokens
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.15.109+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU... | Hi @Helene-Maxcici! Thanks for writing this issue; there’s definitely an out-of-bounds issue here.
Appreciate you catching the precedence issue that the slicing doesn’t quite match OpenAI’s; we should change that in the fix PR so it’s slicing one less than half the max_length instead of one more than half. Ultimate... | 2023-09-14 14:02:14+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_model_is_small', 'tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_contrastive_generate_low_memory', 'tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_group_beam_search_generate', 'tests/models/whisper/test_model... | ['tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_generate_with_prompt_ids_max_length'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/models/whisper/test_modeling_whisper.py -rA --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/models/whisper/modeling_whisper.py->module->class_definition:WhisperForConditionalGeneration->function_definition:generate"] |
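The overflow behind the Whisper prompting bug is plain arithmetic: prompt tokens are prepended to the decoder input, so they consume part of the positional-embedding budget. A hedged sketch (the helper name is made up; 448 is Whisper's decoder context length):

```python
def check_decoder_budget(num_prompt_tokens, max_new_tokens, max_target_positions=448):
    """Raise when prompt tokens plus requested new tokens would overflow
    the decoder's learned positional embeddings."""
    total = num_prompt_tokens + max_new_tokens
    if total > max_target_positions:
        raise ValueError(
            f"requested decoder length {total} exceeds "
            f"max_target_positions={max_target_positions}; "
            "reduce max_new_tokens or shorten the prompt"
        )
    return max_target_positions - total  # remaining headroom

print(check_decoder_budget(100, 300))  # 48
```

Without such a check, generation indexes past the embedding table and crashes (or silently truncates), which is the failure mode the issue reports.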
huggingface/transformers | 26,568 | huggingface__transformers-26568 | ['26566', '26566'] | bd6205919aad4d3a2300a39a98a642f1cc3a5348 | diff --git a/src/transformers/models/swin2sr/configuration_swin2sr.py b/src/transformers/models/swin2sr/configuration_swin2sr.py
--- a/src/transformers/models/swin2sr/configuration_swin2sr.py
+++ b/src/transformers/models/swin2sr/configuration_swin2sr.py
@@ -44,6 +44,8 @@ class Swin2SRConfig(PretrainedConfig):
... | diff --git a/tests/models/swin2sr/test_modeling_swin2sr.py b/tests/models/swin2sr/test_modeling_swin2sr.py
--- a/tests/models/swin2sr/test_modeling_swin2sr.py
+++ b/tests/models/swin2sr/test_modeling_swin2sr.py
@@ -46,6 +46,7 @@ def __init__(
image_size=32,
patch_size=1,
num_channels=3,
+ ... | SWIN2SR: Allow to choose number of in_channels and out_channels
### Feature request
I'd like to be able to specify a different number of output and input channels for the Swin2sr superresolution model. The current [SWIN2SR](https://github.com/huggingface/transformers/blob/v4.33.3/src/transformers/models/swin2sr/mode... | 2023-10-03 16:27:03+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/swin2sr/test_modeling_swin2sr.py:Swin2SRModelTest:test_head_pruning_save_load_from_pretrained', 'tests/models/swin2sr/test_modeling_swin2sr.py:Swin2SRModelTest:test_headmasking', 'tests/models/swin2sr/test_modeling_swin2sr.py:Swin2SRModelTest:test_can_use_safetensors', 'tests/models/swin2sr/test_modeling... | ['tests/models/swin2sr/test_modeling_swin2sr.py:Swin2SRModelTest:test_model_for_image_super_resolution'] | null | pytest -v --tb=short /testbed/tests/models/swin2sr/test_modeling_swin2sr.py -rA --junitxml=test-results.xml | Feature | false | false | true | false | 0 | 8 | 8 | false | false | ["src/transformers/models/swin2sr/modeling_swin2sr.py->module->class_definition:UpsampleOneStep", "src/transformers/models/swin2sr/modeling_swin2sr.py->module->class_definition:Swin2SRModel->function_definition:__init__", "src/transformers/models/swin2sr/modeling_swin2sr.py->module->class_definition:Swin2SRForImageSupe... | |
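The backward-compatible shape of the new option can be sketched with a mock config (not the real `Swin2SRConfig`): `num_channels_out` simply falls back to `num_channels` when unset, so older checkpoints keep their behavior:

```python
class MockSwin2SRConfig:
    """Sketch of the added num_channels_out field with a safe default."""
    def __init__(self, num_channels=3, num_channels_out=None):
        self.num_channels = num_channels
        self.num_channels_out = (
            num_channels_out if num_channels_out is not None else num_channels
        )

print(MockSwin2SRConfig().num_channels_out)                    # 3
print(MockSwin2SRConfig(num_channels=4).num_channels_out)      # 4
print(MockSwin2SRConfig(num_channels=4, num_channels_out=1).num_channels_out)  # 1
```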
huggingface/transformers | 26,752 | huggingface__transformers-26752 | ['25271'] | 3bc65505fc0801e3d9ff741ec725fb0cb4d863d6 | diff --git a/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py b/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py
--- a/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py
+++ b/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py
@@ -620,6 +620,8 @@ d... | diff --git a/tests/models/encoder_decoder/test_modeling_encoder_decoder.py b/tests/models/encoder_decoder/test_modeling_encoder_decoder.py
--- a/tests/models/encoder_decoder/test_modeling_encoder_decoder.py
+++ b/tests/models/encoder_decoder/test_modeling_encoder_decoder.py
@@ -17,8 +17,8 @@
import tempfile
import un... | EncoderDecoder does not automatically create decoder_attention_mask to match decoder_input_ids
### System Info
```
- `transformers` version: 4.31.0
- Platform: Linux-4.15.0-192-generic-x86_64-with-glibc2.27
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate versi... | somewhat related, it seems like in the notebook, the `decoder_input_ids` nor the `labels` are shifted; Patrick claims it's because:
> `"labels"` are shifted automatically to the left for language modeling training.
but I don't see any evidence of this in the implementation. Was this behavior changed at some point? ... | 2023-10-12 08:20:35+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/encoder_decoder/test_modeling_encoder_decoder.py:BertGenerationEncoderDecoderModelTest:test_encoder_decoder_model', 'tests/models/encoder_decoder/test_modeling_encoder_decoder.py:RoBertaEncoderDecoderModelTest:test_encoder_decoder_model_generate', 'tests/models/encoder_decoder/test_modeling_encoder_decod... | ['tests/models/encoder_decoder/test_modeling_encoder_decoder.py:BertEncoderDecoderModelTest:test_bert2bert_default_decoder_attention_mask'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/models/encoder_decoder/test_modeling_encoder_decoder.py -rA --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/models/encoder_decoder/modeling_encoder_decoder.py->module->class_definition:EncoderDecoderModel->function_definition:forward"] |
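The default the fix introduces, building `decoder_attention_mask` from the shifted `decoder_input_ids` when the caller passes none, can be sketched with plain lists (helper name hypothetical):

```python
def default_decoder_attention_mask(decoder_input_ids, pad_token_id):
    """Mask out exactly the positions that hold padding, instead of
    silently attending to everything."""
    return [
        [0 if token == pad_token_id else 1 for token in row]
        for row in decoder_input_ids
    ]

batch = [[101, 7, 8, 0, 0], [101, 9, 0, 0, 0]]
print(default_decoder_attention_mask(batch, pad_token_id=0))
# [[1, 1, 1, 0, 0], [1, 1, 0, 0, 0]]
```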
huggingface/transformers | 26,839 | huggingface__transformers-26839 | ['26428'] | d7cb5e138ec1ccc848a554574b1a89f0dfaf0e90 | diff --git a/src/transformers/models/idefics/modeling_idefics.py b/src/transformers/models/idefics/modeling_idefics.py
--- a/src/transformers/models/idefics/modeling_idefics.py
+++ b/src/transformers/models/idefics/modeling_idefics.py
@@ -875,16 +875,20 @@ def forward(
attention_mask: Optional[torch.Tensor] = ... | diff --git a/tests/models/idefics/test_modeling_idefics.py b/tests/models/idefics/test_modeling_idefics.py
--- a/tests/models/idefics/test_modeling_idefics.py
+++ b/tests/models/idefics/test_modeling_idefics.py
@@ -71,6 +71,7 @@ def __init__(
type_vocab_size=16,
type_sequence_label_size=2,
in... | IDEFICS Cross Attention: Text tokens appearing before images still attend to image embeddings
### System Info
- `transformers` version: 4.33.1
- Platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.31
- Python version: 3.9.18
- Huggingface_hub version: 0.17.1
- Safetensors version: 0.3.3
- Accelerate version: 0.2... | What do you think @leot13 @VictorSanh ?
Thank you for noticing! It's not easy to detect. We are aware but did training this way. In practice that means the few first tokens with no image are attending to every image instead of none of them, so there's a small information leak.
To fix this, we could apply the image_att... | 2023-10-16 14:26:33+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git git-lfs && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
RUN git lfs install
WORKDIR /testbed
# Install system dependencies
RUN apt-get update... | ['tests/models/idefics/test_modeling_idefics.py:IdeficsForVisionText2TextTest:test_training', 'tests/models/idefics/test_modeling_idefics.py:IdeficsModelTest:test_config', 'tests/models/idefics/test_modeling_idefics.py:IdeficsForVisionText2TextTest:test_resize_embeddings_untied', 'tests/models/idefics/test_modeling_ide... | ['tests/models/idefics/test_modeling_idefics.py:IdeficsForVisionText2TextTest:test_cross_attention_gates', 'tests/models/idefics/test_modeling_idefics.py:IdeficsModelTest:test_cross_attention_gates'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/models/idefics/test_modeling_idefics.py -rA --junitxml=test-results.xml | Bug Fix | false | true | false | false | 3 | 0 | 3 | false | false | ["src/transformers/models/idefics/modeling_idefics.py->module->class_definition:IdeficsModel->function_definition:forward->function_definition:vblock", "src/transformers/models/idefics/modeling_idefics.py->module->class_definition:IdeficsModel->function_definition:forward", "src/transformers/models/idefics/modeling_ide... |
huggingface/transformers | 27,114 | huggingface__transformers-27114 | ['27050'] | 7e9f10ac94c626780cf9e17485e73aec2c644bf2 | diff --git a/src/transformers/modeling_attn_mask_utils.py b/src/transformers/modeling_attn_mask_utils.py
--- a/src/transformers/modeling_attn_mask_utils.py
+++ b/src/transformers/modeling_attn_mask_utils.py
@@ -11,11 +11,13 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the Licens... | diff --git a/tests/test_modeling_utils.py b/tests/test_modeling_utils.py
--- a/tests/test_modeling_utils.py
+++ b/tests/test_modeling_utils.py
@@ -1266,6 +1266,9 @@ def check_to_4d(self, mask_converter, q_len, kv_len, additional_mask=None, bsz=3
assert mask_4d.shape == (bsz, 1, q_len, kv_len)
+ # ma... | Difference in LlamaAttention & LlamaFlashAttention2 attn_output
### System Info
- `transformers` version: 4.34.1
- Platform: Linux-5.15.0-86-generic-x86_64-with-glibc2.31
- Python version: 3.11.5
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.23.0
- Accelerate config: n... | Hey, I think this is related to flash attention version, could you have a look at #26697?
We are currently using `flash-attn==2.3.2`. There was a minor version release of flash attention literally yesterday.
The problem persists with `flash-attn==2.3.3`.
Are you able to reproduce on your end with the supplied sc... | 2023-10-27 16:19:01+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
... | ['tests/test_modeling_utils.py:ModelUtilsTest:test_shard_checkpoint', 'tests/test_modeling_utils.py:AttentionMaskTester:test_causal_mask_sliding', 'tests/test_modeling_utils.py:ModelUtilsTest:test_unexpected_keys_warnings', 'tests/test_modeling_utils.py:ModelUtilsTest:test_no_super_init_config_and_model', 'tests/test_m... | ['tests/test_modeling_utils.py:AttentionMaskTester:test_2d_to_4d_causal', 'tests/test_modeling_utils.py:AttentionMaskTester:test_2d_to_4d_causal_sliding'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/test_modeling_utils.py -rA --junitxml=test-results.xml | Bug Fix | false | false | false | true | 1 | 1 | 2 | false | false | ["src/transformers/modeling_attn_mask_utils.py->module->class_definition:AttentionMaskConverter", "src/transformers/modeling_attn_mask_utils.py->module->class_definition:AttentionMaskConverter->function_definition:to_4d"] |
huggingface/transformers | 27,463 | huggingface__transformers-27463 | ['27361'] | 3cefac1d974db5e2825a0cb2b842883a628be7a0 | diff --git a/docs/source/en/model_doc/sam.md b/docs/source/en/model_doc/sam.md
--- a/docs/source/en/model_doc/sam.md
+++ b/docs/source/en/model_doc/sam.md
@@ -66,6 +66,34 @@ masks = processor.image_processor.post_process_masks(
scores = outputs.iou_scores
```
+You can also process your own masks alongside the input... | diff --git a/tests/models/sam/test_processor_sam.py b/tests/models/sam/test_processor_sam.py
--- a/tests/models/sam/test_processor_sam.py
+++ b/tests/models/sam/test_processor_sam.py
@@ -58,13 +58,18 @@ def prepare_image_inputs(self):
"""This function prepares a list of PIL images, or a list of numpy arrays if... | Add how to preprocess mask for finetuning with SAM
### Feature request
The [SAM image processor](https://github.com/huggingface/transformers/blob/main/src/transformers/models/sam/image_processing_sam.py) takes images as input and resizes them so that the longest edge is 1024 (using default values). This is the size ex... | Hi @rwood-97, thanks for raising this issue!
Agreed - being able to pass in the masks to the image processor would be ideal! Feel free to ping me on a PR for review if you'd like to open one :) | 2023-11-13 11:52:42+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/sam/test_processor_sam.py:TFSamProcessorTest:test_post_process_masks', 'tests/models/sam/test_processor_sam.py:SamProcessorEquivalenceTest:test_post_process_masks_equivalence', 'tests/models/sam/test_processor_sam.py:TFSamProcessorTest:test_save_load_pretrained_additional_features', 'tests/models/sam/tes... | ['tests/models/sam/test_processor_sam.py:SamProcessorTest:test_image_processor_with_masks'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/models/sam/test_processor_sam.py -rA --junitxml=test-results.xml | Feature | false | false | false | true | 5 | 2 | 7 | false | false | ["src/transformers/models/sam/image_processing_sam.py->module->class_definition:SamImageProcessor", "src/transformers/models/sam/image_processing_sam.py->module->class_definition:SamImageProcessor->function_definition:_preprocess_mask", "src/transformers/models/sam/image_processing_sam.py->module->class_definition:SamI... |
huggingface/transformers | 27,663 | huggingface__transformers-27663 | ['27381'] | 45b70384a7d6692a8304f34a981a5ff020918b82 | diff --git a/src/transformers/models/detr/image_processing_detr.py b/src/transformers/models/detr/image_processing_detr.py
--- a/src/transformers/models/detr/image_processing_detr.py
+++ b/src/transformers/models/detr/image_processing_detr.py
@@ -82,6 +82,7 @@
SUPPORTED_ANNOTATION_FORMATS = (AnnotationFormat.COCO_DETE... | diff --git a/tests/models/yolos/test_image_processing_yolos.py b/tests/models/yolos/test_image_processing_yolos.py
--- a/tests/models/yolos/test_image_processing_yolos.py
+++ b/tests/models/yolos/test_image_processing_yolos.py
@@ -86,18 +86,28 @@ def get_expected_values(self, image_inputs, batched=False):
if n... | `YolosImageProcessor` violates `longest_edge` constraint for certain images
### System Info
- `transformers` version: 4.35.0
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: not installed
- Accelerat... | Hi @xenova, thanks for reporting!
Looking into it 🕵️♀️ | 2023-11-22 20:44:08+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/yolos/test_image_processing_yolos.py:YolosImageProcessingTest:test_image_processor_from_and_save_pretrained', 'tests/models/yolos/test_image_processing_yolos.py:YolosImageProcessingTest:test_equivalence_padding', 'tests/models/yolos/test_image_processing_yolos.py:YolosImageProcessingTest:test_init_withou... | ['tests/models/yolos/test_image_processing_yolos.py:YolosImageProcessingTest:test_call_numpy_4_channels', 'tests/models/yolos/test_image_processing_yolos.py:YolosImageProcessingTest:test_resize_max_size_respected', 'tests/models/yolos/test_image_processing_yolos.py:YolosImageProcessingTest:test_call_pil', 'tests/models... | null | pytest -v --tb=short /testbed/tests/models/yolos/test_image_processing_yolos.py -rA --junitxml=test-results.xml | Bug Fix | true | false | false | false | 0 | 0 | 0 | false | false | ["src/transformers/models/yolos/image_processing_yolos.py->module->function_definition:get_size_with_aspect_ratio"] |
huggingface/transformers | 27,717 | huggingface__transformers-27717 | ['26497'] | ef5ab72f4b538d6f9ea032ac307b75b40ceef42e | diff --git a/src/transformers/convert_slow_tokenizer.py b/src/transformers/convert_slow_tokenizer.py
--- a/src/transformers/convert_slow_tokenizer.py
+++ b/src/transformers/convert_slow_tokenizer.py
@@ -800,8 +800,6 @@ def vocab(self, proto):
("<unk>", 0.0),
]
vocab += [(piece.piece, piec... | diff --git a/tests/models/nllb/test_tokenization_nllb.py b/tests/models/nllb/test_tokenization_nllb.py
--- a/tests/models/nllb/test_tokenization_nllb.py
+++ b/tests/models/nllb/test_tokenization_nllb.py
@@ -24,6 +24,7 @@
NllbTokenizerFast,
is_torch_available,
)
+from transformers.models.nllb.tokenization_nll... | NllbTokenizer: optionally list language codes in the config, to enable updating it more smoothly
### Feature request
Currently, `NllbTokenizer` during initialization takes the list of language codes from a hardcoded constant FAIRSEQ_LANGUAGE_CODES.
I propose enable overriding this list with a field in the tokeniz... | WDYT @ArthurZucker?
Mmm I guess for now this can make sense, but think when refactoring NLLB, the FAIRSEQ_LANGUAGE_CODES will be the default of `additional_special_tokens` in the correct order, removing the need to change this. You can also already add language codes using `additional_special_tokens`
Thanks @ArthurZuck... | 2023-11-27 07:16:03+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/nllb/test_tokenization_nllb.py:NllbTokenizationTest:test_embeded_special_tokens', 'tests/models/nllb/test_tokenization_nllb.py:NllbTokenizationTest:test_num_special_tokens_to_add_equal', 'tests/models/nllb/test_tokenization_nllb.py:NllbTokenizationTest:test_tokenizers_special_tokens_properties_unset_1', ... | ['tests/models/nllb/test_tokenization_nllb.py:NllbTokenizationTest:test_new_language_codes'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/models/nllb/test_tokenization_nllb.py -rA --junitxml=test-results.xml | Feature | false | false | false | true | 11 | 4 | 15 | false | false | ["src/transformers/convert_slow_tokenizer.py->module->class_definition:NllbConverter->function_definition:vocab", "src/transformers/models/nllb/tokenization_nllb_fast.py->module->class_definition:NllbTokenizerFast->function_definition:lang_code_to_id", "src/transformers/models/nllb/tokenization_nllb.py->module->class_d... |
huggingface/transformers | 28,071 | huggingface__transformers-28071 | ['26598'] | 43ee58588be4dc754c9f0dea874437fe7201bf00 | diff --git a/src/transformers/models/speecht5/modeling_speecht5.py b/src/transformers/models/speecht5/modeling_speecht5.py
--- a/src/transformers/models/speecht5/modeling_speecht5.py
+++ b/src/transformers/models/speecht5/modeling_speecht5.py
@@ -64,13 +64,17 @@ def shift_tokens_right(input_ids: torch.Tensor, pad_token... | diff --git a/tests/models/speecht5/test_modeling_speecht5.py b/tests/models/speecht5/test_modeling_speecht5.py
--- a/tests/models/speecht5/test_modeling_speecht5.py
+++ b/tests/models/speecht5/test_modeling_speecht5.py
@@ -909,6 +909,23 @@ def test_model_forward(self):
config_and_inputs = self.model_tester.pre... | [SpeechT5] Attention mask not changed according to decoder inputs
### System Info
- `transformers` version: 4.33.3
- Platform: Linux-5.15.0-84-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.3.3
- Accelerate version: not installed
- Accelerate conf... | cc @ylacombe could you take a look when you get the chance? You know SpeechT5 pretty well by now!
Hey, thanks for opening this issue!
I will take a look in the next few days, in the meantime, do you have a script to reproduce the mismatch @Joao-Maria-Janeiro ?
Hey @Joao-Maria-Janeiro , any update on a reproducing scri... | 2023-12-15 13:45:49+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/speecht5/test_modeling_speecht5.py:SpeechT5ForSpeechToTextTest:test_training', 'tests/models/speecht5/test_modeling_speecht5.py:SpeechT5ModelTest:test_tied_weights_keys', 'tests/models/speecht5/test_modeling_speecht5.py:SpeechT5ModelTest:test_inputs_embeds', 'tests/models/speecht5/test_modeling_speecht5.... | ['tests/models/speecht5/test_modeling_speecht5.py:SpeechT5ForSpeechToSpeechTest:test_model_forward_with_labels', 'tests/models/speecht5/test_modeling_speecht5.py:SpeechT5ForTextToSpeechTest:test_model_forward_with_labels'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/speecht5/test_modeling_speecht5.py | Bug Fix | false | true | false | false | 3 | 0 | 3 | false | false | ["src/transformers/models/speecht5/modeling_speecht5.py->module->class_definition:SpeechT5ForTextToSpeech->function_definition:forward", "src/transformers/models/speecht5/modeling_speecht5.py->module->class_definition:SpeechT5ForSpeechToSpeech->function_definition:forward", "src/transformers/models/speecht5/modeling_sp... |
huggingface/transformers | 28,115 | huggingface__transformers-28115 | ['28021'] | 71d47f0ad498b7649f11d3a9cca3cd3585e4341f | diff --git a/src/transformers/models/mixtral/configuration_mixtral.py b/src/transformers/models/mixtral/configuration_mixtral.py
--- a/src/transformers/models/mixtral/configuration_mixtral.py
+++ b/src/transformers/models/mixtral/configuration_mixtral.py
@@ -79,7 +79,7 @@ class MixtralConfig(PretrainedConfig):
... | diff --git a/tests/models/mixtral/test_modeling_mixtral.py b/tests/models/mixtral/test_modeling_mixtral.py
--- a/tests/models/mixtral/test_modeling_mixtral.py
+++ b/tests/models/mixtral/test_modeling_mixtral.py
@@ -469,6 +469,7 @@ def test_load_balancing_loss(self):
config, input_dict = self.model_tester.pre... | Incorrect router probability calculation
### System Info
transformers version 4.36.0
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)... | Sorry could you either show the issue or detail where you had a problem? The computation is different because the output shape are also different, the routing mecanism is also different. 🤗
Sure! @ArthurZucker
Here's the current loss function for convenience
```
def load_balancing_loss_func(gate_logits: torch.Te... | 2023-12-18 15:38:54+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/mixtral/test_modeling_mixtral.py:MixtralModelTest:test_beam_search_generate_dict_output', 'tests/models/mixtral/test_modeling_mixtral.py:MixtralModelTest:test_generate_with_head_masking', 'tests/models/mixtral/test_modeling_mixtral.py:MixtralModelTest:test_greedy_generate_dict_outputs_use_cache', 'tests/... | ['tests/models/mixtral/test_modeling_mixtral.py:MixtralModelTest:test_load_balancing_loss'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/mixtral/test_modeling_mixtral.py | Bug Fix | false | false | false | true | 1 | 2 | 3 | false | false | ["src/transformers/models/mixtral/configuration_mixtral.py->module->class_definition:MixtralConfig", "src/transformers/models/mixtral/configuration_mixtral.py->module->class_definition:MixtralConfig->function_definition:__init__", "src/transformers/models/mixtral/modeling_mixtral.py->module->function_definition:load_ba... |
huggingface/transformers | 28,398 | huggingface__transformers-28398 | ['23116'] | fff8ca8e597532f141bc3f522f47573320a06730 | diff --git a/src/transformers/models/oneformer/image_processing_oneformer.py b/src/transformers/models/oneformer/image_processing_oneformer.py
--- a/src/transformers/models/oneformer/image_processing_oneformer.py
+++ b/src/transformers/models/oneformer/image_processing_oneformer.py
@@ -15,11 +15,13 @@
"""Image process... | diff --git a/tests/models/oneformer/test_image_processing_oneformer.py b/tests/models/oneformer/test_image_processing_oneformer.py
--- a/tests/models/oneformer/test_image_processing_oneformer.py
+++ b/tests/models/oneformer/test_image_processing_oneformer.py
@@ -15,10 +15,11 @@
import json
+import os
+import tempf... | OneFormerImageProcessor does not support passing local config file, always tries to download from repo
### System Info
- `transformers` version: 4.29.0.dev0
- Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.10.10
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch ... | @rbavery Thanks for raising this issue.
I'm able to load a processor locally on the development branch without issue:
```python
from transformers import OneFormerProcessor
processor = OneFormerProcessor.from_pretrained('shi-labs/oneformer_ade20k_swin_tiny')
processor.save_pretrained('foo')
new_processor =... | 2024-01-08 16:33:29+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/oneformer/test_image_processing_oneformer.py:OneFormerImageProcessingTest:test_init_without_params', 'tests/models/oneformer/test_image_processing_oneformer.py:OneFormerImageProcessingTest:test_image_processor_to_json_file', 'tests/models/oneformer/test_image_processing_oneformer.py:OneFormerImageProcess... | ['tests/models/oneformer/test_image_processing_oneformer.py:OneFormerImageProcessingTest:test_can_load_with_local_metadata'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/oneformer/test_image_processing_oneformer.py | Bug Fix | false | false | false | true | 2 | 2 | 4 | false | false | ["src/transformers/models/oneformer/image_processing_oneformer.py->module->class_definition:OneFormerImageProcessor->function_definition:__init__", "src/transformers/models/oneformer/image_processing_oneformer.py->module->class_definition:OneFormerImageProcessor", "src/transformers/models/oneformer/image_processing_one... |
huggingface/transformers | 28,517 | huggingface__transformers-28517 | ['28505'] | edb170238febf7fc3e3278ed5b9ca0b2c40c70e3 | diff --git a/src/transformers/models/mixtral/modeling_mixtral.py b/src/transformers/models/mixtral/modeling_mixtral.py
--- a/src/transformers/models/mixtral/modeling_mixtral.py
+++ b/src/transformers/models/mixtral/modeling_mixtral.py
@@ -74,7 +74,9 @@
_CONFIG_FOR_DOC = "MixtralConfig"
-def load_balancing_loss_fun... | diff --git a/tests/models/mixtral/test_modeling_mixtral.py b/tests/models/mixtral/test_modeling_mixtral.py
--- a/tests/models/mixtral/test_modeling_mixtral.py
+++ b/tests/models/mixtral/test_modeling_mixtral.py
@@ -462,7 +462,6 @@ def test_load_balancing_loss(self):
r"""
Let's make sure we can actuall... | Exclude the load balancing loss of padding tokens in Mixtral-8x7B
### Feature request
The auxiliary loss in Mixtral-MoE shouldn't **include the loss from padding tokens**.
### Motivation
I think it is better to change the function
[load_balancing_loss_func](https://github.com/huggingface/transformers/blob/main/sr... | cc @ArthurZucker
feel free to open a PR for this! Otherwise will mark it as a good second issue 🤗
I would like to work on this issue, i will go through the linked file today and ask any questions i have.
I was looking at the code.
Below is what the model outputs
`return MoeModelOutputWithPast(
last_hi... | 2024-01-16 02:39:12+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/mixtral/test_modeling_mixtral.py:MixtralModelTest:test_beam_search_generate_dict_output', 'tests/models/mixtral/test_modeling_mixtral.py:MixtralModelTest:test_generate_with_head_masking', 'tests/models/mixtral/test_modeling_mixtral.py:MixtralModelTest:test_greedy_generate_dict_outputs_use_cache', 'tests/... | ['tests/models/mixtral/test_modeling_mixtral.py:MixtralModelTest:test_load_balancing_loss'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/mixtral/test_modeling_mixtral.py | Feature | false | true | false | false | 2 | 0 | 2 | false | false | ["src/transformers/models/mixtral/modeling_mixtral.py->module->function_definition:load_balancing_loss_func", "src/transformers/models/mixtral/modeling_mixtral.py->module->class_definition:MixtralForCausalLM->function_definition:forward"] |
huggingface/transformers | 28,535 | huggingface__transformers-28535 | ['28387'] | 07ae53e6e77ec6ff4fb25fbacfec4b11cfc82749 | diff --git a/src/transformers/models/esm/tokenization_esm.py b/src/transformers/models/esm/tokenization_esm.py
--- a/src/transformers/models/esm/tokenization_esm.py
+++ b/src/transformers/models/esm/tokenization_esm.py
@@ -14,10 +14,9 @@
# limitations under the License.
"""Tokenization classes for ESM."""
import os
... | diff --git a/tests/models/esm/test_tokenization_esm.py b/tests/models/esm/test_tokenization_esm.py
--- a/tests/models/esm/test_tokenization_esm.py
+++ b/tests/models/esm/test_tokenization_esm.py
@@ -87,3 +87,25 @@ def test_tokenize_special_tokens(self):
self.assertEqual(len(token_2), 1)
... | Issue with Adding New Tokens to ESM2 Model Tokenizer
Hello
I am encountering an issue while working with the ESM2 models (`facebook/esm2_t6_8M_UR50D`). Specifically, when I try to add new tokens to the tokenizer, they are automatically classified as special tokens, even though I am specifying `special_tokens=False`.... | Seems like a bug with ESMTokenizer, (which doesn't use this library).
@ArthurZucker for insights or the more relevant people ?
Hey, I cannot reproduce this:
```python
In [23]: model_checkpoint = "facebook/esm2_t6_8M_UR50D"
...: tokenizer_2 = AutoTokenizer.from_pretrained(model_checkpoint)
huggingface/token... | 2024-01-16 15:06:24+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/esm/test_tokenization_esm.py:ESMTokenizationTest:test_tokenize_special_tokens', 'tests/models/esm/test_tokenization_esm.py:ESMTokenizationTest:test_tokenizer_call_pad', 'tests/models/esm/test_tokenization_esm.py:ESMTokenizationTest:test_tokenizer_call_no_pad', 'tests/models/esm/test_tokenization_esm.py:E... | ['tests/models/esm/test_tokenization_esm.py:ESMTokenizationTest:test_add_tokens'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/esm/test_tokenization_esm.py | Bug Fix | false | false | false | true | 4 | 1 | 5 | false | false | ["src/transformers/models/esm/tokenization_esm.py->module->class_definition:EsmTokenizer", "src/transformers/models/esm/tokenization_esm.py->module->class_definition:EsmTokenizer->function_definition:get_vocab", "src/transformers/models/esm/tokenization_esm.py->module->class_definition:EsmTokenizer->function_definition... |
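The ESM record above makes tokens added with `special_tokens=False` behave like regular vocabulary entries. A toy sketch of the relevant bookkeeping, not the real `EsmTokenizer`, just the pattern of merging a base vocabulary with tokens added afterwards:

```python
class ToyTokenizer:
    def __init__(self, base_vocab):
        self.base_vocab = dict(base_vocab)
        self.added_tokens = {}  # token -> id, ids continue after the base vocab

    def add_tokens(self, tokens):
        """Register unseen tokens; return how many were actually added."""
        added = 0
        for tok in tokens:
            if tok not in self.base_vocab and tok not in self.added_tokens:
                self.added_tokens[tok] = len(self.base_vocab) + len(self.added_tokens)
                added += 1
        return added

    def get_vocab(self):
        # Merged view: base vocabulary first, then the added tokens.
        vocab = dict(self.base_vocab)
        vocab.update(self.added_tokens)
        return vocab
```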
huggingface/transformers | 28,563 | huggingface__transformers-28563 | ['28002'] | 2c1eebc1216549d8195d7d1c6adb8b99afee3ec5 | diff --git a/src/transformers/models/whisper/modeling_whisper.py b/src/transformers/models/whisper/modeling_whisper.py
--- a/src/transformers/models/whisper/modeling_whisper.py
+++ b/src/transformers/models/whisper/modeling_whisper.py
@@ -57,6 +57,8 @@
logger = logging.get_logger(__name__)
+_HIDDEN_STATES_START_PO... | diff --git a/tests/models/whisper/test_modeling_whisper.py b/tests/models/whisper/test_modeling_whisper.py
--- a/tests/models/whisper/test_modeling_whisper.py
+++ b/tests/models/whisper/test_modeling_whisper.py
@@ -2292,16 +2292,15 @@ def get_subsampled_output_lengths(self, input_lengths):
def encoder_seq_length(s... | Not handled case when use_weighted_layer_sum and return-dict=True in WhisperForAudioClassification
@sanchit-gandhi
I use the WhisperForAudioClassification task and want to use `use_weighted_layer_sum=True`, but there is a problem when calling forward:
the encoder part can return a tuple or a dict if `return_dict=True`, but the...
Hi @amyeroberts ,
Apologies for the delayed response! 🙏 Life threw a curveball, bu... | 2024-01-17 17:22:35+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_model_is_small', 'tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_contrastive_generate_low_memory', 'tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_group_beam_search_generate', 'tests/models/whisper/test_model... | ['tests/models/whisper/test_modeling_whisper.py:WhisperEncoderModelTest:test_forward_pass_weighted_layer_sum'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/whisper/test_modeling_whisper.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/models/whisper/modeling_whisper.py->module->class_definition:WhisperForAudioClassification->function_definition:forward"] |
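The Whisper bug above is the classic tuple-vs-`ModelOutput` mismatch: with `return_dict=True` the encoder output must be indexed by key, not by a hard-coded tuple position. A minimal, hypothetical sketch of the guard (the constant name mirrors the patch; everything else is illustrative):

```python
_HIDDEN_STATES_START_POSITION = 1

def extract_hidden_states(encoder_outputs, return_dict):
    """Fetch the all-layer hidden states from either output flavour."""
    if return_dict:
        return encoder_outputs["hidden_states"]            # mapping-style output
    return encoder_outputs[_HIDDEN_STATES_START_POSITION]  # plain tuple
```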
huggingface/transformers | 29,311 | huggingface__transformers-29311 | ['29243'] | b27aa206ddf3fe66b36db587603141b3d0379a82 | diff --git a/src/transformers/models/wav2vec2/tokenization_wav2vec2.py b/src/transformers/models/wav2vec2/tokenization_wav2vec2.py
--- a/src/transformers/models/wav2vec2/tokenization_wav2vec2.py
+++ b/src/transformers/models/wav2vec2/tokenization_wav2vec2.py
@@ -125,7 +125,6 @@ class Wav2Vec2CTCTokenizerOutput(ModelOut... | diff --git a/tests/models/wav2vec2/test_tokenization_wav2vec2.py b/tests/models/wav2vec2/test_tokenization_wav2vec2.py
--- a/tests/models/wav2vec2/test_tokenization_wav2vec2.py
+++ b/tests/models/wav2vec2/test_tokenization_wav2vec2.py
@@ -13,6 +13,7 @@
# See the License for the specific language governing permissions ... | `skip_special_tokens` for `Wav2Vec2CTCTokenizer` does not work expectedly.
### System Info
- `transformers` version: 4.37.2
- Platform: Linux-5.15.0-1042-nvidia-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.20.1
- Safetensors version: 0.4.2
- Accelerate version: 0.26.1
- Accelera... | it could / should but should also be left to the super class IMO!
Would you like to open a PR for a fix? I don't think that this is intended behaviour | 2024-02-27 06:22:32+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/wav2vec2/test_tokenization_wav2vec2.py:Wav2Vec2CTCTokenizerTest:test_maximum_encoding_length_pair_input', 'tests/models/wav2vec2/test_tokenization_wav2vec2.py:Wav2Vec2CTCTokenizerTest:test_right_and_left_truncation', 'tests/models/wav2vec2/test_tokenization_wav2vec2.py:Wav2Vec2CTCTokenizerTest:test_neste... | ['tests/models/wav2vec2/test_tokenization_wav2vec2.py:Wav2Vec2CTCTokenizerTest:test_tokenizer_decode_added_tokens', 'tests/models/wav2vec2/test_tokenization_wav2vec2.py:Wav2Vec2TokenizerTest:test_tokenizer_decode_added_tokens'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/wav2vec2/test_tokenization_wav2vec2.py | Bug Fix | false | false | false | true | 2 | 1 | 3 | false | false | ["src/transformers/models/wav2vec2/tokenization_wav2vec2.py->module->class_definition:Wav2Vec2CTCTokenizer->function_definition:_decode", "src/transformers/models/wav2vec2/tokenization_wav2vec2.py->module->class_definition:Wav2Vec2CTCTokenizer", "src/transformers/models/wav2vec2/tokenization_wav2vec2.py->module->class_... |
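For the Wav2Vec2 record above, the essence of the fix is that `skip_special_tokens` must actually filter special ids inside `_decode`. A stdlib-only, CTC-style sketch (collapse repeats, then optionally drop specials; the real tokenizer also handles word delimiters and offsets):

```python
from itertools import groupby

def ctc_decode(token_ids, id_to_token, special_ids, skip_special_tokens=False):
    collapsed = [k for k, _ in groupby(token_ids)]  # CTC: merge repeated ids
    if skip_special_tokens:
        collapsed = [i for i in collapsed if i not in special_ids]
    return "".join(id_to_token[i] for i in collapsed)
```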
huggingface/transformers | 29,449 | huggingface__transformers-29449 | ['28591'] | 17b06e2c6650de162e7954babf6224c1975c2852 | diff --git a/src/transformers/models/idefics/processing_idefics.py b/src/transformers/models/idefics/processing_idefics.py
--- a/src/transformers/models/idefics/processing_idefics.py
+++ b/src/transformers/models/idefics/processing_idefics.py
@@ -149,7 +149,7 @@ def __init__(self, image_processor, tokenizer=None, image... | diff --git a/tests/models/idefics/test_modeling_idefics.py b/tests/models/idefics/test_modeling_idefics.py
--- a/tests/models/idefics/test_modeling_idefics.py
+++ b/tests/models/idefics/test_modeling_idefics.py
@@ -656,7 +656,7 @@ def test_inference_natural_language_visual_reasoning(self):
"HuggingFaceM4/i... | Idefics - AttentionMasks wrongly set with padding='longest'
### System Info
transformers==4.36.2
### Reproduction
Reported by https://huggingface.co/VishnuSuganth
https://huggingface.co/HuggingFaceM4/idefics-9b-instruct/discussions/11
| Cc @ArthurZucker @younesbelkada
Might be a tokenization issue will have a look
Is anyone working on this issue? If not, would it be something a new contributor could look at?
I think the issue may be how `unpadded_seq_len` is calculated here: https://github.com/huggingface/transformers/blob/main/src/transformers/m... | 2024-03-05 04:48:47+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/idefics/test_modeling_idefics.py:IdeficsForVisionText2TextTest:test_training', 'tests/models/idefics/test_modeling_idefics.py:IdeficsModelTest:test_config', 'tests/models/idefics/test_modeling_idefics.py:IdeficsForVisionText2TextTest:test_resize_embeddings_untied', 'tests/models/idefics/test_modeling_ide... | ['tests/models/idefics/test_processor_idefics.py:IdeficsProcessorTest:test_tokenizer_left_padding', 'tests/models/idefics/test_processor_idefics.py:IdeficsProcessorTest:test_tokenizer_padding'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/idefics/test_modeling_idefics.py /testbed/tests/models/idefics/test_processor_idefics.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/models/idefics/processing_idefics.py->module->class_definition:IdeficsProcessor->function_definition:__call__"] |
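The Idefics record above is about masks going wrong under left padding with `padding='longest'`. A framework-free sketch of the safe pattern: derive each row's real-token span from the attention mask instead of assuming the sequence is right-padded (helper name is made up):

```python
def real_token_spans(attention_mask):
    """(start, end) of the non-padding span per row; robust to left padding."""
    spans = []
    for row in attention_mask:
        idx = [i for i, m in enumerate(row) if m]
        spans.append((idx[0], idx[-1] + 1) if idx else (0, 0))
    return spans
```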
huggingface/transformers | 29,519 | huggingface__transformers-29519 | ['29176'] | b338a6c3b8eda29610d4d472cad8cd87cbfdaaed | diff --git a/src/transformers/modeling_attn_mask_utils.py b/src/transformers/modeling_attn_mask_utils.py
--- a/src/transformers/modeling_attn_mask_utils.py
+++ b/src/transformers/modeling_attn_mask_utils.py
@@ -164,10 +164,10 @@ def _make_causal_mask(
# add lower triangular sliding window mask if necessary
... | diff --git a/tests/test_modeling_utils.py b/tests/test_modeling_utils.py
--- a/tests/test_modeling_utils.py
+++ b/tests/test_modeling_utils.py
@@ -1673,7 +1673,7 @@ def check_to_causal(self, mask_converter, q_len, kv_len, bsz=3):
def compute_num_context_mask(self, kv_len, context, q_len):
# This function ... | Sliding window inconsistency between PyTorch and Flax
### System Info
transformers main (ae49b218c), Python 3.10.8
### Who can help?
@ArthurZucker, @sanchit-gandhi
### Reproduction
The attention `sliding_window` has a different interpretation for PyTorch and Flax. Here are matching examples:
**PyTorch... | Hey! Pretty sure `MistralSdpaAttention` does not support sliding window yet! Are you using `attn_implementation="flash_attention_2"`?
@ArthurZucker I'm using the default implementation on the CPU, I've just checked to make sure and it's "eager". Initially I thought the issues may be in flash_attn, but you made me re... | 2024-03-07 15:56:14+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/test_modeling_utils.py:ModelUtilsTest:test_shard_checkpoint', 'tests/test_modeling_utils.py:ModelUtilsTest:test_unexpected_keys_warnings', 'tests/test_modeling_utils.py:ModelUtilsTest:test_no_super_init_config_and_model', 'tests/test_modeling_utils.py:AttentionMaskTester:test_2d_to_4d', 'tests/test_modeling_uti... | ['tests/test_modeling_utils.py:AttentionMaskTester:test_causal_mask_sliding', 'tests/test_modeling_utils.py:AttentionMaskTester:test_2d_to_4d_causal_sliding'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/test_modeling_utils.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/modeling_attn_mask_utils.py->module->class_definition:AttentionMaskConverter->function_definition:_make_causal_mask"] |
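The sliding-window record above is an off-by-one between backends. To make the ambiguity concrete, here is a toy boolean mask under one convention (each query attends to itself plus the previous `window - 1` keys); whether the current token counts toward the window is exactly the detail the linked fix pins down, so treat this as illustrative rather than the library's definition:

```python
def causal_sliding_mask(seq_len, window):
    """mask[i][j] is True when query position i may attend to key position j."""
    return [[0 <= i - j < window for j in range(seq_len)]
            for i in range(seq_len)]
```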
huggingface/transformers | 29,563 | huggingface__transformers-29563 | ['29514'] | 0290ec19c901adc0f1230ebdccad11c40af026f5 | diff --git a/src/transformers/models/mamba/modeling_mamba.py b/src/transformers/models/mamba/modeling_mamba.py
--- a/src/transformers/models/mamba/modeling_mamba.py
+++ b/src/transformers/models/mamba/modeling_mamba.py
@@ -211,7 +211,7 @@ def slow_forward(self, input_states, cache_params=None):
# 2. Convolut... | diff --git a/tests/models/mamba/test_modeling_mamba.py b/tests/models/mamba/test_modeling_mamba.py
--- a/tests/models/mamba/test_modeling_mamba.py
+++ b/tests/models/mamba/test_modeling_mamba.py
@@ -170,7 +170,7 @@ def create_and_check_mamba_model(self, config, input_ids, *args):
self.parent.assertEqual(result... | Cannot propagate gradients in Mamba
### System Info
- `transformers` version: 4.39.0.dev0
- Platform: macOS-14.2.1-arm64-arm-64bit
- Python version: 3.11.7
- Huggingface_hub version: 0.21.4
- Safetensors version: 0.4.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): ... | Hi @gsarti, thanks for reporting!
Looking at the error message, it's likely due to an in place operation in the model implementation. Would you like to open a PR to fix this?
Pretty sure ~setting `use_cache=False` fixes it, let me check~ let's fix it (It's only for the slow version, which I tried but not on CPU)! ... | 2024-03-09 22:35:02+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/mamba/test_modeling_mamba.py:MambaModelTest:test_sample_generate_dict_output', 'tests/models/mamba/test_modeling_mamba.py:MambaModelTest:test_model_is_small', 'tests/models/mamba/test_modeling_mamba.py:MambaModelTest:test_generate_with_head_masking', 'tests/models/mamba/test_modeling_mamba.py:MambaModelT... | ['tests/models/mamba/test_modeling_mamba.py:MambaModelTest:test_mamba_cached_slow_forward_and_backwards'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/mamba/test_modeling_mamba.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/models/mamba/modeling_mamba.py->module->class_definition:MambaMixer->function_definition:slow_forward"] |
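The Mamba record above is an in-place-mutation bug: updating a cached state in place invalidates values autograd still needs. A framework-free sketch contrasting the buggy and the safe pattern (the real fix clones a torch tensor before mutating it):

```python
import copy

def roll_in_place(state, new_value):
    """Buggy pattern: mutates the caller's cached list."""
    del state[0]
    state.append(new_value)
    return state

def roll_on_copy(state, new_value):
    """Safe pattern: copy first, then mutate only the copy."""
    state = copy.copy(state)
    del state[0]
    state.append(new_value)
    return state
```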
huggingface/transformers | 29,675 | huggingface__transformers-29675 | ['29665'] | 56b64bf1a51e29046bb3f8ca15839ff4d6a92c74 | diff --git a/src/transformers/generation/configuration_utils.py b/src/transformers/generation/configuration_utils.py
--- a/src/transformers/generation/configuration_utils.py
+++ b/src/transformers/generation/configuration_utils.py
@@ -652,7 +652,8 @@ def save_pretrained(
Additional key word arguments p... | diff --git a/tests/trainer/test_trainer_seq2seq.py b/tests/trainer/test_trainer_seq2seq.py
--- a/tests/trainer/test_trainer_seq2seq.py
+++ b/tests/trainer/test_trainer_seq2seq.py
@@ -181,3 +181,22 @@ def prepare_data(examples):
assert (
metrics["eval_samples"] == dataset_len * num_return_s... | GenerationConfig.from_pretrained raise ValueError after training, maybe raise it earlier?
### System Info
- `transformers` version: 4.38.2
- Platform: Linux-4.18.0-305.3.1.el8.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.13
- Huggingface_hub version: 0.21.4
- Safetensors version: 0.4.2
- Accelerate version... | Hi @YiqunChen1999 👋 Thank you for opening this issue
You're absolutely right, this was an oversight on our part -- we should fail as early as possible. I'm going to open a PR for it. | 2024-03-15 11:00:43+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | [] | ['tests/trainer/test_trainer_seq2seq.py:Seq2seqTrainerTester:test_bad_generation_config_fail_early'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/trainer/test_trainer_seq2seq.py | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["src/transformers/trainer_seq2seq.py->module->class_definition:Seq2SeqTrainer->function_definition:load_generation_config", "src/transformers/generation/configuration_utils.py->module->class_definition:GenerationConfig->function_definition:save_pretrained"] |
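The trainer record above is a fail-fast change: validate the generation config up front rather than discovering the problem when saving after training. A hypothetical, much-simplified validator; the flag names are borrowed from `GenerationConfig`, but the rule shown is illustrative, not the library's full validation:

```python
def validate_generation_config(cfg):
    """Raise early on contradictory sampling settings."""
    errors = []
    if not cfg.get("do_sample", False):
        for flag in ("temperature", "top_p"):
            if cfg.get(flag) is not None:
                errors.append(f"`{flag}` is set but `do_sample` is False")
    if errors:
        raise ValueError("; ".join(errors))
```

Called once when the trainer loads the config, a check like this aborts on a bad combination before any training step runs.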
huggingface/transformers | 29,688 | huggingface__transformers-29688 | ['29685'] | f4dc26d46687f5f4baf3fe64a1d87cafefbeec53 | diff --git a/src/transformers/models/whisper/generation_whisper.py b/src/transformers/models/whisper/generation_whisper.py
--- a/src/transformers/models/whisper/generation_whisper.py
+++ b/src/transformers/models/whisper/generation_whisper.py
@@ -262,7 +262,7 @@ def generate(
synced_gpus: bool = False,
... | diff --git a/tests/models/whisper/test_modeling_whisper.py b/tests/models/whisper/test_modeling_whisper.py
--- a/tests/models/whisper/test_modeling_whisper.py
+++ b/tests/models/whisper/test_modeling_whisper.py
@@ -545,10 +545,19 @@ def test_generate_language(self):
# test language code
model.genera... | Support mixed-language batches in `WhisperGenerationMixin`
### Feature request
It is currently not possible to mix multiple languages in a single batch when running [Whisper](https://huggingface.co/docs/transformers/en/model_doc/whisper). The `language` argument only accepts a single string (as opposed to a separate... | null | 2024-03-16 10:17:27+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_model_is_small', 'tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_contrastive_generate_low_memory', 'tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_group_beam_search_generate', 'tests/models/whisper/test_model... | ['tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_generate_language'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/whisper/test_modeling_whisper.py | Feature | false | true | false | false | 5 | 0 | 5 | false | false | ["src/transformers/models/whisper/generation_whisper.py->module->class_definition:WhisperGenerationMixin->function_definition:_prepare_decoder_input_ids", "src/transformers/models/whisper/generation_whisper.py->module->class_definition:WhisperGenerationMixin->function_definition:_retrieve_init_tokens->function_definiti... |
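The feature record above lets Whisper's `language` argument be a list with one entry per batch element. A sketch of the broadcasting and validation step; the language-token ids below are placeholders, not Whisper's real vocabulary:

```python
LANG_TO_TOKEN = {"en": 50259, "fr": 50265}  # placeholder ids

def languages_per_sample(language, batch_size):
    if isinstance(language, str) or language is None:
        langs = [language or "en"] * batch_size  # broadcast a single language
    else:
        if len(language) != batch_size:
            raise ValueError("need exactly one language per batch element")
        langs = list(language)
    return [LANG_TO_TOKEN[lang] for lang in langs]
```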
huggingface/transformers | 30,556 | huggingface__transformers-30556 | ['30521'] | a3aabc702e1c49243e7b48f22d88362d50e786c5 | diff --git a/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py b/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py
--- a/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py
+++ b/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py
@@ -122,7 +12... | diff --git a/tests/trainer/test_data_collator.py b/tests/trainer/test_data_collator.py
--- a/tests/trainer/test_data_collator.py
+++ b/tests/trainer/test_data_collator.py
@@ -23,6 +23,7 @@
BertTokenizer,
DataCollatorForLanguageModeling,
DataCollatorForPermutationLanguageModeling,
+ DataCollatorForSeq2... | [BUG] DataCollatorForSeq2Seq with PaddingStrategy.MAX_LENGTH may not pad labels
It seems that when padding, if the MAX_LENGTH strategy is set, the same padding is not applied to the labels.
test case below:
```python
from transformers import DataCollatorForSeq2Seq
from transformers.utils import PaddingStrategy
... | Thanks for raising this issue! Yea, that seems like a valid bug imo. The padding strategy isn't respected with `max_length`.
I'd change these lines:
https://github.com/huggingface/transformers/blob/73014b561d5f88d728e46a57d346f516fefe3f2d/src/transformers/data/data_collator.py#L591-L592
to something like:
```pyth... | 2024-04-29 21:36:29+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/trainer/test_data_collator.py:NumpyDataCollatorIntegrationTest:test_data_collator_for_language_modeling', 'tests/trainer/test_data_collator.py:DataCollatorIntegrationTest:test_default_with_no_labels', 'tests/trainer/test_data_collator.py:NumpyDataCollatorIntegrationTest:test_default_with_no_labels', 'tests/trai... | ['tests/trainer/test_data_collator.py:DataCollatorIntegrationTest:test_data_collator_for_seq2seq_with_pt', 'tests/trainer/test_data_collator.py:NumpyDataCollatorIntegrationTest:test_data_collator_for_seq2seq', 'tests/trainer/test_data_collator.py:DataCollatorIntegrationTest:test_data_collator_for_seq2seq_with_lists'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/trainer/test_data_collator.py | Bug Fix | false | false | false | true | 1 | 1 | 2 | false | false | ["examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py->module->class_definition:ModelArguments", "src/transformers/data/data_collator.py->module->class_definition:DataCollatorForSeq2Seq->function_definition:__call__"] |
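For the collator record above, the core of the fix: when the `max_length` strategy is chosen, labels must be padded out to `max_length` (and any `pad_to_multiple_of`), not merely to the longest label in the batch. A stripped-down sketch of that decision, separate from the real `DataCollatorForSeq2Seq`:

```python
def pad_labels(batch_labels, padding, max_length=None,
               pad_to_multiple_of=None, label_pad_token_id=-100):
    if padding == "max_length" and max_length is not None:
        target = max_length
    else:  # "longest": pad only to the longest label in the batch
        target = max(len(labels) for labels in batch_labels)
    if pad_to_multiple_of is not None:
        target = -(-target // pad_to_multiple_of) * pad_to_multiple_of  # ceil
    return [labels + [label_pad_token_id] * (target - len(labels))
            for labels in batch_labels]
```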
huggingface/transformers | 30,602 | huggingface__transformers-30602 | ['30601'] | c681b58b06f6fb8b5c331f380548af3b4b33f881 | diff --git a/src/transformers/modeling_utils.py b/src/transformers/modeling_utils.py
--- a/src/transformers/modeling_utils.py
+++ b/src/transformers/modeling_utils.py
@@ -3263,8 +3263,8 @@ def from_pretrained(
)
else:
raise EnvironmentError(
- ... | diff --git a/tests/test_modeling_utils.py b/tests/test_modeling_utils.py
--- a/tests/test_modeling_utils.py
+++ b/tests/test_modeling_utils.py
@@ -1001,6 +1001,26 @@ def test_use_safetensors(self):
self.assertTrue(any(f.endswith("safetensors") for f in all_downloaded_files))
self.assertFalse(a... | `model.safetensors` missing in model file not found error in default case
### System Info
System info isn't super relevant here since the confusion is really just an error message string. I just reproduced in a CPU instance but this is applicable whenever model loading is needed.
- `transformers` version: 4.4... | null | 2024-05-01 19:16:26+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/test_modeling_utils.py:ModelUtilsTest:test_safetensors_torch_from_torch_sharded', 'tests/test_modeling_utils.py:ModelUtilsTest:test_unexpected_keys_warnings', 'tests/test_modeling_utils.py:AttentionMaskTester:test_torch_compile_fullgraph', 'tests/test_modeling_utils.py:ModelUtilsTest:test_tied_weights_reload', ... | ['tests/test_modeling_utils.py:ModelUtilsTest:test_use_safetensors'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/test_modeling_utils.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/modeling_utils.py->module->class_definition:PreTrainedModel->function_definition:from_pretrained"] |
huggingface/transformers | 30,899 | huggingface__transformers-30899 | ['30892'] | 481a95781404e48b1c80940be17e8279dec82fe8 | diff --git a/src/transformers/generation/utils.py b/src/transformers/generation/utils.py
--- a/src/transformers/generation/utils.py
+++ b/src/transformers/generation/utils.py
@@ -1354,6 +1354,23 @@ def _get_static_cache(self, max_batch_size: int, max_cache_len: int) -> StaticCa
self._static_cache.reset() ... | diff --git a/tests/generation/test_utils.py b/tests/generation/test_utils.py
--- a/tests/generation/test_utils.py
+++ b/tests/generation/test_utils.py
@@ -65,6 +65,7 @@
GenerateBeamEncoderDecoderOutput,
GenerateDecoderOnlyOutput,
GenerateEncoderDecoderOutput,
+ GenerationConfig,
... | transformers 4.41.0 breaks generate() for T5
### System Info
- `transformers` version: 4.41.0
- Platform: Linux-5.15.0-1033-aws-x86_64-with-glibc2.31
- Python version: 3.10.9
- Huggingface_hub version: 0.23.0
- Safetensors version: 0.4.3
- Accelerate version: 0.30.0
- Accelerate config: not found
- PyTorch v... | null | 2024-05-19 13:18:57+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/generation/test_utils.py:GenerationIntegrationTests:test_generated_length_assisted_generation', 'tests/generation/test_utils.py:GenerationIntegrationTests:test_assisted_decoding_num_assistant_tokens_heuristic_schedule', 'tests/generation/test_utils.py:GenerationIntegrationTests:test_custom_stopping_criteria', '... | ['tests/generation/test_utils.py:GenerationIntegrationTests:test_decoder_start_id_from_config'] | null | pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_output.json /testbed/tests/generation/test_utils.py | Bug Fix | false | false | false | true | 2 | 1 | 3 | false | false | ["src/transformers/generation/utils.py->module->class_definition:GenerationMixin->function_definition:_prepare_special_tokens", "src/transformers/generation/utils.py->module->class_definition:GenerationMixin", "src/transformers/generation/utils.py->module->class_definition:GenerationMixin->function_definition:_get_deco... |
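The T5 regression above involved where `generate()` finds its decoder start token. A toy version of the fallback order, illustrative only; the real `_get_decoder_start_token_id` also consults the model config:

```python
def decoder_start_id(generation_config):
    """Prefer an explicit decoder_start_token_id, else fall back to bos."""
    start = generation_config.get("decoder_start_token_id")
    if start is not None:
        return start
    return generation_config.get("bos_token_id")
```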
huggingface/transformers | 30,934 | huggingface__transformers-30934 | ['30922'] | a755745546779ae5c42510bc02a859bdac82b3b7 | diff --git a/src/transformers/image_transforms.py b/src/transformers/image_transforms.py
--- a/src/transformers/image_transforms.py
+++ b/src/transformers/image_transforms.py
@@ -14,6 +14,7 @@
# limitations under the License.
import warnings
+from math import ceil
from typing import Iterable, List, Optional, Tuple... | diff --git a/tests/test_image_transforms.py b/tests/test_image_transforms.py
--- a/tests/test_image_transforms.py
+++ b/tests/test_image_transforms.py
@@ -369,6 +369,10 @@ def test_center_crop(self):
self.assertEqual(cropped_image.shape, (300, 260, 3))
self.assertTrue(np.allclose(cropped_image, expect... | `center_crop` outputs wrong sized array if provided with odd-numbered dimensions smaller than requested crop size
### System Info
transformers 4.40.1, python 3.12
### Who can help?
@amyeroberts
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially support... | I believe the issue is more accurately caused by odd-numbered difference between original size and new size. Rounding up rather than down when calculating the padding fixes the above test cases. | 2024-05-21 10:22:57+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/test_image_transforms.py:ImageTransformsTester:test_flip_channel_order', 'tests/test_image_transforms.py:ImageTransformsTester:test_get_resize_output_image_size', 'tests/test_image_transforms.py:ImageTransformsTester:test_resize', 'tests/test_image_transforms.py:ImageTransformsTester:test_to_pil_image_5_numpy_u... | ['tests/test_image_transforms.py:ImageTransformsTester:test_center_crop'] | null | pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_output.json /testbed/tests/test_image_transforms.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/image_transforms.py->module->function_definition:center_crop"] |
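The `center_crop` record above is a rounding bug in the padding arithmetic: with an odd size difference, rounding both sides down leaves the padded canvas one pixel short of the requested crop. The patch imports `ceil`; a minimal sketch of the corrected split (helper name is made up):

```python
from math import ceil

def pad_split(orig, target):
    """How much to pad before/after so orig grows to at least target."""
    diff = max(target - orig, 0)
    before = ceil(diff / 2)  # round *up* on one side for odd differences
    return before, diff - before
```

With this split, `orig + before + after == target` whenever the crop is larger than the image.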
huggingface/transformers | 30,964 | huggingface__transformers-30964 | ['29625'] | 6739e1d261f80caec34b8c8ac7a030907a4f75a2 | diff --git a/src/transformers/models/llama/tokenization_llama_fast.py b/src/transformers/models/llama/tokenization_llama_fast.py
--- a/src/transformers/models/llama/tokenization_llama_fast.py
+++ b/src/transformers/models/llama/tokenization_llama_fast.py
@@ -163,6 +163,7 @@ def __init__(
add_bos_token=add_... | diff --git a/tests/models/llama/test_tokenization_llama.py b/tests/models/llama/test_tokenization_llama.py
--- a/tests/models/llama/test_tokenization_llama.py
+++ b/tests/models/llama/test_tokenization_llama.py
@@ -602,6 +602,10 @@ def test_special_token_special_word(self):
self.assertEqual(decoded_tokens, "he... | `add_prefix_space` won't be respected by Llama tokenizer
### System Info
- `transformers` version: 4.38.2
- Platform: Linux-6.5.0-14-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.21.3
- Safetensors version: 0.4.2
- Accelerate version: 0.27.2
- Accelerate config: not fo... | Hey, I took a peek under the hood and looks like setting `add_prefix_true` is only changing `kwargs[slow]=True` (in [tokenization_llama_fast.py](https://github.com/huggingface/transformers/blob/5011908e10d9592eeb634f4940e0bc130d3edc69/src/transformers/models/llama/tokenization_llama_fast.py#L127C9-L132C1). The `super()... | 2024-05-22 13:01:20+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/llama/test_tokenization_llama.py:LlamaTokenizationTest:test_offsets_mapping', 'tests/models/llama/test_tokenization_llama.py:LlamaTokenizationTest:test_number_of_added_tokens', 'tests/models/llama/test_tokenization_llama.py:LlamaTokenizationTest:test_mask_output', 'tests/models/llama/test_tokenization_ll... | ['tests/models/llama/test_tokenization_llama.py:LlamaIntegrationTest:test_no_prefix_space'] | null | pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_output.json /testbed/tests/models/llama/test_tokenization_llama.py | Bug Fix | false | false | true | false | 0 | 1 | 1 | false | true | ["src/transformers/models/llama/tokenization_llama_fast.py->module->class_definition:LlamaTokenizerFast->function_definition:__init__"] |
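For the Llama tokenizer record, `add_prefix_space` toggles whether a word-boundary marker is prepended before tokenization. A toy imitation of the sentencepiece-style pre-tokenization step; the real behaviour lives in the `tokenizers` backend, this only shows what the flag controls:

```python
def pretokenize(text, add_prefix_space=True):
    """Mark word boundaries with '▁'; optionally mark the very first word."""
    marked = text.replace(" ", "▁")
    if add_prefix_space:
        marked = "▁" + marked
    return marked
```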

repo: huggingface/transformers | pull_number: 31,217 | instance_id: huggingface__transformers-31217 | issue_numbers: ['31216'] | base_commit: c73ee1333dc4dc63a71cb6180d0f35fdf4b44958
patch:
diff --git a/src/transformers/pipelines/visual_question_answering.py b/src/transformers/pipelines/visual_question_answering.py
--- a/src/transformers/pipelines/visual_question_answering.py
+++ b/src/transformers/pipelines/visual_question_answering.py
@@ -1,4 +1,4 @@
-from typing import Union
+from typing import List, U...
test_patch:
diff --git a/tests/pipelines/test_pipelines_visual_question_answering.py b/tests/pipelines/test_pipelines_visual_question_answering.py
--- a/tests/pipelines/test_pipelines_visual_question_answering.py
+++ b/tests/pipelines/test_pipelines_visual_question_answering.py
@@ -14,6 +14,8 @@
import unittest
+from datasets...
problem_statement:
[pipeline] VQA pipeline does not accept list as input
### System Info
- `transformers` version: 4.42.0.dev0
- Platform: Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.23.0
- Safetensors version: 0.4.3
- Accelerate version: not installed
- A...
hints_text: null
created_at: 2024-06-03 23:53:41+00:00
language: Python
Dockerfile:
# Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ...
P2P: ['tests/pipelines/test_pipelines_visual_question_answering.py:VisualQuestionAnsweringPipelineTests:test_small_model_pt']
F2P: ['tests/pipelines/test_pipelines_visual_question_answering.py:VisualQuestionAnsweringPipelineTests:test_small_model_pt_image_list', 'tests/pipelines/test_pipelines_visual_question_answering.py:VisualQuestionAnsweringPipelineTests:test_small_model_pt_both_list', 'tests/pipelines/test_pipelines_visual_question_answering....
F2F: null
test_command: pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_output.json /testbed/tests/pipelines/test_pipelines_visual_question_answering.py
task_category: Bug Fix
is_no_nodes: false | is_func_only: true | is_class_only: false | is_mixed: false | num_func_changes: 2 | num_class_changes: 0 | num_nodes: 2 | is_single_func: false | is_single_class: false
modified_nodes: ["src/transformers/pipelines/visual_question_answering.py->module->class_definition:VisualQuestionAnsweringPipeline->function_definition:__call__", "src/transformers/pipelines/visual_question_answering.py->module->class_definition:VisualQuestionAnsweringPipeline->function_definition:preprocess"]
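The record above describes a VQA pipeline that should broadcast a string or a list on either the image or the question side. A minimal, self-contained sketch of that pairing logic (illustrative only, not the transformers implementation):

```python
def build_inputs(image, question):
    # str/str -> one item; a list on either side fans out; two lists zip.
    if isinstance(image, str) and isinstance(question, str):
        return [{"image": image, "question": question}]
    if isinstance(image, list) and isinstance(question, str):
        return [{"image": img, "question": question} for img in image]
    if isinstance(image, str) and isinstance(question, list):
        return [{"image": image, "question": q} for q in question]
    return [{"image": img, "question": q} for img, q in zip(image, question)]


assert build_inputs("a.png", "what?") == [{"image": "a.png", "question": "what?"}]
assert len(build_inputs(["a.png", "b.png"], "what?")) == 2
assert build_inputs(["a.png"], ["q1"])[0] == {"image": "a.png", "question": "q1"}
```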

repo: huggingface/transformers | pull_number: 31,448 | instance_id: huggingface__transformers-31448 | issue_numbers: ['31435'] | base_commit: cd71f9381b86b0dc1fd60e8b87fb5bade35aa0cd
patch:
diff --git a/src/transformers/generation/stopping_criteria.py b/src/transformers/generation/stopping_criteria.py
--- a/src/transformers/generation/stopping_criteria.py
+++ b/src/transformers/generation/stopping_criteria.py
@@ -372,10 +372,11 @@ def _stop_string_create_embedding_vec(token_list, token_indices, stop_strin...
test_patch:
diff --git a/tests/generation/test_stopping_criteria.py b/tests/generation/test_stopping_criteria.py
--- a/tests/generation/test_stopping_criteria.py
+++ b/tests/generation/test_stopping_criteria.py
@@ -208,6 +208,24 @@ def test_stop_string_embedding_vecs(self):
token_lengths = embedding_vec[:, 2].tolist()
 ...
problem_statement:
`stop_strings` Argument in `model.generate()` Results in Exception if Generation Completes Without `stop_string` Being Generated
### System Info
`transformers==4.41.2`
### Who can help?
@gante any thoughts here?
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Task...
hints_text:
Might be a duplicate of https://github.com/huggingface/transformers/issues/31435
It looks like this line sets the `tokenizer` to `None` automatically, creates a related but not identical issue.
https://github.com/huggingface/transformers/blob/eed9ed67987/src/transformers/generation/utils.py#L1643
@ahmed-moubtahij...
created_at: 2024-06-17 13:14:50+00:00
language: Python
Dockerfile:
# Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ...
P2P: ['tests/generation/test_stopping_criteria.py:StoppingCriteriaTestCase:test_max_time_criteria', 'tests/generation/test_stopping_criteria.py:StoppingCriteriaTestCase:test_criterias_per_row', 'tests/generation/test_stopping_criteria.py:StoppingCriteriaTestCase:test_stop_string_criteria', 'tests/generation/test_stopping_cr...
F2P: ['tests/generation/test_stopping_criteria.py:StoppingCriteriaTestCase:test_single_letter_stop_string']
F2F: null
test_command: pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_output.json /testbed/tests/generation/test_stopping_criteria.py
task_category: Bug Fix
is_no_nodes: false | is_func_only: true | is_class_only: false | is_mixed: false | num_func_changes: 1 | num_class_changes: 0 | num_nodes: 1 | is_single_func: true | is_single_class: false
modified_nodes: ["src/transformers/generation/stopping_criteria.py->module->class_definition:StopStringCriteria->function_definition:_stop_string_create_embedding_vec"]

repo: huggingface/transformers | pull_number: 31,646 | instance_id: huggingface__transformers-31646 | issue_numbers: ['31642'] | base_commit: 1f9f57ab4c8c30964360a2ba697c339f6d31f03f
patch:
diff --git a/src/transformers/models/encodec/modeling_encodec.py b/src/transformers/models/encodec/modeling_encodec.py
--- a/src/transformers/models/encodec/modeling_encodec.py
+++ b/src/transformers/models/encodec/modeling_encodec.py
@@ -729,7 +729,7 @@ def decode(
            Whether or not to return a [`~utils....
test_patch:
diff --git a/tests/models/encodec/test_modeling_encodec.py b/tests/models/encodec/test_modeling_encodec.py
--- a/tests/models/encodec/test_modeling_encodec.py
+++ b/tests/models/encodec/test_modeling_encodec.py
@@ -19,7 +19,6 @@
import os
import tempfile
import unittest
-from typing import Dict, List, Tuple
 impor...
problem_statement:
return_dict in encodec is always set to True:
### System Info
- `transformers` version: 4.42.0.dev0
- Platform: Linux-5.4.0-166-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.23.3
- Safetensors version: 0.4.2
- Accelerate version: 0.29.1
- Accelerate config: not found...
hints_text:
https://github.com/huggingface/transformers/blob/dfaadfdcda8d2c2f564c94121d4618309c1ecdd5/src/transformers/models/encodec/modeling_encodec.py#L789
@kamilakesbi
by default self.config.return_dict is true so the or condition is always maintained and the function returns a dict.
created_at: 2024-06-26 18:49:53+00:00
language: Python
Dockerfile:
# Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ...
P2P: ['tests/models/encodec/test_modeling_encodec.py:EncodecModelTest:test_forward_signature', 'tests/models/encodec/test_modeling_encodec.py:EncodecModelTest:test_config', 'tests/models/encodec/test_modeling_encodec.py:EncodecModelTest:test_from_pretrained_no_checkpoint', 'tests/models/encodec/test_modeling_encodec.py:Enco...
F2P: ['tests/models/encodec/test_modeling_encodec.py:EncodecModelTest:test_model_outputs_equivalence']
F2F: null
test_command: python -m pytest /testbed/tests/models/encodec/test_modeling_encodec.py --json-report --json-report-file=test_output.json -v
task_category: Bug Fix
is_no_nodes: false | is_func_only: true | is_class_only: false | is_mixed: false | num_func_changes: 2 | num_class_changes: 0 | num_nodes: 2 | is_single_func: false | is_single_class: false
modified_nodes: ["src/transformers/models/encodec/modeling_encodec.py->module->class_definition:EncodecModel->function_definition:forward", "src/transformers/models/encodec/modeling_encodec.py->module->class_definition:EncodecModel->function_definition:decode"]
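The hint in the record above pins the encodec bug on `return_dict = return_dict or self.config.return_dict`: when the config default is `True`, the `or` can never yield `False`. A minimal sketch of the pitfall and the usual `is not None` fix (function names here are illustrative, not from the transformers codebase):

```python
def pick_flag_buggy(override, config_default):
    # `or` falls back whenever `override` is falsy, so an explicit
    # False override is silently replaced by the config default.
    return override or config_default


def pick_flag_fixed(override, config_default):
    # Only fall back when no override was supplied at all.
    return override if override is not None else config_default


assert pick_flag_buggy(False, True) is True   # explicit False is lost
assert pick_flag_fixed(False, True) is False  # explicit False is respected
assert pick_flag_fixed(None, True) is True    # absence still uses the default
```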

repo: langchain-ai/langchain | pull_number: 3,367 | instance_id: langchain-ai__langchain-3367 | issue_numbers: ['3365'] | base_commit: 3a1bdce3f51e302d468807e980455d676c0f5fd6
patch:
diff --git a/langchain/agents/mrkl/output_parser.py b/langchain/agents/mrkl/output_parser.py
--- a/langchain/agents/mrkl/output_parser.py
+++ b/langchain/agents/mrkl/output_parser.py
@@ -18,7 +18,9 @@ def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
{"output": text.split(FINAL_ANSWER_ACTI... | diff --git a/tests/unit_tests/agents/test_mrkl.py b/tests/unit_tests/agents/test_mrkl.py
--- a/tests/unit_tests/agents/test_mrkl.py
+++ b/tests/unit_tests/agents/test_mrkl.py
@@ -50,6 +50,27 @@ def test_get_action_and_input_newline() -> None:
assert action_input == "```\nimport unittest\n\nunittest.main()\n```"
 ...
problem_statement:
Terminal tool gives `ValueError: Could not parse LLM output:` when there is a new line before action string.
While playing with the LLaMA models I noticed that a parse exception was thrown even though the output looked good.
### Screenshot
 based on langchain.
A few months ago, I used it with fine-tuned (FT) models.
We added a token usage counter later, and I haven't tried fine-tuned models again since then.
Recently we ...
hints_text: null
created_at: 2023-05-02 22:52:00+00:00
language: Python
Dockerfile:
FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies and C++ build tools
RUN apt-get update && apt-get install -y \
git \
build-essential \
g++ \
cmake \
    && rm -rf /var/lib/apt...
P2P: ['tests/unit_tests/callbacks/test_openai_info.py:None:test_on_llm_end']
F2P: ['tests/unit_tests/callbacks/test_openai_info.py:None:test_on_llm_end_custom_model']
F2F: null
test_command: pytest /testbed/tests/unit_tests/callbacks/test_openai_info.py -v --json-report
task_category: Bug Fix
is_no_nodes: false | is_func_only: true | is_class_only: false | is_mixed: false | num_func_changes: 3 | num_class_changes: 0 | num_nodes: 3 | is_single_func: false | is_single_class: false
modified_nodes: ["langchain/callbacks/openai_info.py->module->function_definition:get_openai_model_cost_per_1k_tokens", "langchain/callbacks/openai_info.py->module->function_definition:get_openai_token_cost_for_model", "langchain/callbacks/openai_info.py->module->class_definition:OpenAICallbackHandler->function_definition:on_llm_end"]
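The issue title in the record above concerns parse failures when the model emits a newline between `Action:` and its value. A hedged sketch of a whitespace-tolerant pattern (an illustration, not LangChain's actual regex):

```python
import re

# DOTALL plus \s* lets the action value sit on the line after "Action:".
pattern = re.compile(r"Action\s*:\s*(.*?)\s*Action\s*Input\s*:\s*(.*)", re.DOTALL)

output = "I should run the command.\nAction:\nterminal\nAction Input: ls -la"
match = pattern.search(output)
assert match is not None
assert match.group(1) == "terminal"
assert match.group(2) == "ls -la"
```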

repo: langchain-ai/langchain | pull_number: 4,103 | instance_id: langchain-ai__langchain-4103 | issue_numbers: ['4087'] | base_commit: 624554a43a1ab0113f3d79ebcbc9e726faecb339
patch:
diff --git a/langchain/document_loaders/csv_loader.py b/langchain/document_loaders/csv_loader.py
--- a/langchain/document_loaders/csv_loader.py
+++ b/langchain/document_loaders/csv_loader.py
@@ -36,13 +36,7 @@ def __init__(
self.file_path = file_path
self.source_column = source_column
        self.en...
test_patch:
diff --git a/tests/unit_tests/document_loader/test_csv_loader.py b/tests/unit_tests/document_loader/test_csv_loader.py
--- a/tests/unit_tests/document_loader/test_csv_loader.py
+++ b/tests/unit_tests/document_loader/test_csv_loader.py
@@ -1,4 +1,4 @@
-from pytest_mock import MockerFixture
+from pathlib import Path
f...
problem_statement:
CSVLoader TypeError: "delimiter" must be string, not NoneType
it seems that the source code for initializing a CSVLoader doesn't put an appropriate if condition here:
```
def __init__(
self,
file_path: str,
source_column: Optional[str] = None,
    csv_args: Optional[Dict] = None,...
hints_text:
Is there a work around for this?
I'm using it in a directory loader like this:
csv_directory_loader = DirectoryLoader(csv_folder_path, glob="**/*.csv", loader_cls=CSVLoader, show_progress=True)
and it gives me the same error.
> Is there a work around for this?
>
> I'm using it in a directory loader like th...
created_at: 2023-05-04 11:28:14+00:00
language: Python
Dockerfile:
FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies and C++ build tools
RUN apt-get update && apt-get install -y \
git \
build-essential \
g++ \
cmake \
    && rm -rf /var/lib/apt...
P2P: []
F2P: ['tests/unit_tests/document_loader/test_csv_loader.py:TestCSVLoader:test_csv_loader_load_valid_data', 'tests/unit_tests/document_loader/test_csv_loader.py:TestCSVLoader:test_csv_loader_load_single_row_file', 'tests/unit_tests/document_loader/test_csv_loader.py:TestCSVLoader:test_csv_loader_load_single_column_file', 'te...
F2F: null
test_command: pytest /testbed/tests/unit_tests/document_loader/test_csv_loader.py -v --json-report
task_category: Bug Fix
is_no_nodes: false | is_func_only: false | is_class_only: true | is_mixed: false | num_func_changes: 0 | num_class_changes: 1 | num_nodes: 1 | is_single_func: false | is_single_class: true
modified_nodes: ["langchain/document_loaders/csv_loader.py->module->class_definition:CSVLoader->function_definition:__init__"]
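The CSVLoader record above fails because a dict of `None` values reaches `csv.DictReader` (`delimiter=None`). A small sketch of the guard the issue asks for: merge user-supplied `csv_args` over defaults instead of passing them through unconditionally (`make_reader` is an illustrative name, not the loader's API):

```python
import csv
import io


def make_reader(file_text, csv_args=None):
    # Merge user args over sane defaults rather than forwarding
    # delimiter=None straight into csv.DictReader.
    args = {"delimiter": ",", "quotechar": '"'}
    if csv_args is not None:
        args.update(csv_args)
    return csv.DictReader(io.StringIO(file_text), **args)


assert list(make_reader("a,b\n1,2\n"))[0] == {"a": "1", "b": "2"}
assert list(make_reader("a;b\n1;2\n", {"delimiter": ";"}))[0] == {"a": "1", "b": "2"}
```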

repo: langchain-ai/langchain | pull_number: 4,420 | instance_id: langchain-ai__langchain-4420 | issue_numbers: ['4153'] | base_commit: f2150285a495fc530a7707218ea4980c17a170e5
patch:
diff --git a/langchain/document_loaders/whatsapp_chat.py b/langchain/document_loaders/whatsapp_chat.py
--- a/langchain/document_loaders/whatsapp_chat.py
+++ b/langchain/document_loaders/whatsapp_chat.py
@@ -44,7 +44,7 @@ def load(self) -> List[Document]:
)
\]?
[\s-]*
            - ...
test_patch:
diff --git a/tests/integration_tests/document_loaders/test_whatsapp_chat.py b/tests/integration_tests/document_loaders/test_whatsapp_chat.py
--- a/tests/integration_tests/document_loaders/test_whatsapp_chat.py
+++ b/tests/integration_tests/document_loaders/test_whatsapp_chat.py
@@ -16,4 +16,5 @@ def test_whatsapp_chat_...
problem_statement:
WhatsAppChatLoader doesn't work on chats exported from WhatsApp
### System Info
langchain 0.0.158
Mac OS M1
Python 3.11
### Who can help?
@ey
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Mo...
hints_text:
it also doesn't work on Ukrainian date format, e.g.
```
[05.05.23, 15:45:46] User: text
```
---
I used the following input formats:
```
[05.05.23, 15:48:11] James: Hi here
[11/8/21, 9:41:32 AM] User name: Message 123
1/23/23, 3:19 AM - User 2: Bye!
1/23/23, 3:22_AM - User 1: And let me know if anything ...
created_at: 2023-05-09 21:23:12+00:00
language: Python
Dockerfile:
FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies and C++ build tools
RUN apt-get update && apt-get install -y \
git \
build-essential \
g++ \
cmake \
    && rm -rf /var/lib/apt...
P2P: []
F2P: ['tests/integration_tests/document_loaders/test_whatsapp_chat.py:None:test_whatsapp_chat_loader']
F2F: null
test_command: pytest /testbed/tests/integration_tests/document_loaders/test_whatsapp_chat.py -v --json-report
task_category: Bug Fix
is_no_nodes: false | is_func_only: true | is_class_only: false | is_mixed: false | num_func_changes: 1 | num_class_changes: 0 | num_nodes: 1 | is_single_func: true | is_single_class: false
modified_nodes: ["langchain/document_loaders/whatsapp_chat.py->module->class_definition:WhatsAppChatLoader->function_definition:load"]
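The hints in the record above list several export formats the loader's regex missed. A hedged, illustrative pattern covering exactly those quoted samples (the real loader's regex lives in `langchain/document_loaders/whatsapp_chat.py`):

```python
import re

# Optional brackets, ./- or / date separators, optional seconds,
# optional " AM"/"_PM" suffix, and "] " or " - " before the sender.
LINE = re.compile(
    r"\[?(\d{1,2}[./-]\d{1,2}[./-]\d{2,4}),?\s"
    r"(\d{1,2}:\d{1,2}(?::\d{1,2})?)(?:[\s_](?:AM|PM))?\]?"
    r"[\s-]*([^:]+):\s(.+)"
)

samples = [
    "[05.05.23, 15:48:11] James: Hi here",
    "[11/8/21, 9:41:32 AM] User name: Message 123",
    "1/23/23, 3:19 AM - User 2: Bye!",
    "1/23/23, 3:22_AM - User 1: And let me know",
]
for sample in samples:
    assert LINE.match(sample) is not None, sample
assert LINE.match(samples[0]).group(3) == "James"
```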

repo: langchain-ai/langchain | pull_number: 4,579 | instance_id: langchain-ai__langchain-4579 | issue_numbers: ['4167'] | base_commit: 372a5113ff1cce613f78d58c9e79e7c49aa60fac
patch:
diff --git a/langchain/document_loaders/web_base.py b/langchain/document_loaders/web_base.py
--- a/langchain/document_loaders/web_base.py
+++ b/langchain/document_loaders/web_base.py
@@ -68,17 +68,19 @@ def __init__(
"bs4 package not found, please install it with " "`pip install bs4`"
)
 ...
test_patch:
diff --git a/tests/unit_tests/document_loader/test_web_base.py b/tests/unit_tests/document_loader/test_web_base.py
new file mode 100644
--- /dev/null
+++ b/tests/unit_tests/document_loader/test_web_base.py
@@ -0,0 +1,10 @@
+from langchain.document_loaders.web_base import WebBaseLoader
+
+
+class TestWebBaseLoader:
+    ...
problem_statement:
User Agent on WebBaseLoader does not set header_template when passing `header_template`
### System Info
Hi Team,
When using WebBaseLoader and setting header_template, the user agent does not get set and sticks with the default Python user agent.
```
loader = WebBaseLoader(url, header_template={
    'User-...
hints_text:
possible fix after setting session
```
self.session = requests.Session()
"""Default headers are set by session and spread them with custom headers when needed"""
if header_template is not None:
self.session.headers = {** self.session.headers, ** header_template}
```
created_at: 2023-05-12 13:07:01+00:00
language: Python
Dockerfile:
FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies and C++ build tools
RUN apt-get update && apt-get install -y \
git \
build-essential \
g++ \
cmake \
    && rm -rf /var/lib/apt...
P2P: []
F2P: ['tests/unit_tests/document_loader/test_web_base.py:TestWebBaseLoader:test_respect_user_specified_user_agent']
F2F: null
test_command: pytest /testbed/tests/unit_tests/document_loader/test_web_base.py -v --json-report
task_category: Bug Fix
is_no_nodes: false | is_func_only: false | is_class_only: true | is_mixed: false | num_func_changes: 0 | num_class_changes: 1 | num_nodes: 1 | is_single_func: false | is_single_class: true
modified_nodes: ["langchain/document_loaders/web_base.py->module->class_definition:WebBaseLoader->function_definition:__init__"]
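The hint in the record above suggests spreading the custom headers over the session defaults. The merge itself is plain dict unpacking; the header values below are illustrative stand-ins for the session's real defaults:

```python
# Keep the session's default headers and overlay only the user-provided ones.
default_headers = {"User-Agent": "python-requests/2.28.0", "Accept": "*/*"}
header_template = {"User-Agent": "Mozilla/5.0"}

merged = {**default_headers, **(header_template or {})}
assert merged["User-Agent"] == "Mozilla/5.0"  # custom value wins
assert merged["Accept"] == "*/*"              # untouched defaults survive
```

Later keys win in `{**a, **b}`, which is exactly the "custom headers override defaults" behavior the fix needs.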

repo: langchain-ai/langchain | pull_number: 4,646 | instance_id: langchain-ai__langchain-4646 | issue_numbers: ['3709'] | base_commit: 928cdd57a4531e606f7ca7e34c0b96736ffcce49
patch:
diff --git a/langchain/output_parsers/pydantic.py b/langchain/output_parsers/pydantic.py
--- a/langchain/output_parsers/pydantic.py
+++ b/langchain/output_parsers/pydantic.py
@@ -22,7 +22,7 @@ def parse(self, text: str) -> T:
json_str = ""
if match:
json_str = match.group()
-        ...
test_patch:
diff --git a/tests/unit_tests/output_parsers/test_pydantic_parser.py b/tests/unit_tests/output_parsers/test_pydantic_parser.py
--- a/tests/unit_tests/output_parsers/test_pydantic_parser.py
+++ b/tests/unit_tests/output_parsers/test_pydantic_parser.py
@@ -21,6 +21,7 @@ class TestModel(BaseModel):
    additional_fields:...
problem_statement:
PydanticOutputParser has high chance failing when completion contains new line
## Context
When the completion is of a longer format such as an Email, the text will likely contain new line character `\n`.
If it is not properly escaped like `\\n`, parsing will fail when using PydanticOutputParser as `json.loads` does n...
hints_text: null
created_at: 2023-05-14 01:54:58+00:00
language: Python
Dockerfile:
FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies and C++ build tools
RUN apt-get update && apt-get install -y \
git \
build-essential \
g++ \
cmake \
    && rm -rf /var/lib/apt...
P2P: ['tests/unit_tests/output_parsers/test_pydantic_parser.py:None:test_pydantic_output_parser_fail']
F2P: ['tests/unit_tests/output_parsers/test_pydantic_parser.py:None:test_pydantic_output_parser']
F2F: null
test_command: pytest /testbed/tests/unit_tests/output_parsers/test_pydantic_parser.py -v --json-report
task_category: Bug Fix
is_no_nodes: false | is_func_only: true | is_class_only: false | is_mixed: false | num_func_changes: 1 | num_class_changes: 0 | num_nodes: 1 | is_single_func: true | is_single_class: false
modified_nodes: ["langchain/output_parsers/pydantic.py->module->class_definition:PydanticOutputParser->function_definition:parse"]
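The record above describes `json.loads` choking on raw newlines inside completions. One way such output can be tolerated is `strict=False`, which permits unescaped control characters inside JSON strings; a self-contained demonstration (the fix in the actual patch is truncated above, so this is a plausible sketch, not a quote of it):

```python
import json

raw = '{"body": "line one\nline two"}'  # a literal newline inside the JSON string

try:
    json.loads(raw)  # default strict mode rejects unescaped control chars
    raise AssertionError("expected JSONDecodeError")
except json.JSONDecodeError:
    pass

# strict=False tolerates raw control characters inside strings.
assert json.loads(raw, strict=False)["body"] == "line one\nline two"
```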

repo: langchain-ai/langchain | pull_number: 5,450 | instance_id: langchain-ai__langchain-5450 | issue_numbers: ['3605'] | base_commit: 64b4165c8d9b8374295d4629ef57d4d58e9af7c8
patch:
diff --git a/langchain/embeddings/huggingface.py b/langchain/embeddings/huggingface.py
--- a/langchain/embeddings/huggingface.py
+++ b/langchain/embeddings/huggingface.py
@@ -25,7 +25,12 @@ class HuggingFaceEmbeddings(BaseModel, Embeddings):
model_name = "sentence-transformers/all-mpnet-base-v2"
 ...
test_patch:
diff --git a/tests/integration_tests/embeddings/test_huggingface.py b/tests/integration_tests/embeddings/test_huggingface.py
--- a/tests/integration_tests/embeddings/test_huggingface.py
+++ b/tests/integration_tests/embeddings/test_huggingface.py
@@ -26,7 +26,8 @@ def test_huggingface_embedding_query() -> None:
 def te...
problem_statement:
Embeddings normalization and similarity metric
I am new to using Langchain and attempting to make it work with a locally running LLM (Alpaca) and Embeddings model (Sentence Transformer). When configuring the sentence transformer model with `HuggingFaceEmbeddings` no arguments can be passed to the encode method of the m...
hints_text: null
created_at: 2023-05-30 16:11:31+00:00
language: Python
Dockerfile:
FROM python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
build-essential \
curl
# Install Poetry and add to PATH
ENV POETRY_HOME="/opt/poetry" \
POETRY_VERSION=1.4.2
RUN curl -sSL ht...
P2P: ['tests/integration_tests/embeddings/test_huggingface.py:None:test_huggingface_instructor_embedding_documents', 'tests/integration_tests/embeddings/test_huggingface.py:None:test_huggingface_embedding_documents', 'tests/integration_tests/embeddings/test_huggingface.py:None:test_huggingface_embedding_query', 'tests/integ...
F2P: ['tests/integration_tests/embeddings/test_huggingface.py:None:test_huggingface_instructor_embedding_normalize']
F2F: null
test_command: poetry run pytest /testbed/tests/integration_tests/embeddings/test_huggingface.py -v --json-report-file=test_results.json
task_category: Feature
is_no_nodes: false | is_func_only: false | is_class_only: false | is_mixed: true | num_func_changes: 2 | num_class_changes: 2 | num_nodes: 4 | is_single_func: false | is_single_class: false
modified_nodes: ["langchain/embeddings/huggingface.py->module->class_definition:HuggingFaceInstructEmbeddings->function_definition:embed_documents", "langchain/embeddings/huggingface.py->module->class_definition:HuggingFaceEmbeddings", "langchain/embeddings/huggingface.py->module->class_definition:HuggingFaceInstructEmbeddings", "lang...
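The feature in the record above is about normalizing embeddings so that dot-product scores behave as cosine similarity. The underlying math is a plain L2 normalization, sketched here in stdlib Python (the real code passes `normalize_embeddings` through to sentence-transformers' `encode`):

```python
import math


def l2_normalize(vec):
    # With unit-length vectors, dot product equals cosine similarity.
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]


unit = l2_normalize([3.0, 4.0])
assert unit == [0.6, 0.8]
assert abs(sum(x * x for x in unit) - 1.0) < 1e-12
```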

repo: langchain-ai/langchain | pull_number: 5,584 | instance_id: langchain-ai__langchain-5584 | issue_numbers: ['5582'] | base_commit: 4c572ffe959957b515528a9036b374f56cef027f
patch:
diff --git a/langchain/vectorstores/chroma.py b/langchain/vectorstores/chroma.py
--- a/langchain/vectorstores/chroma.py
+++ b/langchain/vectorstores/chroma.py
@@ -356,11 +356,11 @@ def update_document(self, document_id: str, document: Document) -> None:
raise ValueError(
"For update, you m... | diff --git a/tests/integration_tests/vectorstores/test_chroma.py b/tests/integration_tests/vectorstores/test_chroma.py
--- a/tests/integration_tests/vectorstores/test_chroma.py
+++ b/tests/integration_tests/vectorstores/test_chroma.py
@@ -3,7 +3,10 @@
from langchain.docstore.document import Document
 from langchain....
problem_statement:
Chroma.update_document bug
### System Info
update_document only embeds a single document, but the single page_content string is cast to a list before embedding, resulting in a per-character embedding not a per-document embedding.
https://github.com/hwchase17/langchain/blob/4c572ffe959957b515528a9036b374f56cef027f/l...
hints_text: null
created_at: 2023-06-01 23:21:18+00:00
language: Python
Dockerfile:
FROM python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
build-essential \
curl
# Install Poetry and add to PATH
ENV POETRY_HOME="/opt/poetry" \
POETRY_VERSION=1.4.2
RUN curl -sSL ht...
P2P: ['tests/integration_tests/vectorstores/test_chroma.py:None:test_chroma_with_persistence', 'tests/integration_tests/vectorstores/test_chroma.py:None:test_chroma_with_include_parameter', 'tests/integration_tests/vectorstores/test_chroma.py:None:test_chroma_async', 'tests/integration_tests/vectorstores/test_chroma.py:None...
F2P: ['tests/integration_tests/vectorstores/test_chroma.py:None:test_chroma_update_document', 'tests/integration_tests/vectorstores/test_chroma.py:None:test_chroma']
F2F: null
test_command: poetry run pytest /testbed/tests/integration_tests/vectorstores/test_chroma.py -v --json-report-file=test_results.json
task_category: Bug Fix
is_no_nodes: false | is_func_only: true | is_class_only: false | is_mixed: false | num_func_changes: 1 | num_class_changes: 0 | num_nodes: 1 | is_single_func: true | is_single_class: false
modified_nodes: ["langchain/vectorstores/chroma.py->module->class_definition:Chroma->function_definition:update_document"]
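The bug in the record above comes down to how a string is turned into a list before embedding. `list(text)` explodes the string into characters, so the embedding call receives one "document" per character; wrapping it as `[text]` keeps a single document:

```python
# Why the cast matters: list(text) yields characters, not documents.
text = "doc"
assert list(text) == ["d", "o", "c"]  # buggy cast: three one-char documents
assert [text] == ["doc"]              # intended: a single document
```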

repo: langchain-ai/langchain | pull_number: 5,609 | instance_id: langchain-ai__langchain-5609 | issue_numbers: ['5601'] | base_commit: 28d6277396013a16613008647c312bbd6c4623cc
patch:
diff --git a/langchain/agents/chat/output_parser.py b/langchain/agents/chat/output_parser.py
--- a/langchain/agents/chat/output_parser.py
+++ b/langchain/agents/chat/output_parser.py
@@ -13,17 +13,24 @@ def get_format_instructions(self) -> str:
return FORMAT_INSTRUCTIONS
    def parse(self, text: str) -> Un...
test_patch:
diff --git a/tests/unit_tests/agents/test_mrkl.py b/tests/unit_tests/agents/test_mrkl.py
--- a/tests/unit_tests/agents/test_mrkl.py
+++ b/tests/unit_tests/agents/test_mrkl.py
@@ -90,14 +90,7 @@ def test_get_action_and_input_sql_query() -> None:
def test_get_final_answer() -> None:
"""Test getting final answer."... | OutputParsers currently allows model to hallucinate the output of an action
### System Info
The MRKL and chat output parsers currently will allow an LLM response to generate a valid action, as well as hallucinate a "final answer" based on that response.
[Logic](https://github.com/hwchase17/langchain/blob/master/l...
hints_text: null
created_at: 2023-06-02 10:24:47+00:00
language: Python
Dockerfile:
FROM python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
build-essential \
curl
# Install Poetry and add to PATH
ENV POETRY_HOME="/opt/poetry" \
POETRY_VERSION=1.4.2
RUN curl -sSL ht...
P2P: ['tests/unit_tests/agents/test_mrkl.py:None:test_get_final_answer_multiline', 'tests/unit_tests/agents/test_mrkl.py:None:test_bad_action_input_line', 'tests/unit_tests/agents/test_mrkl.py:None:test_get_action_and_input_sql_query', 'tests/unit_tests/agents/test_mrkl.py:None:test_get_action_and_input_newline', 'tests/uni...
F2P: ['tests/unit_tests/agents/test_mrkl.py:None:test_valid_action_and_answer_raises_exception']
F2F: null
test_command: poetry run pytest /testbed/tests/unit_tests/agents/test_mrkl.py -v --json-report-file=test_results.json
task_category: Bug Fix
is_no_nodes: false | is_func_only: true | is_class_only: false | is_mixed: false | num_func_changes: 2 | num_class_changes: 0 | num_nodes: 2 | is_single_func: false | is_single_class: false
modified_nodes: ["langchain/agents/mrkl/output_parser.py->module->class_definition:MRKLOutputParser->function_definition:parse", "langchain/agents/chat/output_parser.py->module->class_definition:ChatOutputParser->function_definition:parse"]
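The fix in the record above (its failing test is `test_valid_action_and_answer_raises_exception`) rejects outputs that contain both an action and a final answer. A minimal sketch of that guard, hedged as an illustration rather than LangChain's actual parser:

```python
FINAL_ANSWER_ACTION = "Final Answer:"


def parse(text):
    # An output with both an action and a final answer is ambiguous:
    # the "final answer" may be hallucinated from the unexecuted action.
    if "Action:" in text and FINAL_ANSWER_ACTION in text:
        raise ValueError(f"Could not parse LLM output: {text}")
    if FINAL_ANSWER_ACTION in text:
        return text.split(FINAL_ANSWER_ACTION)[-1].strip()
    return None


assert parse("Final Answer: 42") == "42"
try:
    parse("Action: search\nAction Input: x\nFinal Answer: 42")
    raise AssertionError("expected ValueError")
except ValueError:
    pass
```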

repo: langchain-ai/langchain | pull_number: 5,625 | instance_id: langchain-ai__langchain-5625 | issue_numbers: ['5614'] | base_commit: d0d89d39efb5f292f72e70973f3b70c4ca095047
patch:
diff --git a/langchain/text_splitter.py b/langchain/text_splitter.py
--- a/langchain/text_splitter.py
+++ b/langchain/text_splitter.py
@@ -30,7 +30,9 @@
TS = TypeVar("TS", bound="TextSplitter")
-def _split_text(text: str, separator: str, keep_separator: bool) -> List[str]:
+def _split_text_with_regex(
+    text: s...
test_patch:
diff --git a/tests/unit_tests/test_text_splitter.py b/tests/unit_tests/test_text_splitter.py
--- a/tests/unit_tests/test_text_splitter.py
+++ b/tests/unit_tests/test_text_splitter.py
@@ -275,6 +275,12 @@ def test_rst_code_splitter() -> None:
- Item 1
- Item 2
- Item 3
+
+Comment
+*******
+Not a comment
+
+.. This is...
problem_statement:
MarkdownTextSplitter: multiple repeat at position 4 (line 3, column 2)
### System Info
langchain 0.0.188
python 3.8.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] ...
hints_text: null
created_at: 2023-06-02 18:06:25+00:00
language: Python
Dockerfile:
FROM python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
build-essential \
curl
# Install Poetry and add to PATH
ENV POETRY_HOME="/opt/poetry" \
POETRY_VERSION=1.4.2
RUN curl -sSL ht...
P2P: ['tests/unit_tests/test_text_splitter.py:None:test_merge_splits', 'tests/unit_tests/test_text_splitter.py:None:test_swift_code_splitter', 'tests/unit_tests/test_text_splitter.py:None:test_iterative_text_splitter', 'tests/unit_tests/test_text_splitter.py:None:test_character_text_splitter_short_words_first', 'tests/unit_...
F2P: ['tests/unit_tests/test_text_splitter.py:None:test_rst_code_splitter']
F2F: null
test_command: poetry run pytest /testbed/tests/unit_tests/test_text_splitter.py -v --json-report-file=test_results.json
task_category: Bug Fix
is_no_nodes: false | is_func_only: true | is_class_only: false | is_mixed: false | num_func_changes: 5 | num_class_changes: 0 | num_nodes: 5 | is_single_func: false | is_single_class: false
modified_nodes: ["langchain/text_splitter.py->module->function_definition:_split_text", "langchain/text_splitter.py->module->class_definition:RecursiveCharacterTextSplitter->function_definition:_split_text", "langchain/text_splitter.py->module->function_definition:_split_text_with_regex", "langchain/text_splitter.py->module->class_def...
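The "multiple repeat" error in the record above is what `re` raises when a literal separator such as a Markdown/RST rule is compiled as a regex. A small demonstration of the failure and of escaping literal separators before splitting (the patch itself distinguishes regex from literal separators; this sketch shows only the escaping side):

```python
import re

separator = "***"  # a Markdown horizontal rule is not a valid regex

try:
    re.split(separator, "above\n***\nbelow")
    raise AssertionError("expected re.error")
except re.error:
    pass

# Escaping literal separators before splitting avoids the crash.
assert re.split(re.escape(separator), "above***below") == ["above", "below"]
```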

repo: langchain-ai/langchain | pull_number: 6,765 | instance_id: langchain-ai__langchain-6765 | issue_numbers: ['6756'] | base_commit: ba622764cb7ccf4667878289f959857348ef8c19
patch:
diff --git a/langchain/agents/initialize.py b/langchain/agents/initialize.py
--- a/langchain/agents/initialize.py
+++ b/langchain/agents/initialize.py
@@ -51,7 +51,7 @@ def initialize_agent(
f"Got unknown agent type: {agent}. "
f"Valid types are: {AGENT_TO_CLASS.keys()}."
 ...
test_patch:
diff --git a/tests/unit_tests/agents/test_initialize.py b/tests/unit_tests/agents/test_initialize.py
new file mode 100644
--- /dev/null
+++ b/tests/unit_tests/agents/test_initialize.py
@@ -0,0 +1,23 @@
+"""Test the initialize module."""
+
+from langchain.agents.agent_types import AgentType
+from langchain.agents.initia...
problem_statement:
Recent tags change causes AttributeError: 'str' object has no attribute 'value' on initialize_agent call
### System Info
- Langchain: 0.0.215
- Platform: ubuntu
- Python 3.10.12
### Who can help?
@vowelparrot
https://github.com/hwchase17/langchain/blob/d84a3bcf7ab3edf8fe1d49083e066d51c9b5f621/langchain/agents/...
hints_text:
yes i also got this error too. Apparently we have to use AgentType.ZERO_SHOT_REACT_DESCRIPTION , the old way of using just strings has been changed . At the very least they could have shown an exception error instead of this jargon.
agree! the same to me!
Will land a fix. Thanks for raising this!
created_at: 2023-06-26 15:12:34+00:00
language: Python
Dockerfile:
FROM python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
gcc \
python3-dev \
curl \
&& rm -rf /var/lib/apt/lists/*
# Install poetry and add to PATH
RUN curl -sS...
P2P: []
F2P: ['tests/unit_tests/agents/test_initialize.py:None:test_initialize_agent_with_str_agent_type']
F2F: null
test_command: pytest /testbed/tests/unit_tests/agents/test_initialize.py -v --json-report --json-report-file=report.json --override-ini=addopts=
task_category: Bug Fix
is_no_nodes: false | is_func_only: true | is_class_only: false | is_mixed: false | num_func_changes: 1 | num_class_changes: 0 | num_nodes: 1 | is_single_func: true | is_single_class: false
modified_nodes: ["langchain/agents/initialize.py->module->function_definition:initialize_agent"]
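The `AttributeError: 'str' object has no attribute 'value'` in the record above comes from calling `.value` on an agent type that may be either an enum member or a plain string. A sketch of accepting both spellings (`AgentType` here is a stand-in mirroring the shape of LangChain's enum):

```python
from enum import Enum


class AgentType(str, Enum):  # illustrative stand-in
    ZERO_SHOT_REACT_DESCRIPTION = "zero-shot-react-description"


def agent_type_value(agent):
    # Only enum members have .value; pass plain strings through as-is.
    return agent.value if isinstance(agent, AgentType) else agent


assert agent_type_value(AgentType.ZERO_SHOT_REACT_DESCRIPTION) == "zero-shot-react-description"
assert agent_type_value("zero-shot-react-description") == "zero-shot-react-description"
```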

repo: langchain-ai/langchain | pull_number: 19,331 | instance_id: langchain-ai__langchain-19331 | issue_numbers: ['19276'] | base_commit: 5fc7bb01e9d6398452d0a7b4a50ce234408ca99c
patch:
diff --git a/libs/core/langchain_core/language_models/llms.py b/libs/core/langchain_core/language_models/llms.py
--- a/libs/core/langchain_core/language_models/llms.py
+++ b/libs/core/langchain_core/language_models/llms.py
@@ -115,17 +115,41 @@ def _before_sleep(retry_state: RetryCallState) -> None:
)
+def _re...
test_patch:
diff --git a/libs/core/tests/unit_tests/language_models/llms/test_cache.py b/libs/core/tests/unit_tests/language_models/llms/test_cache.py
new file mode 100644
--- /dev/null
+++ b/libs/core/tests/unit_tests/language_models/llms/test_cache.py
@@ -0,0 +1,105 @@
+from typing import Any, Dict, Optional, Tuple
+
+from langc...
problem_statement:
langchain-core: Allow passing local cache to language models
### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
# Goal
Allow instantiating language models with specific caches provided as an init parameter. This will b...
hints_text:
i want try.
Is this test case runnable? If it works fine, what exactly is this issue?
https://github.com/langchain-ai/langchain/blob/40f846e65da37a1c00d72da9ea64ebb0f295b016/libs/core/tests/unit_tests/language_models/chat_models/test_cache.py#L43
created_at: 2024-03-20 11:56:35+00:00
language: Python
Dockerfile:
FROM public.ecr.aws/ubuntu/ubuntu:22.04
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
curl \
build-essential \
python3 \
python3-dev \
python3-pip \
    software-prope...
P2P: []
F2P: ['libs/core/tests/unit_tests/language_models/llms/test_cache.py:None:test_local_cache_generate_async', 'libs/core/tests/unit_tests/language_models/llms/test_cache.py:None:test_local_cache_generate_sync', 'libs/core/tests/unit_tests/language_models/llms/test_cache.py:None:test_no_cache_generate_sync', 'libs/core/tests/u...
F2F: null
test_command: python3 -m pytest /testbed/libs/core/tests/unit_tests/language_models/llms/test_cache.py -v --override-ini=addopts= --junitxml=test-results.xml
task_category: Feature
is_no_nodes: false | is_func_only: true | is_class_only: false | is_mixed: false | num_func_changes: 7 | num_class_changes: 0 | num_nodes: 7 | is_single_func: false | is_single_class: false
modified_nodes: ["libs/core/langchain_core/language_models/llms.py->module->function_definition:aget_prompts", "libs/core/langchain_core/language_models/llms.py->module->class_definition:BaseLLM->function_definition:agenerate", "libs/core/langchain_core/language_models/llms.py->module->function_definition:get_prompts", "libs/core/lang...
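The feature in the record above passes a per-instance cache into the model instead of relying on a global one. A minimal sketch of the cache interface the new tests exercise, keyed on `(prompt, llm_string)` (a simplified stand-in, not langchain-core's `InMemoryCache`):

```python
class InMemoryCache:
    # Per-instance cache keyed on (prompt, llm_string).
    def __init__(self):
        self._store = {}

    def lookup(self, prompt, llm_string):
        return self._store.get((prompt, llm_string))

    def update(self, prompt, llm_string, value):
        self._store[(prompt, llm_string)] = value


cache = InMemoryCache()
assert cache.lookup("hi", "fake-llm") is None   # miss on first call
cache.update("hi", "fake-llm", ["hello!"])
assert cache.lookup("hi", "fake-llm") == ["hello!"]  # hit afterwards
```

Because the key includes `llm_string` (a serialization of the model and its parameters), two differently configured models never share cache entries.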

repo: langchain-ai/langchain | pull_number: 20,064 | instance_id: langchain-ai__langchain-20064 | issue_numbers: ['11408'] | base_commit: ebd24bb5d64078d7567eca4da0297260eb33dc31
patch:
diff --git a/libs/langchain/langchain/output_parsers/boolean.py b/libs/langchain/langchain/output_parsers/boolean.py
--- a/libs/langchain/langchain/output_parsers/boolean.py
+++ b/libs/langchain/langchain/output_parsers/boolean.py
@@ -1,3 +1,5 @@
+import re
+
from langchain_core.output_parsers import BaseOutputParser
... | diff --git a/libs/langchain/tests/unit_tests/output_parsers/test_boolean_parser.py b/libs/langchain/tests/unit_tests/output_parsers/test_boolean_parser.py
--- a/libs/langchain/tests/unit_tests/output_parsers/test_boolean_parser.py
+++ b/libs/langchain/tests/unit_tests/output_parsers/test_boolean_parser.py
@@ -1,3 +1,5 ... | BooleanOutputParser expected output value error
### System Info
Hi, I am using LLMChainFilter.from_llm(llm) but while running, I am getting this error:
ValueError: BooleanOutputParser expected output value to either be YES or NO. Received Yes, the context is relevant to the question as it provides information about... | _🤖_
Hello,
The error you're encountering is due to the output of your language model (LLM) not matching the expected values "YES" or "NO" that the `BooleanOutputParser` class is designed to handle.
The `BooleanOutputParser` class in the LangChain codebase is used to parse the output of a language mo... | 2024-04-05 12:56:34+00:00 | Python | FROM public.ecr.aws/ubuntu/ubuntu:22.04
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
curl \
build-essential \
python3 \
python3-dev \
python3-pip \
software-prope... | [] | ['libs/langchain/tests/unit_tests/output_parsers/test_boolean_parser.py:None:test_boolean_output_parser_parse'] | null | python3 -m pytest /testbed/libs/langchain/tests/unit_tests/output_parsers/test_boolean_parser.py -v --override-ini=addopts= | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["libs/langchain/langchain/output_parsers/boolean.py->module->class_definition:BooleanOutputParser->function_definition:parse"] |
yt-dlp/yt-dlp | 4,841 | yt-dlp__yt-dlp-4841 | ['4187'] | 07a1250e0e90515ff8142161536f9dafa6eaba1b | diff --git a/yt_dlp/utils.py b/yt_dlp/utils.py
--- a/yt_dlp/utils.py
+++ b/yt_dlp/utils.py
@@ -2479,7 +2479,7 @@ def url_basename(url):
def base_url(url):
- return re.match(r'https?://[^?#&]+/', url).group()
+ return re.match(r'https?://[^?#]+/', url).group()
def urljoin(base, path):
| diff --git a/test/test_utils.py b/test/test_utils.py
--- a/test/test_utils.py
+++ b/test/test_utils.py
@@ -566,6 +566,7 @@ def test_base_url(self):
self.assertEqual(base_url('http://foo.de/bar/'), 'http://foo.de/bar/')
self.assertEqual(base_url('http://foo.de/bar/baz'), 'http://foo.de/bar/')
... | DiscoveryPlusItaly error 403: Forbidden
### Checklist
- [X] I'm reporting a broken site
- [X] I've verified that I'm running yt-dlp version **2022.06.22.1** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URLs are playable in a browser... | I think this related to #3757
Can u try passing the url as referer?
I have already tried to insert in the referer the url of the main page of the series, but nothing has changed.
```shell
[debug] Command-line config: ['-Uv', '--no-geo-bypass', '--referer', 'https://www.discoveryplus.com/it/show/killer-of-the-cosmos... | 2022-09-03 20:29:36+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.12-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install test dependencies and the package itself in editable mode
RUN pip install -e ".[test]"
RUN pip install pytest-json-report
... | ['test/test_utils.py:TestUtil:test_remove_start', 'test/test_utils.py:TestUtil:test_sanitize_url', 'test/test_utils.py:TestUtil:test_unified_dates', 'test/test_utils.py:TestUtil:test_float_or_none', 'test/test_utils.py:TestUtil:test_sanitize_ids', 'test/test_utils.py:TestUtil:test_get_elements_by_class', 'test/test_uti... | ['test/test_utils.py:TestUtil:test_base_url'] | null | pytest /testbed/test/test_utils.py -v --json-report | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["yt_dlp/utils.py->module->function_definition:base_url"] |
yt-dlp/yt-dlp | 5,195 | yt-dlp__yt-dlp-5195 | ['5186'] | 2c98d998181c81ee49908be03c031204fd66d03d | diff --git a/yt_dlp/cookies.py b/yt_dlp/cookies.py
--- a/yt_dlp/cookies.py
+++ b/yt_dlp/cookies.py
@@ -999,8 +999,9 @@ def _parse_browser_specification(browser_name, profile=None, keyring=None, conta
class LenientSimpleCookie(http.cookies.SimpleCookie):
"""More lenient version of http.cookies.SimpleCookie"""
... | diff --git a/test/test_cookies.py b/test/test_cookies.py
--- a/test/test_cookies.py
+++ b/test/test_cookies.py
@@ -277,9 +277,24 @@ def test_lenient_parsing(self):
"a=b; invalid; Version=1; c=d",
{"a": "b", "c": "d"},
),
+ (
+ "Reset morsel after ... | Downloads from Crunchyroll break if certain Optanon cookies are present
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a broken site
- [X] I've verified that I'm running yt-dlp version **2022.10.04... | @Grub4K Isn't lenient cookies supposed to handle this?
I would call this a bug inherited from the CPython code: it clearly allows `)` and `&` in its `_LEGAL_KEY_CHARS`, which is used in the compiled parsing regex, but it does NOT allow them when setting the key on the morsel, since that path uses `_LegalChars`.
As a worka... | 2022-10-11 00:38:54+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy repository contents
COPY . .
# Install dependencies and package in development mode
RUN pip install -r requirements.txt pytest pytest-json-report
RUN pip install -... | ['test/test_cookies.py:TestCookies:test_get_desktop_environment', 'test/test_cookies.py:TestCookies:test_chrome_cookie_decryptor_linux_derive_key', 'test/test_cookies.py:TestCookies:test_pbkdf2_sha1', 'test/test_cookies.py:TestCookies:test_chrome_cookie_decryptor_linux_v10', 'test/test_cookies.py:TestCookies:test_chrom... | ['test/test_cookies.py:TestLenientSimpleCookie:test_lenient_parsing'] | null | python -m pytest /testbed/test/test_cookies.py -v --json-report --json-report-file=test_results.json | Bug Fix | false | false | false | true | 1 | 1 | 2 | false | false | ["yt_dlp/cookies.py->module->class_definition:LenientSimpleCookie->function_definition:load", "yt_dlp/cookies.py->module->class_definition:LenientSimpleCookie"] |
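The mismatch described in the hints (parse regex accepts `(`/`)` in keys, `Morsel.set` rejects them) can be demonstrated with a toy lenient subclass. This is a simplified sketch — the naive `;` split ignores quoted values, and the real yt-dlp `LenientSimpleCookie` re-implements `load` far more carefully:

```python
import http.cookies

class LenientSimpleCookie(http.cookies.SimpleCookie):
    # CPython's cookie-parsing regex accepts keys containing '(' and
    # ')' (e.g. some Optanon cookies), but Morsel.set() then raises
    # CookieError, aborting the whole load. Skip the offending cookie
    # instead of failing everything.
    def load(self, data):
        for chunk in str(data).split(";"):
            try:
                super().load(chunk.strip())
            except http.cookies.CookieError:
                continue  # drop just this cookie

jar = LenientSimpleCookie()
jar.load("a=b; Notice(Collection)=x; c=d")
print(sorted(jar))  # ['a', 'c']
```

With the plain `SimpleCookie`, the same `load` call raises `CookieError` on the `Notice(Collection)` key.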
yt-dlp/yt-dlp | 5,933 | yt-dlp__yt-dlp-5933 | ['5953'] | f079514957401f49db30ec4cd25f8c8246b0c1de | diff --git a/README.md b/README.md
--- a/README.md
+++ b/README.md
@@ -1119,9 +1119,10 @@ You can configure yt-dlp by placing any supported command line option to a confi
* `yt-dlp.conf` in the home path given by `-P`
* If `-P` is not given, the current directory is searched
1. **User Configuration**:
+ *... | diff --git a/test/test_config.py b/test/test_config.py
new file mode 100644
--- /dev/null
+++ b/test/test_config.py
@@ -0,0 +1,227 @@
+#!/usr/bin/env python3
+
+# Allow direct execution
+import os
+import sys
+import unittest
+import unittest.mock
+
+sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__... | [Version 2023.01.02] /etc/yt-dlp.conf is not loaded
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I'm running yt-dlp version **2023.01.0... | null | 2023-01-03 00:41:48+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.12-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install test dependencies and the package itself in editable mode
RUN pip install -e ".[test]"
RUN pip install pytest-json-report
... | ['test/test_config.py:TestConfig:test_config__ENVIRON_DEFAULTS_sanity', 'test/test_config.py:TestConfig:test_config_override_commandline', 'test/test_config.py:TestConfig:test_config_early_exit_commandline', 'test/test_config.py:TestConfig:test_config_early_exit_files'] | ['test/test_config.py:TestConfig:test_config_all_environ_values', 'test/test_config.py:TestConfig:test_config_default_expected_locations', 'test/test_config.py:TestConfig:test_config_override_files', 'test/test_config.py:TestConfig:test_config_default_grouping'] | null | pytest /testbed/test/test_config.py -v --json-report | Bug Fix | false | true | false | false | 11 | 0 | 11 | false | false | ["yt_dlp/options.py->module->function_definition:parseOpts->function_definition:_load_from_config_dirs", "yt_dlp/plugins.py->module->class_definition:PluginFinder->function_definition:search_locations", "yt_dlp/plugins.py->module->class_definition:PluginFinder->function_definition:search_locations->function_definition:... |
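The bug report is about `/etc/yt-dlp.conf` being skipped in the ordered config search. A hypothetical helper (my own name and paths, not yt-dlp's code) illustrating the "first existing config wins" search the new tests exercise:

```python
import os
import tempfile

def first_existing_config(candidates):
    # Walk the candidate locations in priority order (system config
    # such as /etc/yt-dlp.conf, then user config, then portable) and
    # return the first file that actually exists.
    for path in candidates:
        if path and os.path.isfile(path):
            return path
    return None

with tempfile.NamedTemporaryFile(suffix=".conf") as f:
    found = first_existing_config(["/nonexistent/yt-dlp.conf", f.name])
    print(found == f.name)  # True
```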
yt-dlp/yt-dlp | 9,862 | yt-dlp__yt-dlp-9862 | ['9843'] | 39bc699d2e6e39b26af028cc09a7b1d460d00e31 | diff --git a/README.md b/README.md
--- a/README.md
+++ b/README.md
@@ -2219,6 +2219,7 @@ Some of yt-dlp's default options are different from that of youtube-dl and youtu
* yt-dlp versions between 2021.11.10 and 2023.06.21 estimated `filesize_approx` values for fragmented/manifest formats. This was added for convenienc... | diff --git a/test/test_YoutubeDL.py b/test/test_YoutubeDL.py
--- a/test/test_YoutubeDL.py
+++ b/test/test_YoutubeDL.py
@@ -4,6 +4,7 @@
import os
import sys
import unittest
+from unittest.mock import patch
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
@@ -520,7 +521,33 @@ def te... | `--simulate` doesn't accurately simulate downloading under certain conditions
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've ver... | cc @dirkf
I'm a little hazy as to why one would want to use `--simulate`, because all it basically tells you is that the extractor didn't (with luck) crash. If you want to know, say, what format(s) will be selected, there is `--get-format` or eqv. Since no video download is being run, it can't tell you anything about any...
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install test dependencies and the package itself in editable mode
RUN pip install -e ".[test]"
# Run the specified test file
# C... | ['test/test_YoutubeDL.py:TestYoutubeDL:test_subtitles', 'test/test_YoutubeDL.py:TestYoutubeDL:test_ignoreerrors_for_playlist_with_url_transparent_iterable_entries', 'test/test_YoutubeDL.py:TestYoutubeDL:test_header_cookies', 'test/test_YoutubeDL.py:TestFormatSelection:test_audio_only_extractor_format_selection', 'test/... | ['test/test_YoutubeDL.py:TestFormatSelection:test_default_format_spec_without_ffmpeg', 'test/test_YoutubeDL.py:TestFormatSelection:test_default_format_spec_with_ffmpeg'] | null | pytest /testbed/test/test_YoutubeDL.py -v | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["yt_dlp/YoutubeDL.py->module->class_definition:YoutubeDL->function_definition:process_video_result", "yt_dlp/YoutubeDL.py->module->class_definition:YoutubeDL->function_definition:_default_format_spec"] |
yt-dlp/yt-dlp | 10,390 | yt-dlp__yt-dlp-10390 | ['10391'] | 6c056ea7aeb03660281653a9668547f2548f194f | diff --git a/yt_dlp/extractor/youtube.py b/yt_dlp/extractor/youtube.py
--- a/yt_dlp/extractor/youtube.py
+++ b/yt_dlp/extractor/youtube.py
@@ -3130,7 +3130,8 @@ def _decrypt_nsig(self, s, video_id, player_url):
def _extract_n_function_name(self, jscode):
funcname, idx = self._search_regex(
- ... | diff --git a/test/test_youtube_signature.py b/test/test_youtube_signature.py
--- a/test/test_youtube_signature.py
+++ b/test/test_youtube_signature.py
@@ -167,6 +167,10 @@
'https://www.youtube.com/s/player/590f65a6/player_ias.vflset/en_US/base.js',
'1tm7-g_A9zsI8_Lay_', 'xI4Vem4Put_rOg',
),
+ ... | [youtube] nsig extraction failed: Some formats may be missing
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I... | null | 2024-07-08 20:46:07+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . .... | ['test/test_youtube_signature.py:TestSignature:test_nsig_js_e06dea74', 'test/test_youtube_signature.py:TestSignature:test_nsig_js_dac945fd', 'test/test_youtube_signature.py:TestSignature:test_nsig_js_c81bbb4a', 'test/test_youtube_signature.py:TestSignature:test_signature_js_vflCGk6yw', 'test/test_youtube_signature.py:T... | ['test/test_youtube_signature.py:TestSignature:test_nsig_js_b22ef6e7'] | null | pytest /testbed/test/test_youtube_signature.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["yt_dlp/extractor/youtube.py->module->class_definition:YoutubeIE->function_definition:_extract_n_function_name"] |
keras-team/keras | 18,553 | keras-team__keras-18553 | ['18535'] | c8a5a8969a8712a9a1939937ce34158e04cfc09d | diff --git a/keras/ops/nn.py b/keras/ops/nn.py
--- a/keras/ops/nn.py
+++ b/keras/ops/nn.py
@@ -592,7 +592,7 @@ def __init__(
super().__init__()
self.pool_size = pool_size
self.strides = strides
- self.padding = padding
+ self.padding = padding.lower()
self.data_format =... | diff --git a/keras/ops/nn_test.py b/keras/ops/nn_test.py
--- a/keras/ops/nn_test.py
+++ b/keras/ops/nn_test.py
@@ -121,12 +121,16 @@ def test_conv(self):
# Test 1D conv.
inputs_1d = KerasTensor([None, 20, 3])
kernel = KerasTensor([4, 3, 2])
- self.assertEqual(
- knn.conv(inp... | depthwise_conv ops padding same is not working in on torch backend
```python
import numpy as np
import os
os.environ["KERAS_BACKEND"] = "jax" # 'tensorflow', 'torch', 'jax'
import keras_core as keras
from keras_core import ops
input = np.ones((1, 613, 696, 3))
kernel = np.ones((1, 5, 3, 1))
```
```pyt... | null | 2023-10-05 20:35:56+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
WORKDIR /testbed
# Install git and build essentials for potential dependencies
RUN apt-get update && apt-get install -y git build-essential python3-dev
# Copy the repository contents
COPY . .
# Install JAX and other required dependencies
RUN pip install --upgrade ... | ['keras/ops/nn_test.py:NNOpsDynamicShapeTest:test_relu', 'keras/ops/nn_test.py:NNOpsDynamicShapeTest:test_silu', 'keras/ops/nn_test.py:NNOpsDynamicShapeTest:test_leaky_relu', 'keras/ops/nn_test.py:NNOpsCorrectnessTest:test_max_pool', 'keras/ops/nn_test.py:NNOpsDynamicShapeTest:test_one_hot_dtype1', 'keras/ops/nn_test.p... | ['keras/ops/nn_test.py:NNOpsDynamicShapeTest:test_depthwise_conv', 'keras/ops/nn_test.py:NNOpsDynamicShapeTest:test_conv'] | null | pytest /testbed/keras/ops/nn_test.py -v --junitxml=test-results.xml | Bug Fix | false | false | false | true | 6 | 6 | 12 | false | false | ["keras/ops/nn.py->module->function_definition:conv_transpose", "keras/ops/nn.py->module->function_definition:separable_conv", "keras/ops/nn.py->module->class_definition:MaxPool->function_definition:__init__", "keras/ops/nn.py->module->function_definition:conv", "keras/ops/nn.py->module->function_definition:max_pool", ... |
keras-team/keras | 18,871 | keras-team__keras-18871 | ['18864'] | 10252a9e7d68c6818423deee1c4c8549038e4171 | diff --git a/keras/models/model.py b/keras/models/model.py
--- a/keras/models/model.py
+++ b/keras/models/model.py
@@ -7,7 +7,6 @@
from keras import utils
from keras.api_export import keras_export
from keras.layers.layer import Layer
-from keras.legacy.saving import legacy_h5_format
from keras.models.variable_mappi... | diff --git a/keras/saving/saving_api_test.py b/keras/saving/saving_api_test.py
--- a/keras/saving/saving_api_test.py
+++ b/keras/saving/saving_api_test.py
@@ -171,8 +171,10 @@ def test_h5_deprecation_warning(self):
with mock.patch.object(logging, "warning") as mock_warn:
saving_api.save_model(mode... | Feature duplication on model.save() and keras.saving.save_model()
When I was reading the model-saving code, I got a strange feeling.
https://github.com/keras-team/keras/blob/724321c7b39a90f6125b79931284aa9932c673a0/keras/models/model.py#L294-L297
It says `model.save()` is an alias for `keras.saving.save_model()`. ... | Yes, feel free to open a PR to reduce code redundancy. Thanks! | 2023-12-02 09:56:38+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
WORKDIR /testbed
# Install git and build essentials for potential dependencies
RUN apt-get update && apt-get install -y git build-essential python3-dev
# Copy the repository contents
COPY . .
# Install JAX and other required dependencies
RUN pip install --upgrade ... | ['keras/saving/saving_api_test.py:LoadWeightsTests:test_load_keras_weights', 'keras/saving/saving_api_test.py:LoadModelTests:test_load_model_with_custom_objects', 'keras/saving/saving_api_test.py:LoadWeightsTests:test_load_h5_weights_by_name', 'keras/saving/saving_api_test.py:LoadModelTests:test_basic_load', 'keras/sav... | ['keras/saving/saving_api_test.py:SaveModelTestsWarning:test_h5_deprecation_warning'] | null | pytest /testbed/keras/saving/saving_api_test.py -v --junitxml=test-results.xml | Refactoring | false | true | false | false | 2 | 0 | 2 | false | false | ["keras/saving/saving_api.py->module->function_definition:save_model", "keras/models/model.py->module->class_definition:Model->function_definition:save"] |
keras-team/keras | 18,975 | keras-team__keras-18975 | ['18970'] | 4a4a139c7aada9f4495620e5a1c5f7ef20d84395 | diff --git a/keras/trainers/compile_utils.py b/keras/trainers/compile_utils.py
--- a/keras/trainers/compile_utils.py
+++ b/keras/trainers/compile_utils.py
@@ -468,6 +468,8 @@ def build(self, y_true, y_pred):
"must be a callable. "
f"Received instead:\nloss={loss} of type {type(... | diff --git a/keras/trainers/compile_utils_test.py b/keras/trainers/compile_utils_test.py
--- a/keras/trainers/compile_utils_test.py
+++ b/keras/trainers/compile_utils_test.py
@@ -251,6 +251,21 @@ def test_single_output_case(self):
value = compile_loss(y_true, y_pred)
self.assertAllClose(value, 0.06833... | Setting loss="crossentropy" in the compile method of a model raises an error: 'list' object has no attribute 'shape'
I love the workflow style of Keras, so I decided to make some new metrics in my own project. I want metrics that are more general, like "accuracy". So when I ran some tests like the above, I came across that the loss see... | null | 2023-12-20 14:15:26+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
WORKDIR /testbed
# Install git and build essentials for potential dependencies
RUN apt-get update && apt-get install -y git build-essential python3-dev
# Copy the repository contents
COPY . .
# Install JAX and other required dependencies
RUN pip install --upgrade ... | ['keras/trainers/compile_utils_test.py:TestCompileLoss:test_list_loss_dict_data', 'keras/trainers/compile_utils_test.py:TestCompileLoss:test_single_output_case', 'keras/trainers/compile_utils_test.py:TestCompileMetrics:test_custom_metric_function', 'keras/trainers/compile_utils_test.py:TestCompileMetrics:test_name_conv... | ['keras/trainers/compile_utils_test.py:TestCompileLoss:test_single_output_case_with_crossentropy_loss'] | null | pytest /testbed/keras/trainers/compile_utils_test.py -v --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/trainers/compile_utils.py->module->class_definition:CompileLoss->function_definition:build"] |
keras-team/keras | 19,190 | keras-team__keras-19190 | ['19180'] | 436937dea3d52eecff3cb6f1bd5161f23c825fae | diff --git a/keras/layers/preprocessing/text_vectorization.py b/keras/layers/preprocessing/text_vectorization.py
--- a/keras/layers/preprocessing/text_vectorization.py
+++ b/keras/layers/preprocessing/text_vectorization.py
@@ -492,6 +492,10 @@ def from_config(cls, config):
config["split"] = serialization_l... | diff --git a/keras/layers/preprocessing/text_vectorization_test.py b/keras/layers/preprocessing/text_vectorization_test.py
--- a/keras/layers/preprocessing/text_vectorization_test.py
+++ b/keras/layers/preprocessing/text_vectorization_test.py
@@ -1,11 +1,15 @@
+import os
+
import numpy as np
import pytest
import ten... | `ValueError`: `ngrams` when loading a model with a `TextVectorization` layer
### Describe a bug
Loading a model that contains a `TextVectorization` layer with `ngram` set to a tuple results in a `ValueError`.
### Code to Reproduce
```python
import numpy as np
import tensorflow as tf
from tensorflow import k... | null | 2024-02-16 15:30:56+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-bullseye
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
COPY . .
RUN apt-get update && apt-get install -y \
build-essential \
libssl-dev \
libffi-dev \
python3-dev \
gfortran \
libopenblas-dev \
... | ['keras/layers/preprocessing/text_vectorization_test.py:TextVectorizationTest:test_set_vocabulary', 'keras/layers/preprocessing/text_vectorization_test.py:TextVectorizationTest:test_ragged_tensor_output_length', 'keras/layers/preprocessing/text_vectorization_test.py:TextVectorizationTest:test_fixed_vocabulary', 'keras/... | ['keras/layers/preprocessing/text_vectorization_test.py:TextVectorizationTest:test_save_load_with_ngrams_flow'] | null | pytest /testbed/keras/layers/preprocessing/text_vectorization_test.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/layers/preprocessing/text_vectorization.py->module->class_definition:TextVectorization->function_definition:from_config"] |
keras-team/keras | 19,201 | keras-team__keras-19201 | ['19199'] | ec67b760ba25e1ccc392d288f7d8c6e9e153eea2 | diff --git a/keras/backend/jax/distribution_lib.py b/keras/backend/jax/distribution_lib.py
--- a/keras/backend/jax/distribution_lib.py
+++ b/keras/backend/jax/distribution_lib.py
@@ -200,12 +200,12 @@ def initialize(job_addresses, num_processes, process_id):
f"{len(job_addresses)} jobs, but num_process... | diff --git a/keras/backend/jax/distribution_lib_test.py b/keras/backend/jax/distribution_lib_test.py
--- a/keras/backend/jax/distribution_lib_test.py
+++ b/keras/backend/jax/distribution_lib_test.py
@@ -50,7 +50,7 @@ def test_device_conversion(self):
def test_initialize_with_all_job_addresses(self, mock_jax_initia... | Typo in keras.distribution.initialize()
Hi,
Calling `keras.distribution.initialize` fails due to a typo in the jax backend: the function passes the `corrdinator_address` argument instead of `coordinator_address` to `jax.distributed.initialize`
```log
---> 13 keras.distribution.initialize()
File /... | null | 2024-02-19 18:18:24+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
build-essential \
python3-dev \
&& rm -rf /var/lib/apt/lists/*
# Copy the project files... | ['keras/backend/jax/distribution_lib_test.py:JaxDistributionLibTest:test_processes', 'keras/backend/jax/distribution_lib_test.py:JaxDistributionLibTest:test_distribute_tensor', 'keras/backend/jax/distribution_lib_test.py:JaxDistributionLibTest:test_distribute_variable', 'keras/backend/jax/distribution_lib_test.py:JaxDi... | ['keras/backend/jax/distribution_lib_test.py:JaxDistributionLibTest:test_initialize_with_all_job_addresses', 'keras/backend/jax/distribution_lib_test.py:JaxDistributionLibTest:test_initialize_with_coordinater_address'] | null | python -m pytest /testbed/keras/backend/jax/distribution_lib_test.py -v --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/backend/jax/distribution_lib.py->module->function_definition:initialize"] |
keras-team/keras | 19,459 | keras-team__keras-19459 | ['19437'] | 68e0368c680decbc7c9e1da57b56b3a8212b3ec2 | diff --git a/keras/backend/numpy/random.py b/keras/backend/numpy/random.py
--- a/keras/backend/numpy/random.py
+++ b/keras/backend/numpy/random.py
@@ -67,6 +67,7 @@ def truncated_normal(shape, mean=0.0, stddev=1.0, dtype=None, seed=None):
def dropout(inputs, rate, noise_shape=None, seed=None):
+ dtype = inputs.... | diff --git a/keras/layers/regularization/alpha_dropout_test.py b/keras/layers/regularization/alpha_dropout_test.py
--- a/keras/layers/regularization/alpha_dropout_test.py
+++ b/keras/layers/regularization/alpha_dropout_test.py
@@ -15,6 +15,7 @@ def test_alpha_dropout_basics(self):
"rate": 0.2,
... | Keras with TF backend GaussianDropout gives error with mixed_bfloat16
When using Keras with 3.1.1 with Tensorflow 2.16.1 backend, using GaussianDropout layer with mixed_bfloat16 results in the following error message:
```
TypeError: Exception encountered when calling GaussianDropout.call().
Input 'y' of 'Mul' Op h... | BTW, I can see that Keras 2.15 uses `dtype=inputs.dtype` when calling the `self._random_generator.random_normal` function.
Another addition: the Keras 3 documentation suggests setting the mixed policy with the following line:
`tf.keras.config.set_dtype_policy('mixed_bfloat16')`
instead of the one I supplied above. Still same error. | 2024-04-08 07:27:18+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install dependencies and the package itself
RUN pip install -e . && \
pip install pytest pytest-json-report && \
pip instal... | ['keras/random/random_test.py:RandomDTypeTest:test_normal_float64', 'keras/random/random_test.py:RandomDTypeTest:test_categorical_int8', 'keras/random/random_test.py:RandomDTypeTest:test_randint_uint8', 'keras/random/random_test.py:RandomTest:test_truncated_normal1', 'keras/random/random_test.py:RandomTest:test_shuffle... | ['keras/random/random_test.py:RandomDTypeTest:test_binomial_bfloat16', 'keras/layers/regularization/gaussian_dropout_test.py:GaussianDropoutTest:test_gaussian_dropout_basics', 'keras/random/random_test.py:RandomDTypeTest:test_gamma_bfloat16', 'keras/random/random_test.py:RandomDTypeTest:test_beta_bfloat16', 'keras/laye... | null | python -m pytest /testbed/keras/layers/regularization/alpha_dropout_test.py /testbed/keras/layers/regularization/dropout_test.py /testbed/keras/layers/regularization/gaussian_dropout_test.py /testbed/keras/layers/regularization/gaussian_noise_test.py /testbed/keras/random/random_test.py -v --json-report | Bug Fix | false | true | false | false | 7 | 0 | 7 | false | false | ["keras/layers/regularization/gaussian_noise.py->module->class_definition:GaussianNoise->function_definition:call", "keras/backend/tensorflow/random.py->module->function_definition:gamma", "keras/backend/tensorflow/random.py->module->function_definition:binomial", "keras/backend/numpy/random.py->module->function_defini... |
keras-team/keras | 19,466 | keras-team__keras-19466 | ['19407'] | 504716cb71973d4d4e485eb1724a3c4d3b621a69 | diff --git a/keras/ops/numpy.py b/keras/ops/numpy.py
--- a/keras/ops/numpy.py
+++ b/keras/ops/numpy.py
@@ -3992,6 +3992,9 @@ class Nonzero(Operation):
def call(self, x):
return backend.numpy.nonzero(x)
+ def compute_output_spec(self, x):
+ return KerasTensor([None] * len(x.shape))
+
@keras_... | diff --git a/keras/ops/numpy_test.py b/keras/ops/numpy_test.py
--- a/keras/ops/numpy_test.py
+++ b/keras/ops/numpy_test.py
@@ -1311,6 +1311,10 @@ def test_ndim(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.ndim(x).shape, (2,))
+ def test_nonzero(self):
+ x = KerasTensor((None, 5, ... | Numpy Ops function nonzero(x) appears to be missing a check for symbolic tensors
In updating code from Keras 2 to 3, we noticed that the nonzero function continues to throw errors for use of KerasTensor in TF functions, even when run through tf.keras.ops
Digging into the source, it appears that this function does not recei... | null | 2024-04-09 17:23:58+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install dependencies and the package itself
RUN pip install -e . && \
pip install pytest pytest-json-report && \
pip instal... | ['keras/ops/numpy_test.py:NumpyTwoInputOpsCorretnessTest:test_take_sparse_axis_0_float64', 'keras/ops/numpy_test.py:NumpyOneInputOpsDynamicShapeTest:test_transpose', 'keras/ops/numpy_test.py:NumpyTwoInputOpsStaticShapeTest:test_less_equal', 'keras/ops/numpy_test.py:NumpyDtypeTest:test_prod_none', 'keras/ops/numpy_test.... | ['keras/ops/numpy_test.py:NumpyOneInputOpsDynamicShapeTest:test_nonzero'] | null | python -m pytest /testbed/keras/ops/numpy_test.py -v --json-report | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["keras/ops/numpy.py->module->function_definition:nonzero", "keras/ops/numpy.py->module->class_definition:Nonzero->function_definition:compute_output_spec"] |
keras-team/keras | 19,484 | keras-team__keras-19484 | ['19411'] | 6a9bc4c051f0e4ee5e4ff48f08fd14230036dc46 | diff --git a/keras/optimizers/base_optimizer.py b/keras/optimizers/base_optimizer.py
--- a/keras/optimizers/base_optimizer.py
+++ b/keras/optimizers/base_optimizer.py
@@ -567,7 +567,7 @@ def _get_current_learning_rate(self):
):
return self._learning_rate(self.iterations)
elif callable(sel... | diff --git a/keras/optimizers/optimizer_test.py b/keras/optimizers/optimizer_test.py
--- a/keras/optimizers/optimizer_test.py
+++ b/keras/optimizers/optimizer_test.py
@@ -243,3 +243,12 @@ def test_tf_checkpointing(self):
checkpoint.restore(save_path)
pred = model.predict(x)
self.assertAllClos... | keras adamw optimizer failed with callable parameters in TensorFlow2.16
When we were working on upgrading keras 2 to keras 3 in the TensorFlow plugin, one of our AdamW-related unit tests failed: a sub-test that uses a callable lambda as the learning_rate argument. We also found this unit test failed in TensorFlow2.16 official... | https://github.com/keras-team/keras/blob/6c591d7d34c3ffaa50e805fd75c83d9c2a23414f/keras/optimizers/base_optimizer.py#L560
Here is the root cause. If learning_rate is a callable object, then it doesn't need any arguments.
I might give this one a stab if no one picks it up.
@kapoor1992, you can create a PR
@sachinpras... | 2024-04-10 22:45:57+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install dependencies and the package itself
RUN pip install -e . && \
pip install pytest pytest-json-report && \
pip instal... | ['keras/optimizers/optimizer_test.py:OptimizerTest:test_set_weights', 'keras/optimizers/optimizer_test.py:OptimizerTest:test_ema', 'keras/optimizers/optimizer_test.py:OptimizerTest:test_get_method', 'keras/optimizers/optimizer_test.py:OptimizerTest:test_clip_args', 'keras/optimizers/optimizer_test.py:OptimizerTest:test... | ['keras/optimizers/optimizer_test.py:OptimizerTest:test_callable_learning_rate'] | null | python -m pytest /testbed/keras/optimizers/optimizer_test.py -v --json-report | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/optimizers/base_optimizer.py->module->class_definition:BaseOptimizer->function_definition:_get_current_learning_rate"] |
keras-team/keras | 19,636 | keras-team__keras-19636 | ['19629'] | 880f0cdd67591474d8ed98a6b192655322b7ecfc | diff --git a/keras/src/dtype_policies/dtype_policy.py b/keras/src/dtype_policies/dtype_policy.py
--- a/keras/src/dtype_policies/dtype_policy.py
+++ b/keras/src/dtype_policies/dtype_policy.py
@@ -1,5 +1,4 @@
from keras.src import backend
-from keras.src import ops
from keras.src.api_export import keras_export
from ke... | diff --git a/keras/src/layers/layer_test.py b/keras/src/layers/layer_test.py
--- a/keras/src/layers/layer_test.py
+++ b/keras/src/layers/layer_test.py
@@ -437,13 +437,13 @@ def test_mixed_precision(self):
y = layer(x)
self.assertEqual(layer.compute_dtype, "float16")
self.assertEqual(layer.var... | keras autocast casts numpy int types to float
In keras 2 I was using model input tuples with mixed types (some float and some int). This worked nicely with all policies. In keras 3, if numpy arrays are used as input, np.int32 will be converted into tf.float32 or tf.float16 (depending on policy).
See her... | The expected behavior is that all inputs should be autocasted to `self.input_dtype`, which is what's happening here.
You could just set `input_dtype` to be what you want.
Alternatively, you can make a layer/model that does not cast/convert its inputs at all, by setting `self._convert_input_args = False`. You will... | 2024-04-29 02:11:03+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install dependencies and the package itself
RUN pip install -e . && \
pip install pytest pytest-json-report && \
pip instal... | ['keras/src/layers/layer_test.py:LayerTest:test_training_arg_value_resolution', 'keras/src/layers/layer_test.py:LayerTest:test_rng_seed_tracking', 'keras/src/layers/layer_test.py:LayerTest:test_add_loss', 'keras/src/layers/layer_test.py:LayerTest:test_trainable_setting', 'keras/src/layers/layer_test.py:LayerTest:test_r... | ['keras/src/layers/layer_test.py:LayerTest:test_autocast_with_np_array'] | null | python -m pytest /testbed/keras/src/layers/layer_test.py /testbed/keras/src/layers/normalization/spectral_normalization_test.py /testbed/keras/src/testing/test_case.py -v --json-report | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["keras/src/dtype_policies/dtype_policy.py->module->class_definition:DTypePolicy->function_definition:_should_cast", "keras/src/dtype_policies/dtype_policy.py->module->class_definition:DTypePolicy->function_definition:convert_input"] |
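The behavior pinned down by the failing test above (`test_autocast_with_np_array`) can be sketched in isolation. This is a hypothetical reconstruction of the casting rule, not the actual `DTypePolicy.convert_input` code: only floating-point NumPy inputs are cast to the compute dtype, while integer inputs keep their type.

```python
import numpy as np

def maybe_autocast(x, compute_dtype="float16"):
    # Cast float inputs to the compute dtype; leave integer inputs
    # alone so an np.int32 index array is not silently turned into
    # float16, which is the bug described in the issue.
    x = np.asarray(x)
    if np.issubdtype(x.dtype, np.floating):
        return x.astype(compute_dtype)
    return x

assert maybe_autocast(np.zeros(3, dtype=np.float32)).dtype == np.float16
assert maybe_autocast(np.zeros(3, dtype=np.int32)).dtype == np.int32
```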
keras-team/keras | 19,641 | keras-team__keras-19641 | ['19591'] | 9f4da5159a098256dfbccd2c926107953a6812e5 | diff --git a/keras/src/backend/tensorflow/nn.py b/keras/src/backend/tensorflow/nn.py
--- a/keras/src/backend/tensorflow/nn.py
+++ b/keras/src/backend/tensorflow/nn.py
@@ -252,6 +252,12 @@ def _conv_xla():
# If kernel's in_channel does not match input's channels, it indicates
# convolution is broken d... | diff --git a/keras/src/ops/nn_test.py b/keras/src/ops/nn_test.py
--- a/keras/src/ops/nn_test.py
+++ b/keras/src/ops/nn_test.py
@@ -1445,23 +1445,29 @@ def test_conv_2d_group_2(self, strides, dilation_rate):
)
self.assertAllClose(outputs, expected)
- @parameterized.product(strides=(1, (1, 1, 1), 2... | Conv3D crash when the data_format is 'channels_first' and using Tensorflow backend
According to the [documentation](https://keras.io/api/layers/convolution_layers/convolution3d/) for Conv3D on the Keras website, Conv3D should accept inputs with data format 'channels_first' or 'channels_last'.
While in this [colab](https://colab... | According to the error message, the lack of support is only on CPU -- GPU should work fine. There's no CPU kernel for channels_first Conv3D. We can't fix that on the Keras side except by doing a transpose/counter-transpose in that case, which would be very inefficient.
Got it. I'll try it on GPU.
@fchollet
Sorry for ... | 2024-04-30 00:14:46+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install dependencies and the package itself
RUN pip install -e . && \
pip install pytest pytest-json-report && \
pip instal... | ['keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_depthwise_conv_2d2', 'keras/src/ops/nn_test.py:NNOpsDynamicShapeTest:test_log_sigmoid', 'keras/src/ops/nn_test.py:NNOpsDtypeTest:test_sigmoid_bfloat16', 'keras/src/ops/nn_test.py:NNOpsDynamicShapeTest:test_average_pool', 'keras/src/ops/nn_test.py:NNOpsCorrectnessTest... | ['keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_conv_3d2', 'keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_conv_3d4', 'keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_conv_3d8', 'keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_conv_3d10', 'keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_conv_3d6', 'ke... | null | python -m pytest /testbed/keras/src/ops/nn_test.py -v --json-report | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/src/backend/tensorflow/nn.py->module->function_definition:conv"] |
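The transpose/counter-transpose workaround mentioned in the discussion above (for backends whose CPU kernels only support channels_last) amounts to a pair of axis permutations around the convolution. A sketch of that data movement alone for 3-D convolution inputs; this illustrates the idea, not the Keras `conv` code:

```python
import numpy as np

def ncdhw_to_ndhwc(x):
    # (N, C, D, H, W) -> (N, D, H, W, C): move channels last before the op.
    return np.transpose(x, (0, 2, 3, 4, 1))

def ndhwc_to_ncdhw(x):
    # (N, D, H, W, C) -> (N, C, D, H, W): restore channels_first afterwards.
    return np.transpose(x, (0, 4, 1, 2, 3))

x = np.zeros((2, 3, 8, 9, 10))
y = ncdhw_to_ndhwc(x)
assert y.shape == (2, 8, 9, 10, 3)
assert ndhwc_to_ncdhw(y).shape == x.shape
```

As the comment notes, the round-trip is lossless but costs two full tensor copies, which is why it is called "very inefficient" above.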
keras-team/keras | 19,773 | keras-team__keras-19773 | ['19770'] | a243d91e43b4c43fe8d184b541b608b6ddd80f71 | diff --git a/keras/src/layers/preprocessing/string_lookup.py b/keras/src/layers/preprocessing/string_lookup.py
--- a/keras/src/layers/preprocessing/string_lookup.py
+++ b/keras/src/layers/preprocessing/string_lookup.py
@@ -316,6 +316,7 @@ def __init__(
raise ValueError(
"`sparse=True` can ... | diff --git a/keras/src/layers/preprocessing/string_lookup_test.py b/keras/src/layers/preprocessing/string_lookup_test.py
--- a/keras/src/layers/preprocessing/string_lookup_test.py
+++ b/keras/src/layers/preprocessing/string_lookup_test.py
@@ -5,6 +5,7 @@
from keras.src import backend
from keras.src import layers
fro... | [BUG] keras.layers.StringLookup and Vocabulary of Tensors
There is a bug in keras.layers.StringLookup when initializing it with a vocabulary of tensors.
```
import tensorflow as tf
vocab = ["a", "b", "c", "d"]
data = [["a", "c", "d"], ["d", "z", "b"]]
layer = tf.keras.layers.StringLookup(vocabulary=tf.convert_... | Hi @rlcauvin ,
Thanks for the report. I have reproduced the issue with Keras 3 and TF 2.15 as well. Tested with TF 2.12 and it works well. [Gist](https://colab.sandbox.google.com/gist/SuryanarayanaY/9b18cf4427067c71060aa3adfcf03873/19770.ipynb)
The root cause you pointed out seems to be the proper solution. In **TF2.12v**, I can... | 2024-05-29 06:29:26+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install dependencies and the package itself
RUN pip install -e . && \
pip install pytest pytest-json-report && \
pip instal... | ['keras/src/layers/preprocessing/string_lookup_test.py:StringLookupTest:test_set_vocabulary', 'keras/src/layers/preprocessing/string_lookup_test.py:StringLookupTest:test_config', 'keras/src/layers/preprocessing/string_lookup_test.py:StringLookupTest:test_tf_data_compatibility', 'keras/src/layers/preprocessing/string_lo... | ['keras/src/layers/preprocessing/string_lookup_test.py:StringLookupTest:test_tensor_as_vocab'] | null | python -m pytest /testbed/keras/src/layers/preprocessing/string_lookup_test.py -v --json-report | Bug Fix | false | false | true | false | 0 | 1 | 1 | false | true | ["keras/src/layers/preprocessing/string_lookup.py->module->class_definition:StringLookup->function_definition:__init__"] |
keras-team/keras | 19,775 | keras-team__keras-19775 | ['19772'] | a243d91e43b4c43fe8d184b541b608b6ddd80f71 | diff --git a/keras/src/backend/tensorflow/numpy.py b/keras/src/backend/tensorflow/numpy.py
--- a/keras/src/backend/tensorflow/numpy.py
+++ b/keras/src/backend/tensorflow/numpy.py
@@ -1310,6 +1310,10 @@ def less_equal(x1, x2):
def linspace(
start, stop, num=50, endpoint=True, retstep=False, dtype=None, axis=0
):
... | diff --git a/keras/src/ops/numpy_test.py b/keras/src/ops/numpy_test.py
--- a/keras/src/ops/numpy_test.py
+++ b/keras/src/ops/numpy_test.py
@@ -2488,17 +2488,13 @@ def test_linspace(self):
np.linspace(start, stop, 5, retstep=True)[0],
)
self.assertAllClose(
- backend.convert_to_... | ops.linspace broken in Tensorflow when num is a tf.Tensor
When using ops.linspace with the TensorFlow backend, if the `num` argument is a tf.Tensor, the code will break here:
https://github.com/keras-team/keras/blob/a243d91e43b4c43fe8d184b541b608b6ddd80f71/keras/src/backend/tensorflow/numpy.py#L1332
Because `start` and ... | Hi @gustavoeb ,
Thanks for the report. I have reproduced the issue and attached [gist](https://colab.sandbox.google.com/gist/SuryanarayanaY/4bab4d097a48b487f32c28a1e89a2d9f/19772.ipynb) here. The Op `linspace` is breaking when the value of `num` is `int` or `float` | 2024-05-29 09:55:28+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install dependencies and the package itself
RUN pip install -e . && \
pip install pytest pytest-json-report && \
pip instal... | ['keras/src/ops/numpy_test.py:NumpyDtypeTest:test_expand_dims_float32', 'keras/src/ops/numpy_test.py:NumpyOneInputOpsCorrectnessTest:test_all', 'keras/src/ops/numpy_test.py:NumpyDtypeTest:test_expm1_float64', 'keras/src/ops/numpy_test.py:SparseTest:test_binary_correctness_sparse_tensor_multiply_sparse_dense_float32', '... | ['keras/src/ops/numpy_test.py:NumpyTwoInputOpsCorretnessTest:test_linspace'] | null | python -m pytest /testbed/keras/src/ops/numpy_test.py -v --json-report | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/src/backend/tensorflow/numpy.py->module->function_definition:linspace"] |
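The issue above says the break happens where the step between samples is computed from `num`. One way to make that computation robust, sketched here purely as an illustration under the assumption that `num` can arrive as an int, a float, or a 0-d tensor exposing `__int__` (this is not the actual `linspace` fix in the TensorFlow backend):

```python
def linspace_step(start, stop, num, endpoint=True):
    # Normalize `num` to a plain Python int first, so the arithmetic
    # below works regardless of how `num` arrived.
    num = int(num)
    div = (num - 1) if endpoint else num
    return (stop - start) / div

assert linspace_step(0.0, 1.0, 5) == 0.25
assert linspace_step(0.0, 1.0, 4, endpoint=False) == 0.25
```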
keras-team/keras | 19,826 | keras-team__keras-19826 | ['19821'] | 2305fada8889e86463493bb4893b13ee8a8f0573 | diff --git a/keras/src/ops/numpy.py b/keras/src/ops/numpy.py
--- a/keras/src/ops/numpy.py
+++ b/keras/src/ops/numpy.py
@@ -4345,26 +4345,44 @@ def call(self, x):
def compute_output_spec(self, x):
x_shape = list(x.shape)
+ repeats = self.repeats
+ if isinstance(repeats, int):
+ r... | diff --git a/keras/src/ops/numpy_test.py b/keras/src/ops/numpy_test.py
--- a/keras/src/ops/numpy_test.py
+++ b/keras/src/ops/numpy_test.py
@@ -1364,7 +1364,7 @@ def test_repeat(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.repeat(x, 2).shape, (None,))
        self.assertEqual(knp.repeat(x, 3... | `keras.ops.repeat` cannot return the expected shape when `x` is a `KerasTensor` and the `axis` is `None`
Hello. Thank you for your contributions and maintenance for the best Keras.
I'm following the instructions of [Conditional GAN (code samples, uses Keras 3)](https://keras.io/examples/generative/conditional_gan/)... | I can look into this and report my findings in a few hours
This is due to an oversight caused by the different ways Keras and other backends handle the `repeats` parameter.
You can submit a PR after you solve it.
Edited: [Was confused about the expected dimensions of the output but I found the mistake in my logic] | 2024-06-10 15:05:53+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install dependencies and the package itself
RUN pip install -e . && \
pip install pytest pytest-json-report && \
pip instal... | ['keras/src/ops/numpy_test.py:NumpyDtypeTest:test_expand_dims_float32', 'keras/src/ops/numpy_test.py:NumpyOneInputOpsCorrectnessTest:test_all', 'keras/src/ops/numpy_test.py:NumpyDtypeTest:test_expm1_float64', 'keras/src/ops/numpy_test.py:SparseTest:test_binary_correctness_sparse_tensor_multiply_sparse_dense_float32', '... | ['keras/src/ops/numpy_test.py:NumpyOneInputOpsStaticShapeTest:test_repeat', 'keras/src/ops/numpy_test.py:NumpyOneInputOpsDynamicShapeTest:test_repeat'] | null | python -m pytest /testbed/keras/src/ops/numpy_test.py -v | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/src/ops/numpy.py->module->class_definition:Repeat->function_definition:compute_output_spec"] |
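The updated test above asserts `knp.repeat(x, 2).shape == (None,)` for `x` of shape `(None, 3)`. That shape rule can be stated independently of Keras; this helper is hypothetical, not the actual `Repeat.compute_output_spec` implementation:

```python
import math

def repeat_output_shape(x_shape, repeats, axis=None):
    # With axis=None the input is flattened first, so the result is 1-D:
    # its length is prod(x_shape) * repeats, or unknown if any dim is None.
    if axis is None:
        if any(d is None for d in x_shape):
            return (None,)
        return (math.prod(x_shape) * repeats,)
    out = list(x_shape)
    if out[axis] is not None:
        out[axis] *= repeats
    return tuple(out)

assert repeat_output_shape((None, 3), 2) == (None,)
assert repeat_output_shape((2, 3), 2) == (12,)
assert repeat_output_shape((None, 3), 2, axis=1) == (None, 6)
```

The `axis=None` branch is the one the issue is about: a single unknown dimension makes the whole flattened length unknown.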
keras-team/keras | 19,838 | keras-team__keras-19838 | ['19825'] | 26abe697a8802de40cb2761fc98b843fe1b2d5f6 | diff --git a/keras/src/losses/losses.py b/keras/src/losses/losses.py
--- a/keras/src/losses/losses.py
+++ b/keras/src/losses/losses.py
@@ -1711,6 +1711,9 @@ def sparse_categorical_crossentropy(
array([0.0513, 2.303], dtype=float32)
"""
+ if len(y_true.shape) == len(y_pred.shape) and y_true.shape[-1] == 1... | diff --git a/keras/src/losses/losses_test.py b/keras/src/losses/losses_test.py
--- a/keras/src/losses/losses_test.py
+++ b/keras/src/losses/losses_test.py
@@ -1055,7 +1055,7 @@ def test_no_reduction(self):
from_logits=True, reduction=None
)
loss = cce_obj(y_true, logits)
- self.ass... | sparse_categorical_crossentropy with ignore_class fails for 4D inputs
Using `ignore_class` with `keras.losses.sparse_categorical_crossentropy` and 4D inputs (Batch x Height x Width x Class) fails with a ValueError indicating wrong shapes.
Minimal example to reproduce:
```
import numpy as np
import tensorflow as t... | > y_true = np.zeros((1, 224, 224, 1))
=> `y_true = np.zeros((1, 224, 224))`
Shouldn't `y_true` have one dimension less than `y_pred`?
Oh, you are right, with `y_true = np.zeros((1, 224, 224))` it seems to work...
However, when omitting `ignore_class` from `sparse_categorical_crossentropy`, `y_true = np.zeros((1... | 2024-06-11 16:45:49+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Copy repository contents
COPY . .
# In... | ['keras/src/losses/losses_test.py:CategoricalFocalCrossentropyTest:test_label_smoothing', 'keras/src/losses/losses_test.py:SparseCategoricalCrossentropyTest:test_unweighted', 'keras/src/losses/losses_test.py:MeanAbsoluteErrorTest:test_zero_weighted', 'keras/src/losses/losses_test.py:CategoricalCrossentropyTest:test_con... | ['keras/src/losses/losses_test.py:SparseCategoricalCrossentropyTest:test_ignore_class'] | null | pytest /testbed/keras/src/losses/losses_test.py -v --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/src/losses/losses.py->module->function_definition:sparse_categorical_crossentropy"] |
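The patch's guard shown above (`len(y_true.shape) == len(y_pred.shape) and y_true.shape[-1] == 1`) amounts to squeezing a trailing singleton class axis off the labels before the sparse loss sees them. A standalone NumPy sketch of that normalization, not the actual loss code:

```python
import numpy as np

def squeeze_sparse_labels(y_true, y_pred):
    # If y_true has the same rank as y_pred with a trailing singleton
    # axis, drop that axis so it matches the rank sparse CE expects.
    if y_true.ndim == y_pred.ndim and y_true.shape[-1] == 1:
        return np.squeeze(y_true, axis=-1)
    return y_true

y_true = np.zeros((1, 224, 224, 1))   # labels with a trailing class axis
y_pred = np.zeros((1, 224, 224, 21))  # per-pixel logits over 21 classes
assert squeeze_sparse_labels(y_true, y_pred).shape == (1, 224, 224)
# Already-squeezed labels pass through unchanged.
assert squeeze_sparse_labels(np.zeros((1, 224, 224)), y_pred).shape == (1, 224, 224)
```

This is why both reporter shapes in the discussion work after the fix.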
keras-team/keras | 19,844 | keras-team__keras-19844 | ['19828'] | 1c60668f6bdd05dab619806e7b2dc25d3ed4ccbf | diff --git a/keras/src/initializers/__init__.py b/keras/src/initializers/__init__.py
--- a/keras/src/initializers/__init__.py
+++ b/keras/src/initializers/__init__.py
@@ -49,6 +49,7 @@
"uniform": RandomUniform,
"normal": RandomNormal,
"orthogonal": OrthogonalInitializer,
+ "Orthogonal"... | diff --git a/keras/src/initializers/random_initializers_test.py b/keras/src/initializers/random_initializers_test.py
--- a/keras/src/initializers/random_initializers_test.py
+++ b/keras/src/initializers/random_initializers_test.py
@@ -147,6 +147,10 @@ def test_orthogonal_initializer(self):
self.run_class_ser... | Keras 3.0 load h5 model with Orthogonal initializer fails
Hi guys,
I'm trying to load an h5 model that was working in earlier versions.
* This is a small part of the h5 file, where you can see (last part of the snippet) a recurrent initializer with a classname of **Orthogonal**.
```
{"name": "decoder_gru0", ... | Hi @mahnehsilla -
Thanks for raising the issue. Can you share the code snippet and the h5 model where you are getting this error? Then I can reproduce it and try to help you with this.
| 2024-06-12 08:33:53+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
WORKDIR /testbed
# Install git and build essentials for potential dependencies
RUN apt-get update && apt-get install -y git build-essential
# Copy the entire repository
COPY . .
# Install tensorflow and other backend dependencies first
RUN pip install tensorflow n... | ['keras/src/layers/rnn/gru_test.py:GRUTest:test_pass_initial_state', 'keras/src/initializers/random_initializers_test.py:InitializersTest:test_variance_scaling', 'keras/src/layers/rnn/gru_test.py:GRUTest:test_statefulness', 'keras/src/initializers/random_initializers_test.py:InitializersTest:test_variance_scaling_inval... | ['keras/src/layers/rnn/gru_test.py:GRUTest:test_legacy_implementation_argument', 'keras/src/initializers/random_initializers_test.py:InitializersTest:test_orthogonal_initializer'] | null | pytest /testbed/keras/src/initializers/random_initializers_test.py /testbed/keras/src/layers/rnn/gru_test.py -v --junitxml=test-results.xml | Bug Fix | false | false | true | false | 0 | 1 | 1 | false | true | ["keras/src/layers/rnn/gru.py->module->class_definition:GRU->function_definition:__init__"] |
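The one-line patch above adds a legacy-cased `"Orthogonal"` alias next to the existing `"orthogonal"` key in the initializer registry, so old H5 configs resolve again. The mechanism can be sketched with a toy registry (everything here is a stand-in except the two key strings shown in the diff):

```python
class OrthogonalInitializer:
    """Stand-in for Keras's orthogonal initializer class."""

REGISTRY = {
    "orthogonal": OrthogonalInitializer,
    "Orthogonal": OrthogonalInitializer,  # legacy name found in old H5 configs
}

def get_initializer(identifier):
    if identifier not in REGISTRY:
        raise ValueError(f"Unknown initializer: {identifier}")
    return REGISTRY[identifier]()

# Both spellings now resolve to the same class.
assert isinstance(get_initializer("Orthogonal"), OrthogonalInitializer)
assert isinstance(get_initializer("orthogonal"), OrthogonalInitializer)
```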
keras-team/keras | 19,863 | keras-team__keras-19863 | ['19535'] | f6cf6a0e77dd504cfc35dd499dd8694b0b80b4ae | diff --git a/keras/src/utils/summary_utils.py b/keras/src/utils/summary_utils.py
--- a/keras/src/utils/summary_utils.py
+++ b/keras/src/utils/summary_utils.py
@@ -76,17 +76,31 @@ def bold_text(x, color=None):
def format_layer_shape(layer):
- if not layer._inbound_nodes:
+ if not layer._inbound_nodes and not ... | diff --git a/keras/src/utils/summary_utils_test.py b/keras/src/utils/summary_utils_test.py
--- a/keras/src/utils/summary_utils_test.py
+++ b/keras/src/utils/summary_utils_test.py
@@ -40,3 +40,37 @@ def print_to_variable(text, line_break=False):
self.assertNotIn("Optimizer params", summary_content)
... | model.summary() broken for custom models subclassed from keras.Model
### Current behavior?
**Custom model classes built from keras.Model do not think they get built properly, and the model.summary() is missing information.** However, the model will run just fine. In keras version 2.15.0, we see it working properly, ... | > the layer does not have a `build()` method implemented and it looks like
it has unbuilt state. This will cause the layer to be marked as built, despite not being actually built, which
may cause failures down the line. Make sure to implement a proper `build()` method.
As indicated by this message, you need to i... | 2024-06-17 09:58:10+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
WORKDIR /testbed
# Install git and build essentials for potential dependencies
RUN apt-get update && apt-get install -y git build-essential
# Copy the entire repository
COPY . .
# Install tensorflow and other backend dependencies first
RUN pip install tensorflow n... | ['keras/src/utils/summary_utils_test.py:SummaryUtilsTest:test_print_model_summary1', 'keras/src/utils/summary_utils_test.py:SummaryUtilsTest:test_print_model_summary0'] | ['keras/src/utils/summary_utils_test.py:SummaryUtilsTest:test_print_model_summary_custom_build'] | null | pytest /testbed/keras/src/utils/summary_utils_test.py -v --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/src/utils/summary_utils.py->module->function_definition:format_layer_shape"] |
keras-team/keras | 19,915 | keras-team__keras-19915 | ['19913'] | f0bae912201bbd265a3485ccf4f490be2fc675c7 | diff --git a/keras/src/export/export_lib.py b/keras/src/export/export_lib.py
--- a/keras/src/export/export_lib.py
+++ b/keras/src/export/export_lib.py
@@ -654,13 +654,18 @@ def make_tensor_spec(structure):
# into plain Python structures because they don't work with jax2tf/JAX.
if isinstance(structure,... | diff --git a/keras/src/export/export_lib_test.py b/keras/src/export/export_lib_test.py
--- a/keras/src/export/export_lib_test.py
+++ b/keras/src/export/export_lib_test.py
@@ -196,6 +196,22 @@ def call(self, inputs):
)
revived_model.serve(bigger_input)
+ # Test with keras.saving_lib
+ t... | Unable to export reloaded model
Saving and reloading a model makes it impossible to export it as a SavedModel artifact.
The reloaded model has shapes defined as lists, while the export function expects tuples.
Casting the shape to tuple in this particular place resolves the issue, but there may be other errors related to this ... | null | 2024-06-25 14:03:04+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
WORKDIR /testbed
# Install git and build essentials for potential dependencies
RUN apt-get update && apt-get install -y git build-essential python3-dev
# Copy the entire repository
COPY . .
# Install JAX with CPU support first (it has specific requirements)
RUN pi... | ['keras/src/export/export_lib_test.py:ExportArchiveTest:test_model_export_method_sequential', 'keras/src/export/export_lib_test.py:ExportArchiveTest:test_model_with_multiple_inputs', 'keras/src/export/export_lib_test.py:ExportArchiveTest:test_multi_input_output_functional_model', 'keras/src/export/export_lib_test.py:Ex... | ['keras/src/export/export_lib_test.py:ExportArchiveTest:test_model_with_input_structure_tuple', 'keras/src/export/export_lib_test.py:ExportArchiveTest:test_model_with_input_structure_array', 'keras/src/export/export_lib_test.py:ExportArchiveTest:test_model_with_input_structure_dict'] | null | pytest /testbed/keras/src/export/export_lib_test.py -v --junitxml=test-results.xml | Bug Fix | true | false | false | false | 0 | 0 | 0 | false | false | ["keras/src/export/export_lib.py->module->function_definition:_get_input_signature->function_definition:make_tensor_spec"] |
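The issue's diagnosis above (reloaded models carry shapes as JSON-deserialized lists, while spec construction expects tuples) reduces to a small normalization step. A hypothetical sketch of that step, not the actual `make_tensor_spec` code:

```python
def normalize_shape(shape):
    # A shape deserialized from JSON arrives as a list such as
    # [None, 3]; downstream spec constructors expect (None, 3).
    if isinstance(shape, list):
        return tuple(shape)
    return shape

assert normalize_shape([None, 3]) == (None, 3)
assert normalize_shape((None, 3)) == (None, 3)
```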
keras-team/keras | 19,924 | keras-team__keras-19924 | ['19921'] | a2e9a5252d2eab389bd19d359e6e7325a8232c79 | diff --git a/keras/src/saving/saving_lib.py b/keras/src/saving/saving_lib.py
--- a/keras/src/saving/saving_lib.py
+++ b/keras/src/saving/saving_lib.py
@@ -160,6 +160,9 @@ def _save_model_to_fileobj(model, fileobj, weights_format):
f.write(config_json.encode())
weights_file_path = None
+ w... | diff --git a/keras/src/saving/saving_lib_test.py b/keras/src/saving/saving_lib_test.py
--- a/keras/src/saving/saving_lib_test.py
+++ b/keras/src/saving/saving_lib_test.py
@@ -634,6 +634,7 @@ def save_own_variables(self, store):
with zipfile.ZipFile(filepath) as zf:
all_filenames = zf.namelist()
... | Bug in Keras 3.4.0: Loading model error 'No such file or directory: 'model.weights.h5'
### Environment:
Ubuntu 22.04
Tensorflow 2.16.1
Keras 3.4.0
### Reproducing steps
(1) Create the following python script `tf-save.py` to generate model file:
```
import os.path
import pandas as pd
import numpy as n... | We have confirmed this issue is not a TensorFlow issue but a bug introduced in Keras 3.4.0
https://github.com/tensorflow/tensorflow/issues/70273#issuecomment-2191371907
Our MLflow CI started to fail yesterday for the same reason (because Keras 3.4.0 was released yesterday)
https://github.com/mlflow-automation/ml... | 2024-06-26 14:50:58+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
WORKDIR /testbed
# Install git and build essentials for potential dependencies
RUN apt-get update && apt-get install -y git build-essential python3-dev
# Copy the entire repository
COPY . .
# Install JAX with CPU support first (it has specific requirements)
RUN pi... | ['keras/src/saving/saving_lib_test.py:SavingBattleTest:test_bidirectional_lstm_saving', 'keras/src/saving/saving_lib_test.py:SavingTest:test_saved_module_paths_and_class_names', 'keras/src/saving/saving_lib_test.py:SavingBattleTest:test_nested_functional_model_saving', 'keras/src/saving/saving_lib_test.py:SavingTest:te... | ['keras/src/saving/saving_lib_test.py:SavingTest:test_save_model_exception_raised'] | null | pytest /testbed/keras/src/saving/saving_lib_test.py -v --junitxml=test-results.xml | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["keras/src/saving/saving_lib.py->module->function_definition:_load_model_from_fileobj", "keras/src/saving/saving_lib.py->module->function_definition:_save_model_to_fileobj"] |
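The failing test above inspects `zf.namelist()` for the weights entry, and the error message names `model.weights.h5`. The general save pattern at stake (write weights to a temporary file, copy it into the zip archive, clean up even on error) can be sketched with the standard library alone; file names mirror the error message, and this is not the real `saving_lib` code:

```python
import os
import tempfile
import zipfile

def save_archive(archive_path, config_bytes, weights_bytes):
    tmp_dir = tempfile.mkdtemp()
    weights_path = os.path.join(tmp_dir, "model.weights.h5")
    try:
        with open(weights_path, "wb") as f:
            f.write(weights_bytes)
        with zipfile.ZipFile(archive_path, "w") as zf:
            zf.writestr("config.json", config_bytes)
            # Copy the temp weights file into the archive under a fixed name.
            zf.write(weights_path, "model.weights.h5")
    finally:
        # Remove the temp file even if archiving failed part-way.
        if os.path.exists(weights_path):
            os.remove(weights_path)
        os.rmdir(tmp_dir)

path = os.path.join(tempfile.mkdtemp(), "model.keras")
save_archive(path, b"{}", b"fake-weights")
with zipfile.ZipFile(path) as zf:
    assert "model.weights.h5" in zf.namelist()
```

The bug report is consistent with the weights entry never making it into the archive, which is exactly what the test's `namelist()` check guards against.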
keras-team/keras | 19,937 | keras-team__keras-19937 | ['19932'] | 309f2c9c8959222e59d537b447c087a65c8b8998 | diff --git a/keras/src/losses/loss.py b/keras/src/losses/loss.py
--- a/keras/src/losses/loss.py
+++ b/keras/src/losses/loss.py
@@ -1,4 +1,5 @@
from keras.src import backend
+from keras.src import dtype_policies
from keras.src import ops
from keras.src import tree
from keras.src.api_export import keras_export
@@ -10... | diff --git a/keras/src/losses/loss_test.py b/keras/src/losses/loss_test.py
--- a/keras/src/losses/loss_test.py
+++ b/keras/src/losses/loss_test.py
@@ -4,6 +4,7 @@
import pytest
from keras.src import backend
+from keras.src import dtype_policies
from keras.src import losses as losses_module
 from keras.src import o... | `unhashable type: 'DTypePolicy'` may lead to problems in keras 3.4.1
Hello. Thank you for your contributions and maintenance for the best Keras.
I'm working on a customized loss and using `keras.DTypePolicy` to configure the dtype in it, as follows:
```python
class MyCustomizedLoss(keras.losses.Loss):
def __... | Hi @Zhaopudark -
Thanks for reporting the issue. I have tested the code snippet and reproduced the reported behaviour. Attached [gist](https://colab.sandbox.google.com/gist/mehtamansi29/62c99255871ca72042fb42c3f3391c5a/19932-unhashable-type-dtypepolicy-may-leads-problems-in-keras-3-4-1.ipynb) file for reference.
We... | 2024-06-29 15:23:58+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
WORKDIR /testbed
# Install git and build essentials for potential dependencies
RUN apt-get update && apt-get install -y git build-essential python3-dev
# Copy the entire repository
COPY . .
# Install JAX with CPU support first (it has specific requirements)
RUN pi... | ['keras/src/losses/loss_test.py:LossTest:test_pickle', 'keras/src/losses/loss_test.py:LossTest:test_mask', 'keras/src/metrics/metric_test.py:MetricTest:test_serialization', 'keras/src/losses/loss_test.py:LossTest:test_get_method', 'keras/src/metrics/metric_test.py:MetricTest:test_pickle', 'keras/src/losses/loss_test.py... | ['keras/src/metrics/metric_test.py:MetricTest:test_dtype_arg', 'keras/src/losses/loss_test.py:LossTest:test_dtype_arg'] | null | pytest /testbed/keras/src/losses/loss_test.py /testbed/keras/src/metrics/metric_test.py -v --junitxml=test-results.xml | Bug Fix | false | false | false | true | 1 | 24 | 25 | false | false | ["keras/src/metrics/metric.py->module->class_definition:Metric->function_definition:__init__", "keras/src/losses/losses.py->module->class_definition:MeanAbsoluteError", "keras/src/losses/losses.py->module->class_definition:Huber", "keras/src/losses/losses.py->module->class_definition:Tversky", "keras/src/losses/losses.... |
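The crash above comes from storing an unhashable `DTypePolicy` object where a plain dtype string is expected (the patch imports `dtype_policies` into `loss.py` for exactly this normalization). The idea can be sketched with a stand-in policy class; the names here are hypothetical and this is not the actual `dtype_policies` API surface:

```python
class FakePolicy:
    """Stand-in for a dtype policy carrying a compute dtype."""
    def __init__(self, name, compute_dtype):
        self.name = name
        self.compute_dtype = compute_dtype

def resolve_loss_dtype(dtype, default="float32"):
    # Always end up with a plain (hashable) dtype string, whether the
    # caller passed None, a string, or a policy-like object.
    if dtype is None:
        return default
    if isinstance(dtype, str):
        return dtype
    return dtype.compute_dtype

assert resolve_loss_dtype(None) == "float32"
assert resolve_loss_dtype("float16") == "float16"
assert resolve_loss_dtype(FakePolicy("mixed_float16", "float16")) == "float16"
```

Since strings are hashable, the downstream lookup that raised `unhashable type: 'DTypePolicy'` no longer sees the policy object itself.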
keras-team/keras | 19,973 | keras-team__keras-19973 | ['19769'] | 10a008fac10e2eb7dd343c128cbf2e0f971fa993 | diff --git a/keras/src/layers/attention/multi_head_attention.py b/keras/src/layers/attention/multi_head_attention.py
--- a/keras/src/layers/attention/multi_head_attention.py
+++ b/keras/src/layers/attention/multi_head_attention.py
@@ -210,6 +210,21 @@ def build(
key: Optional shape of the `key` tensor.
... | diff --git a/keras/src/layers/attention/multi_head_attention_test.py b/keras/src/layers/attention/multi_head_attention_test.py
--- a/keras/src/layers/attention/multi_head_attention_test.py
+++ b/keras/src/layers/attention/multi_head_attention_test.py
@@ -148,6 +148,10 @@ def test_shape_mismatch_error(self, query_shape,... | Inconsistent assertion in keras.layers.MultiHeadAttention
I've noticed that, depending on what is fed as the key, query, and value to keras.layers.MultiHeadAttention, the assertion query_shape==value_shape is only _sometimes_ activated.
Minimal working example (no assertion error):
```
`import os`
`os.environ["K... | null | 2024-07-11 01:00:28+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Copy the entire repository
COPY . .
# ... | ['keras/src/layers/attention/multi_head_attention_test.py:MultiHeadAttentionTest:test_compute_output_shape_without_key_same_proj', 'keras/src/layers/attention/multi_head_attention_test.py:MultiHeadAttentionTest:test_basics', 'keras/src/layers/attention/multi_head_attention_test.py:MultiHeadAttentionTest:test_high_dim_a... | ['keras/src/layers/attention/multi_head_attention_test.py:MultiHeadAttentionTest:test_shape_mismatch_error_key_value_dim_mismatch', 'keras/src/layers/attention/multi_head_attention_test.py:MultiHeadAttentionTest:test_shape_mismatch_error_query_value_dim_mismatch', 'keras/src/layers/attention/multi_head_attention_test.p... | null | python -m pytest /testbed/keras/src/layers/attention/multi_head_attention_test.py -v --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/src/layers/attention/multi_head_attention.py->module->class_definition:MultiHeadAttention->function_definition:build"] |
keras-team/keras | 20,002 | keras-team__keras-20002 | ['19982'] | 576daec845cbc83cebb040e018ba9fdae1902738 | diff --git a/keras/src/models/sequential.py b/keras/src/models/sequential.py
--- a/keras/src/models/sequential.py
+++ b/keras/src/models/sequential.py
@@ -137,6 +137,12 @@ def _maybe_rebuild(self):
if isinstance(self._layers[0], InputLayer) and len(self._layers) > 1:
input_shape = self._layers[0].... | diff --git a/keras/src/models/sequential_test.py b/keras/src/models/sequential_test.py
--- a/keras/src/models/sequential_test.py
+++ b/keras/src/models/sequential_test.py
@@ -150,6 +150,58 @@ def test_basic_flow_as_a_submodel(self):
y = model(x)
self.assertEqual(y.shape, (2, 3, 4))
+ def test_bas... | "ValueError: Undefined shapes are not supported." when calling model.call()
Hello everybody.
I'm having trouble creating a Siamese network class, which extends keras.Model, from a function that returns the same model. My knowledge about [keras.Model](https://keras.io/api/models/model/) isn't good, so I don't know i... | Got the same Error,


Hey @jpeg-souza
you can try the following:
```python
import keras
from keras import ops
def euclidean_di... | 2024-07-17 03:10:57+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Copy the entire repository
COPY . .
# ... | ['keras/src/models/sequential_test.py:SequentialTest:test_compute_output_shape', 'keras/src/models/sequential_test.py:SequentialTest:test_functional_properties', 'keras/src/models/sequential_test.py:SequentialTest:test_legacy_flow_with_input_shape', 'keras/src/models/sequential_test.py:SequentialTest:test_list_inputs',... | ['keras/src/models/sequential_test.py:SequentialTest:test_basic_flow_with_functional_model_as_first_layer', 'keras/src/models/sequential_test.py:SequentialTest:test_basic_flow_with_sequential_model_as_first_layer'] | null | python -m pytest /testbed/keras/src/models/sequential_test.py -v --junitxml=test-results.xml | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["keras/src/utils/summary_utils.py->module->function_definition:format_layer_shape", "keras/src/models/sequential.py->module->class_definition:Sequential->function_definition:_maybe_rebuild"] |
keras-team/keras | 20,008 | keras-team__keras-20008 | ['19991', '19991'] | 0ed820f5649bcb27531d73cfc023763712fc8bf9 | diff --git a/keras/src/backend/tensorflow/nn.py b/keras/src/backend/tensorflow/nn.py
--- a/keras/src/backend/tensorflow/nn.py
+++ b/keras/src/backend/tensorflow/nn.py
@@ -237,28 +237,25 @@ def _conv():
dilations=dilation_rate,
)
- # Reason for making this function is in Tensorflow, `groups > ... | diff --git a/keras/src/ops/nn_test.py b/keras/src/ops/nn_test.py
--- a/keras/src/ops/nn_test.py
+++ b/keras/src/ops/nn_test.py
@@ -1479,6 +1479,19 @@ def test_conv_3d(self, strides, padding, data_format):
)
self.assertAllClose(outputs, expected, rtol=1e-5, atol=1e-5)
+ # Test for tracing erro...
problem_statement:
Regression bug when using 3D convolution with channels_first on GPU
The following code stopped working after release 3.3.3 when running on GPU and using `run_eagerly=False`
```python
import keras
import numpy as np
# 3D input with channels_first
model_input = keras.Input(shape=(1, 10, 10, 10))
# (None, 1, 10,...
```
hints_text:
I'm running on Nvidia driver 550.54.15, CUDA version 12.4 and am using a H100XM-80C GPU
I was able to replicate the issue using Keras 3.4.1 on GPU, attaching the Gist for reference
[](https://colab.sandbox.google.com/gist/sachinprasadhs/5cea3254fc749928420f...
created_at: 2024-07-18 05:28:29+00:00 | language: Python
Dockerfile:
FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Copy the entire repository
COPY . .
# ...
P2P: ['keras/src/ops/nn_test.py:NNOpsDynamicShapeTest:test_multi_hot_dtype_float32_true', 'keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_depthwise_conv_2d2', 'keras/src/ops/nn_test.py:NNOpsDynamicShapeTest:test_log_sigmoid', 'keras/src/ops/nn_test.py:NNOpsDtypeTest:test_sigmoid_bfloat16', 'keras/src/ops/nn_test.py:NNOp...
F2P: ['keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_conv_3d2', 'keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_conv_3d4', 'keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_conv_3d8', 'keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_conv_3d10', 'keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_conv_3d6', 'ke...
F2F: null | test_command: python -m pytest /testbed/keras/src/ops/nn_test.py -v --junitxml=test-results.xml | task_category: Bug Fix
is_no_nodes: false | is_func_only: true | is_class_only: false | is_mixed: false | num_func_changes: 1 | num_class_changes: 0 | num_nodes: 1 | is_single_func: true | is_single_class: false
modified_nodes: ["keras/src/backend/tensorflow/nn.py->module->function_definition:conv"]
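An editorial aside on the row above: the `channels_first` vs `channels_last` distinction at the heart of this conv3d regression is purely an axis ordering. A plain NumPy transpose illustrates the two layouts — background for the bug report, not the fix itself; the shapes are chosen to match the `keras.Input(shape=(1, 10, 10, 10))` reproduction, with a hypothetical batch of 2.

```python
import numpy as np

# channels_first 3-D volume: (batch, channels, depth, height, width).
x_cf = np.arange(2 * 1 * 10 * 10 * 10, dtype=np.float32).reshape(2, 1, 10, 10, 10)

# Equivalent channels_last layout, the default most GPU conv kernels
# are tuned for: (batch, depth, height, width, channels).
x_cl = np.transpose(x_cf, (0, 2, 3, 4, 1))
print(x_cl.shape)  # → (2, 10, 10, 10, 1)

# Round-tripping the permutation recovers the original layout.
x_back = np.transpose(x_cl, (0, 4, 1, 2, 3))
assert (x_back == x_cf).all()
```

Backends that only support one layout on GPU often perform exactly this kind of transpose internally before and after the convolution.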
repo: coder/code-server | pull_number: 3277 | instance_id: coder__code-server-3277 | issue_numbers: ['2985'] | base_commit: f8d8ad38c166d74332fb71adbce5f08c14bfc7e3
patch:
diff --git a/lib/vscode/.eslintignore b/lib/vscode/.eslintignore
--- a/lib/vscode/.eslintignore
+++ b/lib/vscode/.eslintignore
@@ -19,3 +19,4 @@
# These are code-server code symlinks.
src/vs/base/node/proxy_agent.ts
src/vs/ipc.d.ts
+src/vs/server/common/util.ts
diff --git a/lib/vscode/src/vs/server/browser/client.ts...
test_patch:
diff --git a/test/unit/register.test.ts b/test/unit/register.test.ts
--- a/test/unit/register.test.ts
+++ b/test/unit/register.test.ts
@@ -22,11 +22,11 @@ describe("register", () => {
})
beforeEach(() => {
+ jest.clearAllMocks()
jest.mock("@coder/logger", () => loggerModule)
})
after...
problem_statement:
Log out button should not appear if not using password
<!--
Hi there! 👋
Thanks for reporting a bug. Please see https://github.com/cdr/code-server/blob/main/docs/FAQ.md#how-do-i-debug-issues-with-code-server and include any logging information relevant to the issue.
Please search for existing issues before fil... -->
hints_text:
Good call! We didn't consider that when adding the Log out feature. Thanks for bringing this to our attention! Added to an upcoming milestone.
created_at: 2021-05-03 20:26:53+00:00 | language: TypeScript
Dockerfile:
FROM public.ecr.aws/docker/library/node:14
# Update apt sources to use archive.debian.org
RUN sed -i -e 's/deb.debian.org/archive.debian.org/g' \
-e '/security.debian.org/d' \
-e '/updates/d' \
/etc/apt/sources.list
RUN apt-get update && apt-get install -y git build-essential g++ libx11-d...
P2P: ['/testbed/test/unit/health.test.ts->/healthz', '/testbed/test/unit/cli.test.ts->should use existing if --new-window is set', "/testbed/test/unit/serviceWorker.test.ts->should call the proper callbacks for 'activate'", '/testbed/test/unit/register.test.ts->should log an error', '/testbed/test/unit/proxy.test.ts->should...
F2P: ['/testbed/test/unit/register.test.ts->register when navigator and serviceWorker are NOT defined should log an error']
F2F: [] | test_command: yarn test:unit --json --silent | task_category: Bug Fix
is_no_nodes: false | is_func_only: true | is_class_only: false | is_mixed: false | num_func_changes: 4 | num_class_changes: 0 | num_nodes: 4 | is_single_func: false | is_single_class: false
modified_nodes: ["src/browser/register.ts->program->function_declaration:registerServiceWorker", "lib/vscode/src/vs/workbench/browser/parts/titlebar/menubarControl.ts->program->class_declaration:CustomMenubarControl->method_definition:constructor", "src/common/util.ts->program->function_declaration:logError", "lib/vscode/src/vs/workbe...
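An editorial aside on the `jest.clearAllMocks()` line the test patch above adds to `beforeEach`: resetting shared mocks between tests prevents call counts recorded in one test from leaking into the next. Python's `unittest.mock` expresses the same idea with `reset_mock()` — the sketch below is a Python analogy, not the project's own TypeScript code, and `register_service_worker` is a hypothetical stand-in for the code under test.

```python
from unittest.mock import MagicMock

logger = MagicMock()  # stands in for the mocked logger module

def register_service_worker():
    # Minimal stand-in for code under test that logs one error.
    logger.error("[Service Worker] navigator is undefined")

# First "test" observes exactly one logged error.
register_service_worker()
assert logger.error.call_count == 1

# Without a reset, the next "test" would see a stale count of 1;
# reset_mock() plays the role jest.clearAllMocks() plays above.
logger.reset_mock()
assert logger.error.call_count == 0
print("mock state cleared between tests")
```

Putting the reset in per-test setup, as the patch does, keeps each test's assertions about call counts independent of execution order.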