autocast() got an unexpected keyword argument 'cache_enabled' when using trainer.torch_jit_model_eval #35706

Open
Wanguy opened this issue Jan 15, 2025 · 3 comments

Wanguy commented Jan 15, 2025

System Info

  • transformers version: 4.46.3
  • Platform: Linux-4.18.0-147.el8_1.x86_64-x86_64-with-glibc2.10
  • Python version: 3.8.10
  • Huggingface_hub version: 0.26.5
  • Safetensors version: 0.4.5
  • Accelerate version: 1.0.1
  • Accelerate config: not found
  • PyTorch version (GPU?): 2.4.1+cu118 (True)
  • Tensorflow version (GPU?): not installed (NA)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Using distributed or parallel set-up in script?:
  • Using GPU in script?:
  • GPU type: NVIDIA A100-SXM4-80GB

Who can help?

@muellerzr
@SunMarc

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

When using the torch_jit_model_eval() method in Trainer, it fails with:

"failed to use PyTorch jit mode due to: autocast() got an unexpected keyword argument 'cache_enabled'."

Digging into the traceback, the error comes from the call self.accelerator.autocast(cache_enabled=False). The method is defined as def autocast(self, autocast_handler: AutocastKwargs = None), so it accepts no cache_enabled keyword argument.

Is this because the code here has not been updated, or did I miss some setting?

Is there a known workaround?
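
For reference, here is a minimal standalone snippet that reproduces the same TypeError outside of Trainer (a sketch assuming accelerate >= 1.0, where autocast() only takes an autocast_handler argument):

```python
from accelerate import Accelerator

accelerator = Accelerator()

# Mirrors the call made inside trainer.torch_jit_model_eval().
# On accelerate >= 1.0, Accelerator.autocast() only accepts an
# `autocast_handler` argument, so this raises:
#   TypeError: autocast() got an unexpected keyword argument 'cache_enabled'
with accelerator.autocast(cache_enabled=False):
    pass
```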

Expected behavior

torch_jit_model_eval() should run without raising this error.

@Wanguy Wanguy added the bug label Jan 15, 2025
@Wanguy Wanguy changed the title from "When use trainer.torch_jit_model_eval, autocast() got an unexpected keyword argument 'cache_enabled'" to "autocast() got an unexpected keyword argument 'cache_enabled' when using trainer.torch_jit_model_eval" Jan 15, 2025
Wanguy commented Jan 15, 2025

The last Accelerate release whose accelerator.autocast() method supports the cache_enabled parameter is v0.34.2.
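
On newer Accelerate versions the same effect can be expressed through AutocastKwargs, which exposes a cache_enabled field. A sketch of what the corrected call could look like:

```python
from accelerate import Accelerator
from accelerate.utils import AutocastKwargs

accelerator = Accelerator()

# Since accelerate v1.0, cache control is routed through AutocastKwargs
# rather than a keyword argument on autocast() itself.
handler = AutocastKwargs(cache_enabled=False)
with accelerator.autocast(autocast_handler=handler):
    pass  # JIT tracing / evaluation would go here
```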

SunMarc commented Jan 15, 2025

Thanks for the report @Wanguy! This is indeed deprecated, as you can see here. Would you like to submit a PR to fix that? Thanks!

Wanguy commented Jan 15, 2025

@SunMarc Of course, I will submit a PR ASAP to fix this bug. :)

Wanguy added a commit to Wanguy/transformers that referenced this issue Jan 16, 2025
SunMarc pushed a commit that referenced this issue Jan 16, 2025:
Fix the bug that the accelerator.autocast does not pass parameters correctly when calling torch_jit_model_eval (#35706) (#35722)