Initialize CI for code quality and testing #256
Changes from all commits: 610a65e, c03d666, 2cb3a34, 4033c0f, 422d26a, 96549f1, 5ac7659, 323672e, 25d3b66, 6e70c20, 726afc6
New workflow file (+29 lines):

```yaml
name: Run code quality checks

on:
  pull_request:
    branches:
      - main
  push:
    branches:
      - main

jobs:
  check_code_quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.7"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install .[quality]
      - name: Check quality
        run: |
          black --check --preview examples tests src utils scripts
          isort --check-only examples tests src utils scripts
          flake8 examples tests src utils scripts
          doc-builder style src/diffusers docs/source --max_len 119 --check_only --path_to_docs docs/source
```
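The `pip install .[quality]` step above installs the package together with its optional `quality` dependency group. As a hedged sketch of how that extras mechanism behaves (the dependency names below are illustrative, not diffusers' actual pins), installing `[quality]` resolves to the base package plus the tools listed under that extra:

```python
# Illustrative model of setuptools "extras": "pip install .[quality]" adds the
# dependencies declared under the "quality" key on top of the base install.
# These names are assumptions for the sketch, not diffusers' real setup.py pins.
EXTRAS = {
    "quality": ["black", "isort", "flake8", "hf-doc-builder"],
    "test": ["pytest", "pytest-xdist", "pytest-timeout"],
}


def deps_for(extras):
    """Return the flat, de-duplicated dependency list for the given extras."""
    seen = []
    for extra in extras:
        for dep in EXTRAS[extra]:
            if dep not in seen:
                seen.append(dep)
    return seen
```

Running the same tools locally before pushing (`black --check --preview …`, `isort --check-only …`, `flake8 …`) reproduces what the CI job checks.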
New workflow file (+40 lines):

```yaml
name: Run non-slow tests

on:
  pull_request:
    branches:
      - main

env:
  HF_HOME: /mnt/cache
  OMP_NUM_THREADS: 8
  MKL_NUM_THREADS: 8
  PYTEST_TIMEOUT: 60

jobs:
  run_tests_cpu:
    name: Diffusers tests
    runs-on: [ self-hosted, docker-gpu ]
    container:
      image: python:3.7
      options: --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface:/mnt/cache/

    steps:
      - name: Checkout diffusers
        uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install torch --extra-index-url https://download.pytorch.org/whl/cpu
          python -m pip install -e .[quality,test]

      - name: Environment
        run: |
          python utils/print_env.py

      - name: Run all non-slow selected tests on CPU
        run: |
          python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile -s tests/
```
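The pytest invocation above uses pytest-xdist: `-n 2` runs two worker processes, and `--dist=loadfile` keeps all tests from the same file on the same worker (useful when tests in a module share expensive fixtures or module-level state). A toy sketch of that scheduling idea, using hypothetical test IDs rather than pytest's real internals:

```python
from collections import defaultdict
from itertools import cycle


def assign_by_file(test_ids, n_workers):
    """Toy model of pytest-xdist's --dist=loadfile scheduling: tests that
    live in the same file always land on the same worker, and whole files
    are dealt out to workers round-robin."""
    groups = defaultdict(list)
    for tid in test_ids:
        # pytest test IDs look like "path/to/test_file.py::test_name"
        fname = tid.split("::", 1)[0]
        groups[fname].append(tid)
    workers = [[] for _ in range(n_workers)]
    for worker, (_fname, tests) in zip(cycle(range(n_workers)), sorted(groups.items())):
        workers[worker].extend(tests)
    return workers
```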
New workflow file (+51 lines):

```yaml
name: Run all tests

on:
  push:
    branches:
      - main

env:
  HF_HOME: /mnt/cache
  OMP_NUM_THREADS: 8
  MKL_NUM_THREADS: 8
  PYTEST_TIMEOUT: 1000
  RUN_SLOW: yes

jobs:
  run_tests_single_gpu:
    name: Diffusers tests
    strategy:
      fail-fast: false
      matrix:
        machine_type: [ single-gpu ]
    runs-on: [ self-hosted, docker-gpu, '${{ matrix.machine_type }}' ]
    container:
      image: nvcr.io/nvidia/pytorch:22.07-py3
      options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface:/mnt/cache/

    steps:
      - name: Checkout diffusers
        uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: NVIDIA-SMI
        run: |
          nvidia-smi

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip uninstall -y torch torchvision torchtext
          python -m pip install torch --extra-index-url https://download.pytorch.org/whl/cu116
          python -m pip install -e .[quality,test]
          python -m pip install scipy transformers

      - name: Environment
        run: |
          python utils/print_env.py

      - name: Run all (incl. slow) tests on GPU
        run: |
          python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile -s tests/
```

Review discussion on `image: nvcr.io/nvidia/pytorch:22.07-py3`:

> Do you know which PyTorch version this is? Couldn't really find it here: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch

> Yeah, PyTorch 1.13 is a nightly build or built from source, so I think PyTorch 1.12 would be the better choice here. In Transformers we automatically update the Docker image, but I think that is a bit too much here for now. Maybe let's just do it manually in the beginning.
This file was deleted.
New file (`utils/print_env.py`, +48 lines) — a small script that dumps information about the CI environment:

```python
#!/usr/bin/env python3
# coding=utf-8
# Copyright 2022 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# this script dumps information about the environment

import os
import platform
import sys


os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"

print("Python version:", sys.version)

print("OS platform:", platform.platform())
print("OS architecture:", platform.machine())

try:
    import torch

    print("Torch version:", torch.__version__)
    print("Cuda available:", torch.cuda.is_available())
    print("Cuda version:", torch.version.cuda)
    print("CuDNN version:", torch.backends.cudnn.version())
    print("Number of GPUs available:", torch.cuda.device_count())
except ImportError:
    print("Torch version:", None)

try:
    import transformers

    print("transformers version:", transformers.__version__)
except ImportError:
    print("transformers version:", None)
```
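The script's try/except-ImportError pattern reports `None` for packages that are not installed instead of crashing. A hedged stdlib-only variant of the same idea, which looks the version up from package metadata without importing the package at all (useful when importing is expensive or has side effects):

```python
# Generic variant of the try/except pattern above: query a distribution's
# version from installed metadata instead of importing the package.
from importlib import metadata


def package_version(name):
    """Return the installed version of `name`, or None if it is absent."""
    try:
        return metadata.version(name)
    except metadata.PackageNotFoundError:
        return None
```

For example, `package_version("transformers")` returns the version string when the package is installed and `None` otherwise, matching what `print_env.py` prints.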