unofficial mirror of guix-patches@gnu.org
blob cd09312676705cc8ff1aff7052f52bd2cd88bbee 2136 bytes (raw)
name: gnu/packages/patches/python-gfpgan-unfuse-leaky-relu.patch 	 # note: path name is non-authoritative(*)

diff --git a/gfpgan/archs/gfpganv1_arch.py b/gfpgan/archs/gfpganv1_arch.py
index eaf3162..34ae5a2 100644
--- a/gfpgan/archs/gfpganv1_arch.py
+++ b/gfpgan/archs/gfpganv1_arch.py
@@ -3,7 +3,6 @@ import random
 import torch
 from basicsr.archs.stylegan2_arch import (ConvLayer, EqualConv2d, EqualLinear, ResBlock, ScaledLeakyReLU,
                                           StyleGAN2Generator)
-from basicsr.ops.fused_act import FusedLeakyReLU
 from basicsr.utils.registry import ARCH_REGISTRY
 from torch import nn
 from torch.nn import functional as F
@@ -170,10 +169,7 @@ class ConvUpLayer(nn.Module):
 
         # activation
         if activate:
-            if bias:
-                self.activation = FusedLeakyReLU(out_channels)
-            else:
-                self.activation = ScaledLeakyReLU(0.2)
+            self.activation = ScaledLeakyReLU(0.2)
         else:
             self.activation = None
 
diff --git a/gfpgan/archs/stylegan2_bilinear_arch.py b/gfpgan/archs/stylegan2_bilinear_arch.py
index 1342ee3..5cffb44 100644
--- a/gfpgan/archs/stylegan2_bilinear_arch.py
+++ b/gfpgan/archs/stylegan2_bilinear_arch.py
@@ -1,7 +1,6 @@
 import math
 import random
 import torch
-from basicsr.ops.fused_act import FusedLeakyReLU, fused_leaky_relu
 from basicsr.utils.registry import ARCH_REGISTRY
 from torch import nn
 from torch.nn import functional as F
@@ -190,7 +189,7 @@ class StyleConv(nn.Module):
             sample_mode=sample_mode,
             interpolation_mode=interpolation_mode)
         self.weight = nn.Parameter(torch.zeros(1))  # for noise injection
-        self.activate = FusedLeakyReLU(out_channels)
+        self.activate = ScaledLeakyReLU()
 
     def forward(self, x, style, noise=None):
         # modulate
@@ -568,10 +567,7 @@ class ConvLayer(nn.Sequential):
                 and not activate))
         # activation
         if activate:
-            if bias:
-                layers.append(FusedLeakyReLU(out_channels))
-            else:
-                layers.append(ScaledLeakyReLU(0.2))
+            layers.append(ScaledLeakyReLU(0.2))
 
         super(ConvLayer, self).__init__(*layers)
 
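For context (not part of the patch): the change swaps basicsr's CUDA-compiled FusedLeakyReLU for the pure-PyTorch ScaledLeakyReLU, avoiding the custom CUDA extension. The sketch below is a minimal, torch-free illustration of what ScaledLeakyReLU(0.2) computes per element, assuming basicsr's usual definition: a leaky ReLU rescaled by sqrt(2) to roughly preserve signal variance. The fused variant additionally folds a learned per-channel bias into the same kernel; the unfused path relies on the bias being added before activation. The names `scaled_leaky_relu` and `unfused_activation` are illustrative, not basicsr API.

```python
import math

# Illustrative sketch (not the basicsr code): ScaledLeakyReLU(0.2)
# applies a leaky ReLU, then rescales by sqrt(2).
def scaled_leaky_relu(x, negative_slope=0.2):
    out = x if x >= 0 else negative_slope * x
    return out * math.sqrt(2)

# FusedLeakyReLU(out_channels) fuses a learned bias addition into the
# same CUDA kernel; unfused, the equivalent is to add the bias first:
def unfused_activation(x, bias, negative_slope=0.2):
    return scaled_leaky_relu(x + bias, negative_slope)
```

Note a hedged caveat: in the ConvLayer hunk above, the convolution appears to be constructed with `bias=bias and not activate`, i.e. the bias previously lived inside FusedLeakyReLU when the layer was activated. Forcing ScaledLeakyReLU in that branch drops that bias term unless it is restored elsewhere, so the substitution is an approximation rather than an exact equivalence for `bias=True` layers.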

debug log:

found cd09312676 in https://yhetil.org/guix-patches/9f038ba9a340cb10fea00673080c5d67d0a71cb9.camel@gmail.com/

(*) Git path names are given by the tree(s) the blob belongs to.
    Blobs themselves have no identifier aside from the hash of their contents.

Code repositories for project(s) associated with this public inbox

	https://git.savannah.gnu.org/cgit/guix.git
