With Guix on Arch, running Guix's Godot and cool-retro-term programs results in a completely garbled GUI. Godot run with OpenGL ES 2 instead of ES 3 looks a bit better, except the fonts fail to render correctly. cool-retro-term complains that using a variable-width font may cause display alignment issues, although I'm not sure that is related; perhaps it indicates a font issue? The Arch versions work just fine. I haven't installed Guix System on this machine yet, so I'm not sure whether they would work that way.

I have seen this kind of graphical artefact before with KDE, when I've suspended my machine and it's come back all broken. It may be related to the fact that I have an AMD Radeon Fury X graphics card. (I haven't explicitly installed any proprietary drivers; I think AMDGPU is free.)

Anyone have any clues on how to debug such a thing?

https://brendan.scot/p/borked-godot.png
Hi,
Brendan Tildesley <mail@brendan.scot> writes:
> Anyone have any clues on how to debug such a thing?
>
> https://brendan.scot/p/borked-godot.png
check dmesg for something like:
--8<---------------cut here---------------start------------->8---
[ 337.066640] amdgpu 0000:01:00.0: GPU fault detected: 146 0x0000480c for process Xorg pid 845 thread Xorg:cs0 pid 846
[ 337.068114] amdgpu 0000:01:00.0: VM_CONTEXT1_PROTECTION_FAULT_ADDR 0x00000000
[ 337.069354] amdgpu 0000:01:00.0: VM_CONTEXT1_PROTECTION_FAULT_STATUS 0x0404800C
[ 337.070674] amdgpu 0000:01:00.0: VM fault (0x0c, vmid 2, pasid 32768) at page 0, read from 'TC0' (0x54433000) (72)
--8<---------------cut here---------------end--------------->8---
if you have something similar, try doing:
--8<---------------cut here---------------start------------->8---
rm -rf $HOME/.cache/mesa_shader_cache/
--8<---------------cut here---------------end--------------->8---
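Putting the two steps above together, a minimal sketch of the whole check-then-clear procedure (the fault pattern is taken from the dmesg excerpt above and may vary by kernel version; reading the kernel log may require root on some systems):

```shell
#!/bin/sh
# Remove Mesa's on-disk shader cache only when an amdgpu VM protection
# fault appears in the kernel log, as in the dmesg excerpt above.
if dmesg | grep -q 'VM_CONTEXT1_PROTECTION_FAULT'; then
    rm -rf "$HOME/.cache/mesa_shader_cache/"
    echo "stale shader cache removed"
else
    echo "no GPU fault found in dmesg"
fi
```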
--
WBR, Boris Dekshteyn
Wow, you are a genius; that fixed it. How did you know? Sorry, I got no notification of your email, maybe because you emailed bug-guix instead of 35575@debbugs.gnu.org.

So that fixed my issue, but I wonder how it can be fixed in general so others don't have this issue.
Brendan Tildesley <mail@brendan.scot> writes:
> Wow, you are a genius; that fixed it. How did you know? Sorry, I got no
> notification of your email, maybe because you emailed bug-guix instead
> of 35575@debbugs.gnu.org.
>
> So that fixed my issue, but I wonder how it can be fixed in general so
> others don't have this issue.
I suspect that the problem is caused by the transition from llvm6 to
llvm7 in the mesa package.

So I think the right fix would be for Mesa to check the cache version
and invalidate the cache in case of a mismatch.
On the other hand, Guix itself, like any other distribution, should not
change the contents of users' home directories. Maybe the proper solution
would be to mention it in ~guix pull --news~, as is done in Debian
or Gentoo.
--
WBR, Boris Dekshteyn
To follow up on this old bug, I believe the issue may come from here:
https://gitlab.freedesktop.org/mesa/mesa/-/blob/master/src/compiler/glsl/shader_cache.cpp#L144

Mesa calculates a sha1 based on some things they reason affect the
output, but it is likely not truly a function of every parameter that can
make a difference to the shader output. When we updated from llvm6 to
llvm7, I'm guessing it changed the shaders somehow, and the LLVM version
is not included in the hash. Since I have zero understanding of Mesa, I'm
not capable of determining the best solution. One thought is that if we
included the mesa /gnu/store path in the calculation, this would make the
hashes truly unique for a given mesa version, but cached shaders that
/would/ work would also be routinely discarded after an update (I
assume?). Would this be sensible, or would it completely break something
else? Should we just add the LLVM version, or just start a Mesa bug
report asking for input?

The code:

--8<---------------cut here---------------start------------->8---
   ralloc_asprintf_append(&buf, "tf: %d ", prog->TransformFeedback.BufferMode);
   for (unsigned int i = 0; i < prog->TransformFeedback.NumVarying; i++) {
      ralloc_asprintf_append(&buf, "%s ",
                             prog->TransformFeedback.VaryingNames[i]);
   }

   /* SSO has an effect on the linked program so include this when generating
    * the sha also.
    */
   ralloc_asprintf_append(&buf, "sso: %s\n",
                          prog->SeparateShader ? "T" : "F");

   /* A shader might end up producing different output depending on the glsl
    * version supported by the compiler. For example a different path might be
    * taken by the preprocessor, so add the version to the hash input.
    */
   ralloc_asprintf_append(&buf, "api: %d glsl: %d fglsl: %d\n",
                          ctx->API, ctx->Const.GLSLVersion,
                          ctx->Const.ForceGLSLVersion);

   /* We run the preprocessor on shaders after hashing them, so we need to
    * add any extension override vars to the hash. If we don't do this the
    * preprocessor could result in different output and we could load the
    * wrong shader.
    */
   char *ext_override = getenv("MESA_EXTENSION_OVERRIDE");
   if (ext_override) {
      ralloc_asprintf_append(&buf, "ext:%s", ext_override);
   }

   /* DRI config options may also change the output from the compiler so
    * include them as an input to sha1 creation.
    */
   char sha1buf[41];
   _mesa_sha1_format(sha1buf, ctx->Const.dri_config_options_sha1);
   ralloc_strcat(&buf, sha1buf);

   for (unsigned i = 0; i < prog->NumShaders; i++) {
      struct gl_shader *sh = prog->Shaders[i];
      _mesa_sha1_format(sha1buf, sh->sha1);
      ralloc_asprintf_append(&buf, "%s: %s\n",
                             _mesa_shader_stage_to_abbrev(sh->Stage), sha1buf);
   }
--8<---------------cut here---------------end--------------->8---
Brendan Tildesley <mail@brendan.scot> writes:

> To follow up on this old bug, I believe the issue may come from here:
> https://gitlab.freedesktop.org/mesa/mesa/-/blob/master/src/compiler/glsl/shader_cache.cpp#L144
>
> Mesa calculates a sha1 based on some things they reason affect the
> output, but it is likely not truly a function of every parameter that
> can make a difference to the shader output. When we updated from llvm6
> to llvm7, I'm guessing it changed the shaders somehow, and the LLVM
> version is not included in the hash. Since I have zero understanding of
> Mesa, I'm not capable of determining the best solution. One thought is
> that if we included the mesa /gnu/store path in the calculation, this
> would make the hashes truly unique for a given mesa version, but cached
> shaders that /would/ work would also be routinely discarded after an
> update (I assume?). Would this be sensible, or would it completely break
> something else? Should we just add the LLVM version, or just start a
> Mesa bug report asking for input?

Is this still relevant? I haven't heard reports about this in a long
time, nor experienced it (anymore) on my super-experimental systems that
switch LLVM and Mesa versions all the time. So I think the issue might
have been fixed upstream?
On 1/4/20 12:53 am, Marius Bakke wrote:
> Is this still relevant? I haven't heard reports about this in a long
> time, nor experienced it (anymore) on my super-experimental systems that
> switch LLVM and Mesa versions all the time. So I think the issue might
> have been fixed upstream?
Closing because the issue seems to have only occurred once, a long time
ago, and may have been fixed upstream.