On Fri, 30 Apr 2021 01:43:37 +0200 Leo Le Bouter wrote:
> I think that the technicality of software development must be
> redefined so that the hierarchy between the experienced and the
> beginner disappears [...]

I've been thinking a lot about these topics, and there are radical
solutions that can work today, but the scope is quite limited.

Sadly, technology in general is way too complex to enable any
individual to practically exercise the freedom to modify how it works
at every level: for instance, computer hardware is way too complex,
and not everyone is able to modify how the RAM initialization is done
in Libreboot. And that's just for the software side that is visible to
users. There is also a big interest in modifying how the hardware
works, for instance by making custom CPUs on FPGAs, which again
requires domain-specific knowledge.

Despite that, several indigenous communities run their own GSM
networks[1], and while doing that at least some of them seem to have
managed to do what you are talking about (remove power relationships
between the people with specific knowledge and the people without that
specific knowledge):
- The people running the network have to obey what the village
  assembly decided, so everybody decides together with the people
  running the networks.
- They also managed to have the protocol modified for them at a deeper
  level[2], to make their GSM networks behave more like peer-to-peer
  networks between villages instead of the way GSM networks are
  typically deployed.

Note that in this example many of the communities that run their own
GSM networks didn't choose (yet?) to have Internet access, for
instance, so the scope is quite limited. Things get more complicated
when trying to bring this amount of control down to every single
individual for every aspect of every digital technology. And since
many people have personal computers, this also looks important to
reflect on.
The approach taken by Lisp machines was interesting: it enabled users
to practically modify all the source code (even though most or all of
it wasn't free software, if I understood right). They used CPUs made
specifically to run Lisp code to do that, but as I understand it the
downside was that, even if most of the code was in Lisp, there was
probably still some code in assembly or other languages, as the Lisp
interpreter itself was probably not written in Lisp.

Another approach that I find interesting is the one taken by operating
systems like OpenBSD (which is not FSDG compliant), where the design
is much simpler. The downside is that it lacks many functionalities
that some users might miss. Hyperbola (which is FSDG compliant) is
working on a new operating system that will reuse the OpenBSD kernel,
so it might be interesting. Here the bar is probably higher, as it
requires being able to write C to modify the kernel, but at least some
parts of it look a lot simpler than Linux. For instance, if we compare
OpenBSD's pledge system call with Linux's seccomp, pledge is
implemented in a single, relatively small piece of kernel code,
whereas seccomp also requires a compiler (for its BPF filter programs)
and so on. And more importantly, software is being written to take
advantage of seccomp, so I guess that at least in some cases some
people depend on seccomp because of that.

One approach, which seems to also have been taken by Guix, is to work
on making it simpler to modify how things work. Since Lisp is easy to
parse, it might even be possible to write graphical user interfaces
(in Lisp for instance) that enable users to modify how the system is
configured, how it works, etc. But the downside is that it still
depends on complex code in the software it packages (like linux-libre,
GCC, etc.).

As for learning, there are also interesting approaches for teaching
the concepts used in digital technology (like the concepts used by
git, etc.)[3], which could potentially enable more people to modify
the way technology works, but that also requires time.
And for hardware, we start to have ISAs like RISC-V or PPC64LE that
can at least be implemented with designs under free licenses. So it
could potentially open the door to simpler hardware, but that hardware
also needs to be fast and not too expensive or complex to produce, so
it could also not work out because of that.

So I'm really wondering where we could go with all that: could it all
become, step by step, incrementally more accessible by combining
several of these things together (like better teaching plus simpler
software and hardware)? Or do we really need to redo it from scratch
to make technology accessible? Or change the way we live and interact
with technology, to make it fit people instead of requiring people to
fit technology?

Unfortunately I don't have a deep enough vision of this topic to know
in advance which step to take, so I've mostly resorted to incremental
improvements, hoping to advance things in the right direction up to
the point where it either works well enough or shows us its limits and
pushes us to find another path.

References:
-----------
[1] https://www.tic-ac.org/
[2] https://osmocom.org/news/120
[3] https://techlearningcollective.com/

Denis.