Tobias Bernard: Community Power Part 1: Misconceptions

https://blogs.gnome.org/tbernard/2021/06/11/community-power-1/

People new to the GNOME community often have a hard time understanding how we set goals, make decisions, assume responsibility, prioritize tasks, and so on. In short: They wonder where the power is.

When you don’t know how something works it’s natural to come up with a plausible story based on the available information. For example, some people intuitively assume that since our product is similar in function and appearance to those made by the Apples and Microsofts of the world, we must also be organized in a similar way.

This leads them to think that GNOME is developed by a centralized company with a hierarchical structure, where developers are assigned tasks by their manager, based on a roadmap set by higher management, with a marketing department coordinating public-facing messaging, and so on. Basically, they think we’re a tech company.

This in turn leads to things like

  • People making customer service style complaints, like they would to a company whose product they bought
  • General confusion around how resources are allocated (“Why are they working on X when they don’t even have Y?”)
  • Blaming/praising the GNOME Foundation for specific things to do with the product

If you’ve been around the community for a while, you know that this view of the project bears no resemblance to how things actually work. However, given how complex the reality is, it’s not surprising that some people have these misconceptions.

To understand how things are really done we need to examine the various groups involved in making GNOME, and how they interact.

GNOME Foundation

The GNOME Foundation is a US-based non-profit that owns the GNOME trademark, hosts our Gitlab and other infrastructure, organizes conferences, and employs one full-time GTK developer. This means that beyond setting priorities for said GTK developer, it has little to no influence on development.

Individual Developers

The people actually making the product are either volunteers (and thus answer to nobody), or work for one of about a dozen companies employing people to work on various parts of GNOME. All of these companies have different interests and areas of focus depending on how they use GNOME, and tend to contribute accordingly.

In practice the line between “employed” contributor and volunteer can be quite blurry, as many contributors are paid to work on some specific things but also additionally contribute to other parts of GNOME in their free time.

Maintainers

Each module (e.g. app, library, or system component) has one or more maintainers. They are responsible for reviewing proposed changes, making releases, and generally managing the project.

In theory the individual maintainers of each module have more or less absolute power over those modules. They can merge any changes to the code, add and remove features, change the user interface, etc.

However, in practice maintainers rarely make non-trivial changes without consulting/communicating with other stakeholders across the project, for example the design team on things related to the user experience, the maintainers of other modules affected by a change, or the release team if dependencies change.

Release Team

The release team is responsible for coordinating the release of the entire suite of GNOME software as a single coherent product.

In addition to getting out two major releases every year (plus various point releases) they also curate what is and isn’t part of the core set of GNOME software, take care of the GNOME Flatpak runtimes, manage dependencies, fix build failures, and other related tasks.

The Release Team has a lot of power in the sense that they literally decide what is and isn’t part of GNOME. They can add and remove apps from the core set, and set system-wide default settings. However, they do not actually develop or maintain most of the modules, so the degree to which they can concretely impact the product is limited.

Design Team

Perhaps somewhat unusually for a free software project, GNOME has a very active and well-respected design team (if I do say so myself :P). Anything related to the user experience is their purview, and in theory they have final say.

This includes most major product initiatives, such as introducing new apps or features, redesigning existing ones, the visual design of apps and system, design patterns and guidelines, and more.

However: There is nothing forcing developers to follow design team guidance. The design team’s power lies primarily in people trusting them to make the right decisions, and working with them to implement their designs.

How do things get done then?

No one person or group ultimately has much power over the direction of the project by themselves. Any major initiative requires people from multiple groups to work together.

This collaboration requires, above all, mutual trust on a number of levels:

  • Trust in the abilities of people from other teams, especially when it’s not your area of expertise
  • Trust that other people also embody the project’s values
  • Trust that people care about GNOME first and foremost (as opposed to, say, their employer’s interests)
  • Trust that people are in it for the long run (rather than just trying to quickly land something and then disappear)

This atmosphere of trust across the project allows for surprisingly smooth and efficient collaboration across dozens of modules and hundreds of contributors, despite there being little direct communication between most participants.


This concludes the first part of the series. In part 2 we’ll look at the various stages of how a feature is developed from conception to shipping.

Until then, happy hacking!

Jussi Pakkanen: Typesetting a full book part II, Scribus

https://nibblestew.blogspot.com/2021/06/typesetting-full-book-part-ii-scribus.html

Some time ago I wrote a blog post on what it's like to typeset an entire book using nothing but LibreOffice. One of the comments mentioned that LO does not do a great job of aligning text. This is again probably because it needs to copy MS Word's behaviour, which means greedy line splitting. Supposedly Scribus does this a lot better, but the only way to be really sure was to typeset the whole text with Scribus. So that's what I did (using the latest 1.5 release from Flathub).

Workflow for Scribus

Every program has the things it is good for and things it's not that good for. Scribus' strengths lie in producing output with fairly short pieces of text with precise layout requirements, especially if there are many images. A traditional "single flow of text" is not that, so there are some things you need to plan for.

First of all, a Scribus document should not be created until the text is almost completely finished. Doing big changes (like adding text to existing chapters, changing the physical page size etc.) can become quite tedious. Scribus also does not handle long pieces of text particularly smoothly. I tried loading all 350-ish pages into a single linked frame sequence. It sort of worked, but things got quite laggy quite quickly. Eventually I converged on a layout where every chapter was its own set of linked frames. The text was imported directly from LO files that held one chapter each. The original had just one big LO file, so I had to split it up by hand for the import. If the original had been done with master documents, this would have been simpler.

The table of contents had to be done by hand again. Scribus has support for tables, but they could not be used, because tables drew outlines around each cell and I could not find a way to switch that off. Websearching found several pages with info, but none of them worked. It also turns out that you can not add page references to table cells, only to text frames. No, I don't know why either. The option was greyed out in the menus and trying to sneakily copypaste a page reference from a text frame to a table caused a segfault.

Issues discovered

While LO was surprisingly bug free, Scribus was less so and I encountered many bugs and baffling missing features, such as:
  • Scribus would sometimes create empty text frames far outside the document (i.e. to page 600 on a 300 page document)
  • Text frames got a strange empty character at their end which would cause text overflow warnings, deleting it did not help as the empty characters kept reappearing
  • Adding a page reference to an anchor point would always link to the page where the linked frame sequence started, not where the anchor was placed
  • Text is not hyphenated automatically, only by selecting a text frame and then selecting extras > hyphenate text in the main menu; one would imagine hyphenation being a paragraph style property instead
  • I managed to create an anchor point that does not exist anywhere except the mark list, but deleting it leads to an immediate segfault
None of these obstacles were insurmountable, but they made for a less than smooth experience. Eventually the work was done, and here is how the two compare (LO on the left, Scribus on the right).

As you can probably tell, Scribus creates more condensed output. The settings were the same for both programs (automatically translated from LO styles by Scribus, not verified by hand) and LO's output file was 339 pages compared to 326 for Scribus.

Which one should you use then?

Like most things in life, that depends. If your document has a notable amount of mathematics, then you most likely want to go with LaTeX. If the document is something like a magazine or you require the highest typographical quality possible, then Scribus is a good choice. For "plain old books" the question becomes more complicated.

If you need a fully color managed workflow, then Scribus is the only viable option. If the default output of LO is good enough for you, the document has few figures and you are fine with needing to have a great battle at the end to line the images up, LO provides a fairly smooth experience.  You have to use styles properly, though, or the whole thing will end up in tears. LO is especially suitable for documents with lots of levels, headings and cross references between the two. LaTeX is also very good with those, but its unfortunate downside is that defining new styles is really hard. So is changing fonts, so you'd better be happy with Computer Modern. If the document has lots of images, then LaTeX's automatic figure floats make a ton of manual work completely disappear.

Original data

The original source documents as well as the PDF output for both programs can be found in this Github repo.

Lennart Poettering: The Wondrous World of Discoverable GPT Disk Images

http://0pointer.net/blog/the-wondrous-world-of-discoverable-gpt-disk-images.html

TL;DR: Tag your GPT partitions with the right, descriptive partition types, and the world will become a better place.

A number of years ago we started the Discoverable Partitions Specification which defines GPT partition type UUIDs and partition flags for the various partitions Linux systems typically deal with. Before the specification all Linux partitions usually just used the same type, basically saying "Hey, I am a Linux partition" and not much else. With this specification the GPT partition type, flags and label system becomes a lot more expressive, as it can tell you:

  1. What kind of data a partition contains (i.e. is this swap data, a file system or Verity data?)
  2. What the purpose/mount point of a partition is (i.e. is this a /home/ partition or a root file system?)
  3. What CPU architecture a partition is intended for (i.e. is this a root partition for x86-64 or for aarch64?)
  4. Shall this partition be mounted automatically? (i.e. without being specifically configured via /etc/fstab)
  5. And if so, shall it be mounted read-only?
  6. And if so, shall the file system be grown to its enclosing partition size, if smaller?
  7. Which partition contains the newer version of the same data (i.e. multiple root file systems, with different versions)

By embedding all of this information inside the GPT partition table, disk images become self-descriptive: without requiring any other source of information (such as /etc/fstab), if you look at a compliant GPT disk image it is clear how the image is put together and how it should be used and mounted. This self-descriptiveness in particular breaks one philosophical weirdness of traditional Linux installations: the original source of information about which file system is the root file system is typically embedded in the root file system itself, in /etc/fstab. Thus, in a way, in order to know what the root file system is you need to know what the root file system is. 🤯 🤯 🤯

(Of course, the way this recursion is traditionally broken up is by then copying the root file system information from /etc/fstab into the boot loader configuration, resulting in a situation where the primary source of information for this — i.e. /etc/fstab — is actually mostly irrelevant, and the secondary source — i.e. the copy in the boot loader — becomes the configuration that actually matters.)

Today, the GPT partition type UUIDs defined by the specification have been adopted quite widely, by distributions and their installers, as well as a variety of partitioning tools and other tools.

In this article I want to highlight how the various tools the systemd project provides make use of the concepts the specification introduces.

But before we start with that, let's underline why tagging partitions with these descriptive partition type UUIDs (and the associated partition flags) is a good thing, besides the philosophical points made above.

  1. Simplicity: in particular OS installers become simpler — adjusting /etc/fstab as part of the installation is not necessary anymore, as the partitioning step already puts all information into place for assembling the system properly at boot. i.e. installing no longer means you always have to get both fdisk and /etc/fstab into place; the former suffices entirely.

  2. Robustness: since partition tables mostly remain static after installation the chance of corruption is much lower than if the data is stored in file systems (e.g. in /etc/fstab). Moreover by associating the metadata directly with the objects it describes the chance of things getting out of sync is reduced. (i.e. if you lose /etc/fstab, or forget to rerun your initrd builder you still know what a partition is supposed to be just by looking at it.)

  3. Programmability: if partitions are self-descriptive it's much easier to automatically process them with various tools. In fact, this blog story is mostly about that: various systemd tools can naturally process disk images prepared like this.

  4. Alternative entry points: on traditional disk images, the boot loader needs to be told which kernel command line option root= to use, which then provides access to the root file system, where /etc/fstab is then found which describes the rest of the file systems. Where precisely root= is configured for the boot loader highly depends on the boot loader and distribution used, and is typically encoded in a Turing complete programming language (Grub…). This makes it very hard to automatically determine the right root file system to use, to implement alternative entry points to the system. By alternative entry points I mean other ways to boot the disk image, specifically for running it as a systemd-nspawn container — but this extends to other mechanisms where the boot loader may be bypassed to boot up the system, for example qemu when configured without a boot loader.

  5. User friendliness: it's simply a lot nicer for the user looking at a partition table if the partition table explains what is what, instead of just saying "Hey, this is a Linux partition!" and nothing else.

Uses for the concept

Now that we have cleared up the why, let's have a closer look at how this is currently used and exposed in systemd's various components.

Use #1: Running a disk image in a container

If a disk image follows the Discoverable Partition Specification then systemd-nspawn has all it needs to just boot it up. Specifically, if you have a GPT disk image in a file foobar.raw and you want to boot it up in a container, just run systemd-nspawn -i foobar.raw -b, and that's it (you can specify a block device like /dev/sdb too if you like). It becomes easy and natural to prepare disk images that can be booted either on a physical machine, inside a virtual machine manager or inside such a container manager: the necessary meta-information is included in the image, easily accessible before actually looking into its file systems.

Use #2: Booting an OS image on bare-metal without /etc/fstab or kernel command line root=

If a disk image follows the specification, in many cases you can remove /etc/fstab (or never even install it) — as the basic information needed is already included in the partition table. The systemd-gpt-auto-generator logic implements automatic discovery of the root file system as well as all auxiliary file systems. (Note that the former requires an initrd that uses systemd; some more conservative distributions do not support that yet, unfortunately.) Effectively this means you can boot up a kernel/initrd with an entirely empty kernel command line, and the initrd will automatically find the root file system (by looking for a suitably marked partition on the same drive the EFI System Partition was found on).

(Note, if /etc/fstab or root= exist and contain relevant information, they always take precedence over the automatic logic. This is particularly useful for tweaking things by specifying additional mount options and such.)

Use #3: Mounting a complex disk image for introspection or manipulation

The systemd-dissect tool may be used to introspect and manipulate OS disk images that implement the specification. If you pass the path to a disk image (or block device) it will extract various bits of useful information from the image (e.g. what OS is this? what partitions to mount?) and display it.

With the --mount switch a disk image (or block device) can be mounted to some location. This is useful for looking at what is inside it, or for changing its contents. This will dissect the image and then automatically mount all contained file systems matching their GPT partition description to the right places, so that you could subsequently chroot into it. (But why chroot if you can just use systemd-nspawn? 😎)
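
To give a feel for it, a typical sequence might look like this (foobar.raw and the mount point are placeholder names):

$ sudo systemd-dissect foobar.raw                  # show OS info and partition layout
$ sudo mkdir -p /mnt/img
$ sudo systemd-dissect --mount foobar.raw /mnt/img
$ ls /mnt/img                                      # the full mount hierarchy is assembled here
$ sudo umount -R /mnt/img                          # recursively unmount everything when done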

Use #4: Copying files in and out of a disk image

The systemd-dissect tool also has two switches --copy-from and --copy-to which allow copying files out of or into a compliant disk image, taking all included file systems and the resulting mount hierarchy into account.
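
For illustration, with a compliant image called foobar.raw (a placeholder name) this might look like:

$ sudo systemd-dissect --copy-from foobar.raw /etc/os-release ./os-release   # copy a file out of the image
$ sudo systemd-dissect --copy-to foobar.raw ./my.conf /etc/my.conf           # copy a local file into the image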

Use #5: Running services directly off a disk image

The RootImage= setting in service unit files accepts paths to compliant disk images (or block device nodes), and can mount them automatically, running service binaries directly off them (in chroot() style). In fact, this is the base for the Portable Service concept of systemd.
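
As a rough sketch (the service name, image path and binary are all made up for the example), such a unit might look like this:

$ sudo tee /etc/systemd/system/example.service >/dev/null <<'EOF'
[Unit]
Description=Example service running directly off a disk image

[Service]
# RootImage= points at a compliant GPT disk image; ExecStart= refers to a
# binary *inside* that image, not on the host.
RootImage=/var/lib/machines/example.raw
ExecStart=/usr/bin/example-daemon
EOF
$ sudo systemctl daemon-reload
$ sudo systemctl start example.service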

Use #6: Provisioning disk images

systemd provides various tools that can run operations provisioning disk images in an "offline" mode. Specifically:

systemd-tmpfiles

With the --image= switch systemd-tmpfiles can directly operate on a disk image, and for example create all directories and other inodes defined in its declarative configuration files included in the image. This can be useful for example to set up the /var/ or /etc/ tree according to such configuration before first boot.

systemd-sysusers

Similarly, the --image= switch of systemd-sysusers tells the tool to read the declarative system user specifications included in the image and synthesize system users from them, writing them to the /etc/passwd (and related) files in the image. This is useful for provisioning these users before the first boot, for example to ensure UID/GID numbers are pre-allocated, so that such allocations are not delayed until first boot.

systemd-machine-id-setup

The --image= switch of systemd-machine-id-setup may be used to provision a fresh machine ID into /etc/machine-id of a disk image, before first boot.

systemd-firstboot

The --image= switch of systemd-firstboot may be used to set various basic system settings (such as root password, locale information, hostname, …) on the specified disk image, before booting it up.
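
Taken together, offline provisioning of a compliant image might look something like this (foobar.raw, the locale and the hostname are placeholders):

$ sudo systemd-tmpfiles --image=foobar.raw --create        # create directories/inodes from tmpfiles.d
$ sudo systemd-sysusers --image=foobar.raw                 # pre-allocate system users and groups
$ sudo systemd-machine-id-setup --image=foobar.raw         # write a fresh /etc/machine-id
$ sudo systemd-firstboot --image=foobar.raw --locale=en_US.UTF-8 --hostname=appliance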

Use #7: Extracting log information

The journalctl switch --image= may be used to show the journal log data included in a disk image (or, as usual, the specified block device). This is very useful for analyzing failed systems offline, as it gives direct access to the logs without any further, manual analysis.
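
For example (again with a placeholder image name):

$ sudo journalctl --image=foobar.raw --list-boots          # which boots are recorded in the image?
$ sudo journalctl --image=foobar.raw -p err                # show everything of priority "err" and higher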

Use #8: Automatic repartitioning/growing of file systems

The systemd-repart tool may be used to repartition a disk or image in a declarative and additive way. One primary use-case for it is to run during boot on physical or VM systems to grow the root file system to the disk size, or to add, format, encrypt, and populate additional partitions at boot.

With its --image= switch the tool may operate on compliant disk images in an offline mode of operation: it will read the partition definitions that shall be grown or created off the image itself, and then apply them to the image. This is particularly useful in combination with the --size= switch, which allows growing disk images to the specified size.

Specifically, consider the following work-flow: you download a minimized disk image foobar.raw that contains only the minimized root file system (and maybe an ESP, if you want to boot it on bare-metal, too). You then run systemd-repart --image=foobar.raw --size=15G to enlarge the image to 15G, based on the declarative rules defined in the repart.d/ drop-in files included in the image (this means it can grow the root partition, and/or add in more partitions, for example for /srv, maybe encrypted with a locally generated key). Then you proceed to boot it up with systemd-nspawn --image=foobar.raw -b, making use of the full 15G.
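
The declarative rules mentioned above are ordinary repart.d/ drop-ins shipped inside the image; a hypothetical pair of such drop-ins (file names and settings chosen purely for illustration) might look like this:

$ cat /usr/lib/repart.d/50-root.conf
# Grow the existing root partition to fill the available space
[Partition]
Type=root

$ cat /usr/lib/repart.d/60-srv.conf
# Add a /srv partition if it does not exist yet, and format it
[Partition]
Type=srv
Format=ext4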

Versioning + Multi-Arch

Disk images implementing this specification can carry OS executables in one of three ways:

  1. Only a root file system

  2. Only a /usr/ file system (in which case a tmpfs is automatically used as the root file system).

  3. Both a root and a /usr/ file system (in which case the two are combined, with the /usr/ file system mounted into the root file system, possibly in read-only fashion)

They may also contain OS executables for different architectures, permitting "multi-arch" disk images that can safely boot up on multiple CPU architectures. As the root and /usr/ partition type UUIDs are specific to architectures, this is easily done by including one such partition for x86-64 and another for aarch64. If the image is then used on an x86-64 system, the former partition is automatically used; on aarch64, the latter.

Moreover, these OS executables may be contained in different versions, to implement a simple versioning scheme: when tools such as systemd-nspawn or systemd-gpt-auto-generator dissect a disk image, and they find two or more root or /usr/ partitions of the same type UUID, they will automatically pick the one whose GPT partition label (a 36 character free-form string every GPT partition may have) is the newest according to strverscmp() (OK, truth be told, we don't use strverscmp() as-is, but a modified version with some more modern syntax and semantics, but conceptually identical).

This logic makes it possible to implement a very simple and natural A/B update scheme: an updater can drop multiple versions of the OS into separate root or /usr/ partitions, always updating the partition label to the version included therein once the download is complete. All of the tools described here will then honour this, and always automatically pick the newest version of the OS.
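
As a sketch of the label bump an updater might do after writing a new OS version into the second partition (device node, partition number and version string are all invented for the example):

$ lsblk -o NAME,PARTLABEL /dev/sda                         # inspect the current partition labels
$ sudo sfdisk --part-label /dev/sda 5 root-v2021.06.11     # mark partition 5 as carrying the newer root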

Verity

When building modern OS appliances, security is highly relevant. Specifically, offline security matters: an attacker with physical access should have a difficult time modifying the OS in a way that isn't noticed. Think of a car or a cell network base station: these appliances are usually parked/deployed in environments attackers can get physical access to. It's essential that in this case the OS itself is sufficiently protected, so that the attacker cannot just mount the OS file system image, make modifications (inserting a backdoor, spying software or similar) and have the system otherwise continue to run without this being immediately detected.

A great way to implement offline security is via Linux' dm-verity subsystem: it makes it possible to securely bind immutable disk IO to a single, short trusted hash value. If an attacker manages to modify the disk image offline, the modified disk image won't match the trusted hash anymore, and will not be trusted anymore (depending on policy this then just results in IO errors being generated, or automatic reboot/power-off).

The Discoverable Partitions Specification declares how to include Verity validation data in disk images, and how to relate it to the file systems it protects, thus making it very easy to deploy and work with such protected images. For example systemd-nspawn supports a --root-hash= switch, which accepts the Verity root hash and will then automatically assemble dm-verity with it, matching up the payload and verity partitions. (Alternatively, just place a .roothash file next to the image file.)
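
For instance, booting a Verity-protected image in a container could look like this (foobar.raw is a placeholder, and $ROOT_HASH stands for the root hash produced when the Verity data was generated):

$ sudo systemd-nspawn -i foobar.raw --root-hash="$ROOT_HASH" -b
# Or rely on the .roothash file convention instead of passing the switch:
$ echo "$ROOT_HASH" | sudo tee foobar.roothash >/dev/null
$ sudo systemd-nspawn -i foobar.raw -b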

Future

The above already is a powerful tool set for working with disk images. However, there are some more areas I'd like to extend this logic to:

bootctl

Similar to the other tools mentioned above, bootctl (which is a tool to interface with the boot loader, and install/update systemd's own EFI boot loader sd-boot) should learn a --image= switch, to make installation of the boot loader on disk images easy and natural. It would automatically find the ESP and other relevant partitions in the image, and copy the boot loader binaries into them (or update them).

coredumpctl

Similar to the existing journalctl --image= logic the coredumpctl tool should also gain an --image= switch for extracting coredumps from compliant disk images. The combination of journalctl --image= and coredumpctl --image= would make it exceptionally easy to work with OS disk images of appliances and extracting logging and debugging information from them after failures.

And that's all for now. Please refer to the specification and the man pages for further details. If your distribution's installer does not yet tag the GPT partitions it creates with the right GPT type UUIDs, consider asking them to do so.

Thank you for your time.

Ross Burton: Faster image transfer across the network with zsync

http://www.burtonini.com/blog/2021/06/10/yocto-zsync/

Those of us involved in building operating system images using tools such as OpenEmbedded/Yocto Project or Buildroot don't always have a powerful build machine under our desk or in the same building on gigabit Ethernet. Our build machine may be in the cloud, or in another office over a VPN running over a slow residential ADSL connection. In these scenarios, repeatedly downloading gigabyte-sized images for local testing can get very tedious.

There are some interesting solutions if you use Yocto: you could expose the shared state over the network and recreate the image, which if the configurations are the same will result in no local compilation. However this isn't feasible if your local machine isn't running Linux or you just want to download the image without any other complications. This is where zsync is useful.

zsync is a tool similar to rsync but optimised for transferring single large files across the network. The server generates metadata containing the chunk information, and then shares both the image and the metadata over HTTP. The client can then use any existing local file as a seed file to speed up downloading the remote file.

On the server, run zsyncmake on the file to be transferred to generate the .zsync metadata. You can also pass -z if the file isn't already compressed to tell it to compress the file first.

$ ls -lh core-image-minimal-*.wic*
-rw-r--r-- 1 ross ross 421M Jun 10 13:44 core-image-minimal-fvp-base-20210610124230.rootfs.wic

$ zsyncmake -z core-image-minimal-*.wic

$ ls -lh core-image-minimal-*.wic*
-rw-r--r-- 1 ross ross 4.7K Jun 10 13:44 core-image-minimal-fvp-base-20210610124230.rootfs.manifest
-rw-r--r-- 1 ross ross 421M Jun 10 13:44 core-image-minimal-fvp-base-20210610124230.rootfs.wic
-rw-r--r-- 1 ross ross  53M Jun 10 13:45 core-image-minimal-fvp-base-20210610124230.rootfs.wic.gz

Here we have ~420MB of disk image, which compressed down to a slight 53MB, and just ~5KB of metadata. This image compressed very well as the raw image is largely empty space, but for the purposes of this example we can ignore that.

The zsync client downloads over HTTP and has some non-trivial requirements so you can't just use any HTTP server, specifically my go-to dumb server (Python's integrated http.server) isn't sufficient. If you want a hassle-free server then the Node.js package http-server works nicely, or any other proper server will work. However you choose to do it, share both the .zsync and .wic.gz files.

$ npm install -g http-server
$ http-server -p 8080 /path/to/images

Now you can use the zsync client to download the images. Sadly zsync isn't actually magical, so the first download will still need to download the full file:

$ zsync http://buildmachine:8080/core-image-minimal-fvp-base-20210610124230.rootfs.wic.zsync
No relevent local data found - I will be downloading the whole file.
downloading from http://buildmachine:8080/core-image-minimal-fvp-base-20210610124230.rootfs.wic.gz:
#################### 100.0% 7359.7 kBps DONE

verifying download...checksum matches OK
used 0 local, fetched 55208393

However, subsequent downloads will be a lot faster as only the differences will be fetched. Say I decide that core-image-minimal is too, well, minimal, and build core-image-sato, which is a full X.org stack instead of just busybox. After building the image and metadata we now have a ~700MB image:

-rw-r--r-- 1 ross ross 729M Jun 10 14:17 core-image-sato-fvp-base-20210610125939.rootfs.wic
-rw-r--r-- 1 ross ross 118M Jun 10 14:18 core-image-sato-fvp-base-20210610125939.rootfs.wic.gz
-rw-r--r-- 1 ross ross 2.2M Jun 10 14:19 core-image-sato-fvp-base-20210610125939.rootfs.wic.zsync

Normally we'd have to download the full 730MB, but with zsync we can just fetch the differences. By telling the client to use the existing core-image-minimal as a seed file, we can fetch the new core-image-sato:

$ zsync -i core-image-minimal-fvp-base-20210610124230.rootfs.wic  http://buildmachine:8080/core-image-sato-fvp-base-20210610125939.rootfs.wic.zsync
reading seed file core-image-minimal-fvp-base-20210610124230.rootfs.wic
core-image-minimal-fvp-base-20210610124230.rootfs.wic. Target 70.5% complete.
downloading from http://buildmachine:8080/core-image-sato-fvp-base-20210610125939.rootfs.wic.gz:
#################### 100.0% 10071.8 kBps DONE     

verifying download...checksum matches OK
used 538800128 local, fetched 70972961

By using the seed file, zsync determined that it already has 70% of the file on disk, and downloaded just the remaining chunks.

For incremental builds the differences can be very small when using the Yocto Project, as thanks to the reproducible builds effort there are no spurious changes (such as embedded timestamps or non-deterministic compilation) on recompiles.

Now, obviously I don't recommend doing all of this by hand. For Yocto Project users, as of right now there is a patch queued for meta-openembedded adding a recipe for zsync-curl, and a patch queued for openembedded-core to add zsync and gzsync image conversion types (for IMAGE_FSTYPES, for example wic.gzsync) to generate the metadata automatically. Bring your own HTTP server and you can fetch without further effort.

Thibault Martin: On the Sustainability of the GNOME Foundation

https://blog.ergaster.org/post/20210606-on-gnome-foundation-sustainability/

This blog post was originally a question and answer on GNOME’s Discourse to discuss how candidates for the board would be able to help make the GNOME Foundation sustainable.

Following a blog post by GNOME Foundation president Robert McQueen about The Next Steps for the GNOME Foundation, GNOME designer and Foundation board member Allan Day opened a discussion for the board to issue recommendations to the GNOME Foundation members when voting for a candidate.

Abanoub Ghadban: The first steps in GSoC

https://abanoubgh.wordpress.com/2021/06/02/the-first-steps-in-gsoc/

I am starting a new blog series covering my GSoC’21 journey with the GNOME Foundation. It’s already been two weeks since I received the GSoC acceptance email. My project focuses on improving tracker support for custom ontologies. In this blog post I’m going to talk about how I applied for GSoC and introduce the project on which I’ll be working this summer.

First, let me introduce myself. I’m Abanoub Ghadban, a fourth-year student at the faculty of computer engineering in Egypt. I started my journey towards GSoC in December 2020, when one of my friends who participated in GSoC last year told me about the experience he gained while working on his project with GNOME. I got started with GNOME apps easily thanks to the GNOME newcomers guide. I started by looking at the basics of GLib and GObject, and I found the GLib/GTK book very useful. The concepts I learned from the book and the documentation became much clearer after looking at how they are used in GNOME apps. I started exploring the gnome-photos app, then I searched for a “newcomers” issue and solved it in this merge request. The maintainer of gnome-photos was very helpful in resolving the issues he found in my code. I also investigated some issues in nautilus, glib and tracker. I decided to apply for a project related to tracker. The mentors were very helpful in guiding me to choose the project and write the proposal.

Currently, we are in the community bonding period. During this time participants should start communicating with their mentors and plan ahead for when the coding period begins. The first thing I did after celebrating :D was getting in touch with my mentors. We talked about the resources that can help me get started with the project, how I can prepare my development environment, and how we will communicate with each other.

The goal of my project is improving tracker support for custom ontologies. That is done by:

  • Fixing crashes that happen when tracker tries to parse an invalid ontology.
  • Adding support to the ontology parser for out-of-order definitions in the ontology file.
  • TrackerNamespaceManager should support custom ontologies more easily (details).

So, here are the things I’ve done so far:

  • Cloned the tracker repository and built it using both GNOME Builder and Meson.
  • Installed the dev dependencies and configured VS Code to open the tracker project in it. Honestly, I found it more useful than GNOME Builder :).
  • Looked at the architecture of tracker and tracker-miners and how it changed from tracker2 to tracker3.
  • Read the tracker documentation about how to create new ontologies.
  • Debugged tracker using gdb; I used it to find out how the ontology files are parsed.
  • Read the tracker documentation about TrackerNamespaceManager and TrackerResource.

Guess this is a good start, but there is still much to do in the upcoming days. Hope everything works fine during this internship. GSoC, here we GO!

Jussi Pakkanen: An overhaul of Meson's WrapDB dependency management/package manager service

https://nibblestew.blogspot.com/2021/06/an-overhaul-of-mesons-wrapdb-dependency.html

For several years already Meson has had a web service called WrapDB for obtaining and building dependencies automatically. The basic idea is that it takes unaltered upstream tarballs, adds Meson build definitions (if needed) as a patch on top and builds the whole thing as a Meson subproject. While it has done its job and provided many packages, the UX for adding new versions has been a bit cumbersome.

Well no more! With a lot of work from people (mostly Xavier Claessens) all of WrapDB has been overhauled to be simpler. Instead of separate repos, all wraps are now stored in a single repo, making things easier.  Adding new packages or releases now looks like this:

  • Fork the repo
  • Add the necessary files
  • Submit a PR
  • Await results of automatic CI and (non-automatic :) reviewer comments
  • Fix issues until the PR is merged
The documentation for the new system is still being written, but submissions are already open. You do need the current trunk of Meson to use the v2 WrapDB. Version 1 will remain frozen for now so old projects will keep on building. All packages and releases from v1 WrapDB have been moved to v2, except some old ones that have been replaced by something better (e.g. libjpeg has been replaced by libjpeg-turbo) so starting to use the new version should be transparent for most people.

Submitting new dependencies

Anyone can submit any dependency project that they need (assuming they are open source, of course). All you need to do is to convert the project's build definition to Meson and then submit a pull request as described above. You don't need permission from upstream to submit the project. The build files are MIT licensed so projects that want to provide native build definitions should be able to integrate WrapDB's build definitions painlessly.

Submitting your own libraries

Have you written a library that already builds with Meson and would like to make it available to all Meson users with a single command like the following?

meson wrap install yourproject

The procedure is even simpler than above: you just need to file a pull request with the upstream info. It only takes a few minutes.
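
For context, what users end up with after running meson wrap install is a small .wrap file in their project's subprojects/ directory; a hypothetical one (project name, URL and checksum are placeholders) looks roughly like this:

$ cat subprojects/yourproject.wrap
[wrap-file]
directory = yourproject-1.0.0
source_url = https://example.com/yourproject-1.0.0.tar.gz
source_filename = yourproject-1.0.0.tar.gz
# source_hash would be the real sha256 checksum of the release tarball
source_hash = 0000000000000000000000000000000000000000000000000000000000000000

[provide]
dependency_names = yourproject

With the [provide] section in place, a plain dependency('yourproject') call in meson.build should fall back to the subproject automatically when the dependency is not found on the system.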

Kai A. Hiller: The Beginning

https://blog.kaialexhiller.de/?p=6

Hello, I’m Kai. I’m a computer science student at the KIT in Germany. This year I am participating in my second Google Summer of Code at the GNOME Foundation, working on Fractal. My mentor is Julian Sparber, who is working towards end-to-end encryption in Fractal and has already given me a warm welcome. I created this blog to keep everyone interested updated on my progress over the course of the summer.

Fractal Logo

For those who don’t already know, Fractal is a messaging client for the GNOME desktop powered by the Matrix protocol. In the last year the ecosystem on which Fractal is built changed dramatically (GTK 4, matrix-rust-sdk) and the current code base shows its weaknesses. With those considerations, Fractal’s developers came to the conclusion that a rewrite from scratch is the best way forward. This undertaking is called Fractal NEXT.

My goal for the Summer of Code 2021 is to bring Fractal NEXT to feature parity with the current Fractal code base. There has already been a lot of work on the architecture and groundwork for the new Fractal over the last months, but the current implementation is still bare-bones.

The main features remaining to be implemented are room management and account management, as well as support for more message types. I will start by implementing the elements required to work with rooms and their members. This will mainly manifest in a room settings panel that allows editing the room avatar, title and description, inviting and kicking members, and managing their power levels. Account management will resemble the account page of the current Fractal, with options to edit the user’s avatar, name, third-party identifiers and the device list, as well as the possibility to deactivate one’s Matrix account. Another small but important thing will be the addition of shortcuts for all actions, so that Fractal is more accessible and power users will feel at home.

I am very glad to be able to work on Fractal for this Summer of Code and hope it will be a lot of fun.

Also check out the blog of Alejandro, who is also contributing to Fractal this summer by implementing multi-account support.

Alejandro Domínguez: Another year in GSoC and Fractal(-next)

https://aledomu.github.io/gnome/another-year-in-gsoc-and-fractal/

This year I applied for Google Summer of Code again and chose Fractal to work on multi-account support. I got accepted (that’s why I’m writing this), so today I start the coding (and design) period to achieve that.

Any of you who have followed what happened during my internship in 2020 and afterwards might remember that I had the same goal back then. The problem was that the way the app was structured internally made it incredibly difficult to do so without all hell breaking loose. You can revisit (or read) the details in my final report from last year and what I learned in the process.

After that, I kept on integrating matrix-rust-sdk into Fractal and completely removing the old backend. That was completed in January this year. But after fixing all the shortcomings in the code that dealt with the server (and introducing new bugs, related both to Synapse not fully conforming to the official specification and to a few bugs in matrix-rust-sdk), I tried to go further and enable the encryption machinery in the backend, and the undertaking to do that and make the UI workable at all was much greater than we had previously expected. My #1 concern when I started on this project was maintainability, so I started thinking that probably the only sane way out was to rewrite Fractal, something I had suspected since August, when I was still in the middle of GSoC 2020. The fact that GTK 4.0 had just been released, that its binding crates had better support for using XML templates and subclassing, and that we already had proof that matrix-rust-sdk could work for our needs made it a very compelling alternative, “just” needing a rewrite of the UI from scratch.

This happened more or less when Julian Sparber started working on encryption support, so whatever the choice was, it had to allow him to focus on his task as soon as possible. It was deemed that sticking with the incremental refactors would be much slower for everyone, so Fractal-next got started with proper support from third-party libraries. Julian started working on it, getting a basic chat feature set going in a few weeks, which allowed him to get part of his goal done in parallel. He posted what he did and an overview of the architecture of Fractal-next on his blog. I looked around occasionally just to give some advice at the beginning, mostly concerned with code organization and modules.

One big milestone on the way to making Fractal-next just be Fractal is reaching feature parity. That’s something Kai will work on as part of his GSoC internship, as he explains in his blog. In the meantime, I will add support for logging in with multiple accounts. I think we won’t clash with each other, so we can each do our own thing independently.

Hopefully by autumn we will have a really nice release that brings features the community has long been asking for and makes the project much more future-proof.

Patrick Griffis: HTTP/2 in libsoup3, WebKitGTK, and Epiphany

https://blog.tingping.se/2021/06/07/http2-in-libsoup.html

The latest development release of libsoup 3, 2.99.8, now enables HTTP/2 by default. So let's look into what that means and how you can try it out.

Performance

In simple terms, HTTP/2 improves performance by using the network more efficiently when requesting multiple files from a single host. It does this by avoiding new connections whenever possible, and by allowing multiple requests to happen at the same time over that single connection.

It is easy to imagine many workloads this would improve, such as flatpak downloading a lot of objects from a single server.

Here are some examples in Epiphany:

gophertiles

This is a benchmark made to directly test the best case for HTTP/2. In the inspector (which has been improved to show more network information) you can see that HTTP/2 creates a single connection and completes in 229ms. HTTP/1, on the other hand, creates 6 connections, taking 1.5 seconds. This all happens on a network that is a best case for HTTP/1: a low-latency wired gigabit connection. As network latency increases, HTTP/2's lead grows dramatically.

browser screenshot using http2

browser screenshot using http1

Youtube

For a more real-world example, Youtube is a great demo. It hosts a lot of files for a webpage, but it isn't a perfect benchmark as it still involves multiple hosts that don't share connections. HTTP/2 still has a slight lead, again versus HTTP/1's best case.

inspector screenshot using http2

inspector screenshot using http1

Testing

This work is all new and we would really like some testing and feedback. The easiest way to run this yourself is with this custom Epiphany Flatpak (sorry for the slow download server, and it will not work with NVidia drivers).

You can get useful debug output both through the WebKit inspector (ctrl+shift+i) and by running with --env='G_MESSAGES_DEBUG=libsoup-http2;nghttp2'.

Please report any bugs you find to https://gitlab.gnome.org/gnome/libsoup/issues.