
June 13 2018

Christian Hergert: Keeping those headers aligned

One dead give-away of a GNOME/Gtk programmer is how they format their headers. For the better part of two decades, many of us have been trying to keep things aligned. Whether this is cargo-culted or of real benefit depends on the reader. Generally, I find them easier to filter through.

Unfortunately, none of indent, clang-format, or uncrustify has managed to exactly reproduce our style, which makes automated code formatting tools rather problematic for us.

For example, notice how the types and trailing asterisks stay aligned, in multiple directions.

FOO_EXPORT
void   foo_do_something_async  (Foo                  *self,
                                const gchar * const  *params,
                                GCancellable         *cancellable,
                                GAsyncReadyCallback   callback,
                                gpointer              user_data);
FOO_EXPORT
Bar   *foo_do_something_finish (Foo                  *self,
                                GAsyncResult         *result,
                                GError              **error);

Keeping that sort of code aligned is quite a pain, even for vim users who can fairly easily repeat commands. Worse, it can explode patches into unreadable messiness.

Anyway, I added a new command in Builder last night that will format these in this style so long as you don’t do anything to trip it up. Just select a block of function declarations, and run format-decls from the command bar.

It doesn’t yet handle vtable entries, but that shouldn’t be too painful. Also, it doesn’t handle miscellaneous other C code in-between declarations (except G_GNUC_* macros, __attribute__(), etc.).
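
For reference, vtable entries are the function pointers inside a class struct, which follow the same alignment conventions. A hypothetical sketch (not output of the tool):

struct _FooClass
{
  GObjectClass parent_class;

  void      (*do_something)      (Foo          *self,
                                  const gchar  *param);
  gboolean  (*do_something_else) (Foo          *self,
                                  GError      **error);
};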

Jussi Pakkanen: Easy MSI installer creator

Shipping programs on Windows platforms becomes a lot simpler (especially in corporate environments) if you can create an MSI installer. The only Free software solution for that is the WiX installer toolkit. The fairly big downside is that it is very much tied to how Visual Studio does things, with GUIDs and all that. The installer's contents and behavior are defined with an XML file whose format is both verbose and confusing.

Most Unix developers, once faced with this, will almost immediately blurt out something like "Why can't I just do DESTDIR=c:\some\path ninja install and have it make an installer out of the result?" So I created a script that does exactly that.

The basic usage is simple. First you do a staged install into some directory and create a JSON file describing the installation; it would look like this:

{
    "update_guid": "YOUR-GUID-HERE",
    "version": "1.0.0",
    "product_name": "Product name here",
    "manufacturer": "Your organization's name here",
    "name": "Name of product here",
    "name_base": "myprog",
    "comments": "A comment describing the program",
    "installdir": "MyProg",
    "license_file": "License.rtf",
    "parts": [
        {"id": "MainProgram",
         "title": "Program name",
         "description": "The MyProg program",
         "absent": "disallow",
         "staged_dir": "staging"
        }
    ]
}

Running the script would then create a standalone MSI installer with the contents of the staging directory.

Multiple components in one installer

Some programs ship in multiple parts, and the user can choose which of them to install. The script supports this. First you must split the files into multiple staging directories, one per component, and then add the corresponding entries to the parts array. See the repository for an example.
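
As an illustration, a parts array with an optional second component might look something like this (a sketch extrapolated from the single-part example above; check the repository for the exact supported keys):

    "parts": [
        {"id": "MainProgram",
         "title": "Program name",
         "description": "The MyProg program",
         "absent": "disallow",
         "staged_dir": "staging"
        },
        {"id": "DevelFiles",
         "title": "Development files",
         "description": "Headers and libraries for MyProg",
         "absent": "allow",
         "staged_dir": "staging-devel"
        }
    ]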

Michael Meeks: 2018-06-13 Wednesday.

Matthias Clasen: Flatpak in detail

Peter Hutterer: libinput and its device quirks files

This post does not describe a configuration system. If that's all you care about, read this post here and go be angry at someone else. Anyway, with that out of the way let's get started.

For a long time, libinput has supported model quirks (first added in Apr 2015). These model quirks are bitflags applied to some devices so we can enable special behaviours in the code. Model flags can be very specific ("this is a Lenovo x230 Touchpad") or generic ("This is a trackball") and it just depends on what the specific behaviour is that we need. The x230 touchpad for example has a custom pointer acceleration but trackballs are marked so they get some config options mice don't have/need.

In addition to model tags we also have custom attributes. These are free-form and provide information that we cannot get from the kernel. These too can be specific ("this model needs a pressure threshold of N") or generic ("bluetooth keyboards are external keyboards").

Overall, it's a good system. Most users never have to care that we even have this. The whole point is that any device-specific quirks need to be merged only once for each model, then everyone with the same device gets to benefit on the next update.

Originally quirks were hardcoded but this required rebuilding libinput for any changes. So we moved this to utilise the udev hwdb. For the trivial work of fetching udev properties we got a lot of flexibility in how we can match against devices. For example, an entry may look like this:


libinput:name:*AlpsPS/2 ALPS GlidePoint:dmi:*svnDellInc.:pnLatitudeE6220:*
LIBINPUT_ATTR_PRESSURE_RANGE=100:90

The above uses a name match and the dmi modalias match to apply a property for the touchpad on the Dell Latitude E6220. The exact match format is defined by a bunch of udev rules that ship as part of libinput.

Using the udev hwdb made the quirk storage a plaintext file that can be updated independently of libinput, including local overrides for testing things before merging them upstream. Having said that, it's definitely not public API and can change even between stable branch updates as properties are renamed or rescoped to fit the behaviour more accurately. For example, a model-specific tag may be renamed to a behaviour-specific tag as we find more devices affected by the same issue.

The main issue with the quirks now is that we keep accumulating more and more of them and I'm starting to hit limits with the udev hwdb match behaviour. The hwdb is great for single matches but not so great for cascading matches where one match may overwrite another match. The hwdb match system is largely implementation-defined so it's not always predictable which match rule wins out in the end.

Second, debugging the udev hwdb is not at all trivial. It's a bit like git - once you're used to it it's just fine but until then the air turns yellow with all the swearing being excreted by the unsuspecting user.

So long story short, libinput 1.12 will replace the hwdb model quirks database with a set of .ini files. The model quirks will be installed in /usr/share/libinput/ or whatever prefix your distribution prefers instead. It's a bunch of files with fairly simplistic instructions, each [section] has a set of MatchFoo=Bar directives and the ModelFoo=bar or AttrFoo=bar tags. See this file for an example. If all MatchFoo directives apply to a device, the Model and Attr tags are applied. Matching works in inter- and intra-file sequential order so the last section in a file overrides the first section of that file and the highest-sorting file overrides the lowest-sorting file. Otherwise the tags are accumulated, so if two files match on the same device with different tags, both tags are applied. So far, so unexciting.
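
As a sketch (illustrative only, not a verbatim copy of a shipped file), a quirks section equivalent to the hwdb entry shown earlier might look like this:

[Dell Latitude E6220 Touchpad]
MatchName=*AlpsPS/2 ALPS GlidePoint
MatchDMIModalias=dmi:*svnDellInc.:pnLatitudeE6220:*
AttrPressureRange=100:90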

Sometimes it's necessary to install a temporary local quirk until upstream libinput is updated or the distribution updates its package. For this, the /etc/libinput/local-overrides.quirks file is read in as well (if it exists). Note though that the config files are considered internal API, so any local overrides may stop working on the next libinput update. Should've upstreamed that quirk, eh?

These files give us the same functionality as the hwdb - we can drop in extra files without recompiling. They're more human-readable than a hwdb match and it's a lot easier to add extra match conditions to it. And we can extend the file format at will. But the biggest advantage is that we can quite easily write debugging tools to figure out why something works or doesn't work. The libinput list-quirks tool shows what tags apply to a device and using the --verbose flag shows you all the files and sections and how they apply or don't apply to your device.
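
For example (the device node and output here are illustrative):

$ libinput list-quirks /dev/input/event4
AttrPressureRange=100:90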

As usual, the libinput documentation has details.

June 12 2018

Bastien Nocera: Fingerprint reader support, the second coming
Felipe Borges: Contributing to Boxes

June 10 2018

Sam Thursfield: Tagcloud

June 09 2018

Jim Hall: Battery on my new Librem 13

Christian Hergert: A new completion engine for Builder

Since my initial announcement of Builder at GUADEC in 2014, I’ve had a vision in the back of my mind about how I’d like completion to work in Builder. However, there have been more important issues to solve and I’m just one person. So it was largely put on the back burner because after a few upstream patches, the GtkSourceView design was good enough.

However, as we start to integrate more external tooling into Builder, the demands and design of what those completion layers expect of the application have changed. And some of that is in conflict with the API/ABI we have in the long-term stable versions of GtkSourceView.

So over the past couple of weeks, I’ve built a new completion engine for Builder that takes these new realities into account.

A screenshot of Builder's new completion engine showing results from clang in a C source file.

It has a number of properties I wanted for Builder such as:

Reduced Memory and CPU Usage

Some tooling wants to give you a large set of proposals for completion and then expects the IDE to filter in the UI process. Notably, this is how Clang works. That means that a typical Gtk application written in C could easily have 25,000 potential completion proposals.

In the past we mitigated this through a number of performance tricks, but it still required creating thousands of GObjects, linked lists, queues, and such. That is an expensive thing to do on a key-press, especially when communicating with a sub-process used for crash-isolation.

So the new completion provider API takes advantage of GListModel, an interface for collections of GObjects whose items don’t need to be “inflated” until they’ve been requested. In doing so, we can get our GVariant IPC message from the gnome-builder-clang sub-process as a single allocation. Then, as results are requested by the completion display, a GObject is inflated on demand to reference an element of that larger GVariant.

In doing so, we provide a rough upper bound on how many objects need to be created at any time to display the results to the user. We can also still sort and filter the result set without having to create a GObject to represent the proposal. That’s a huge win on memory allocator churn.
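
To illustrate the idea, here is a minimal sketch (with hypothetical names, not Builder’s actual implementation) of a GListModel backed by a single GVariant that defers all object creation to get_item():

#include <gio/gio.h>

#define MY_TYPE_PROPOSALS (my_proposals_get_type ())
G_DECLARE_FINAL_TYPE (MyProposals, my_proposals, MY, PROPOSALS, GObject)

struct _MyProposals
{
  GObject   parent_instance;
  GVariant *results;  /* e.g. type "aa{sv}", one allocation from the IPC reply */
};

static GType
my_proposals_get_item_type (GListModel *model)
{
  return G_TYPE_OBJECT;
}

static guint
my_proposals_get_n_items (GListModel *model)
{
  MyProposals *self = MY_PROPOSALS (model);
  return self->results ? (guint)g_variant_n_children (self->results) : 0;
}

static gpointer
my_proposals_get_item (GListModel *model,
                       guint       position)
{
  MyProposals *self = MY_PROPOSALS (model);
  GVariant *child = g_variant_get_child_value (self->results, position);
  GObject *item = g_object_new (G_TYPE_OBJECT, NULL);

  /* A GObject is inflated only when the display requests this row; a
   * real proposal object would expose typed accessors instead of this. */
  g_object_set_data_full (item, "proposal", child,
                          (GDestroyNotify) g_variant_unref);
  return item;
}

static void
list_model_iface_init (GListModelInterface *iface)
{
  iface->get_item_type = my_proposals_get_item_type;
  iface->get_n_items = my_proposals_get_n_items;
  iface->get_item = my_proposals_get_item;
}

G_DEFINE_TYPE_WITH_CODE (MyProposals, my_proposals, G_TYPE_OBJECT,
                         G_IMPLEMENT_INTERFACE (G_TYPE_LIST_MODEL,
                                                list_model_iface_init))

static void
my_proposals_finalize (GObject *object)
{
  g_clear_pointer (&MY_PROPOSALS (object)->results, g_variant_unref);
  G_OBJECT_CLASS (my_proposals_parent_class)->finalize (object);
}

static void
my_proposals_class_init (MyProposalsClass *klass)
{
  G_OBJECT_CLASS (klass)->finalize = my_proposals_finalize;
}

static void
my_proposals_init (MyProposals *self)
{
}

MyProposals *
my_proposals_new (GVariant *results)
{
  MyProposals *self = g_object_new (MY_TYPE_PROPOSALS, NULL);
  self->results = g_variant_ref_sink (results);
  return self;
}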

Consistent and Convenient Refiltering

Now that we have external tooling that expects UI-side refiltering of proposals, we need to make that easier for tooling to do without having to re-query. So the fuzzy search and highlighting tools have been moved into IdeCompletion for easy access by completion providers.

As additional text is provided for completion, the providers are notified to perform filters on their result set. Since the results are GListModel-based, everything updates in the UI out-of-band nicely with a minimal number of gsignal emissions. Compare this to GtkTreeModel which has to emit signals for every row insertion, change, or deletion!
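
For instance, a model implementation can announce a whole refiltered set with a single emission; a sketch (old_n and new_n are placeholders for the list sizes before and after the refilter):

#include <gio/gio.h>

static void
refilter_done (GListModel *model,
               guint       old_n,
               guint       new_n)
{
  /* One ::items-changed emission covers an arbitrary batch of changes,
   * whereas GtkTreeModel emits one signal per row inserted/changed/deleted. */
  g_list_model_items_changed (model, 0, old_n, new_n);
}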

Alternative Styling

When working with completions for programming languages, we’re often dealing with 3 potential groups of content. The return value, the name and possible parameters, and miscellaneous data. To get the styling we want for all of this, I chose to forgo the use of GtkTreeView and use widgets directly. That means that we can use CSS like we do everywhere else. But also, it means that some additional engineering is required.

We only want to create widgets for the visible rows, because otherwise we’re wasting memory and time crunching CSS for things that won’t be seen. We also want to avoid creating new widgets every time the visible range of proposals is changed.

The result is IdeCompletionListBox, which is a GtkBox containing GtkListBoxRows and some GtkSizeGroups to give things a columnar effect. Because the number of created widgets is small, things stay fast and snappy while giving us the desired effect. Notably, it implements GtkScrollable, so if you place it in a GtkScrolledWindow you still get the expected behavior.

Furthermore, we can adjust the window sizing and placement to be more natural for code-related proposals.

Dynamic Priority Control

We want the ability to change the priority of some completion providers based on the context of the completion. The new design allows for providers to increase their priority when they know they have something of high-importance given some piece of contextual knowledge.

Long term, we also want to provide an API for providers to register a small number of suggested completions that will be raised to the top-level, regardless of what provider gave them. This is necessary instead of having global “scoring” since that would require both O(n) scans of the data set as well as coming up with a strategy to score disparate systems (and search engines prove that rarely works well).

More to do

There are still a couple things that I think we should address that may influence the API design more. For example:

  • How should we handle string interpolation? A simplified API for completions when working inside of strings might be useful. Think strftime(), printf(), etc as potential examples here.
  • The upcoming Gtk+ 3.24 release will give us access to the move_to_rect() API. Combined with some Wayland xdg_popup improvements, this could allow us to make our display widget more flexible.
  • Parameter completion is still a bit of an annoying process. We could probably come up with a strategy to make the display look a lot better here.
  • Give some tweaks and knobs for how much and what to complete (just function name vs parameters and types).

Conclusions

Rarely do I write any code that doesn’t have bugs. With this landing in Builder Nightly soon, I could use some more testing and bug filing from the community at large.

I’m very happy with the improvements over the past couple of months. Between moving Clang out of process and this new engine, which together make clang completion fast, I think we’re in a much better place.

We can’t get this design into older GtkSourceView releases, but we can probably look at integrating some form of it into what will eventually target Gtk4. I would be very happy if it influenced new API releases of the library so that we don’t need to carry the implementation downstream.

Saurabh Singh: Adding self registering keys to lua-factory

For the past few weeks I’ve been hacking away at GNOME Games and Grilo. Here’s what I’ve done so far.

May 14th - June 3rd

My first task was to fetch metadata from thegamesdb and use it to provide the developer and publisher of a game to GNOME Games. For this, I had to add the appropriate system keys to Grilo; the only problem was that the keys in question were too app-specific to be added as system keys, and there was no provision for self-registering keys in Lua-based sources.

The struggle

The solution was pretty simple: I began implementing self-registering keys in Grilo for Lua sources to use, all the while fixing any bugs I encountered on the way.

Bastien Nocera gave me a very bright idea: register new keys while setting their value in the GrlData itself. I completed this by implementing two functions in Grilo.

  • grl_data_set_for_id ()
  • grl_data_add_for_id ()

How do they work?

void grl_data_set_for_id (GrlData *data, const gchar *key_name, const GValue *value);

The key_name to be registered, the value to be set, and the data object are first passed as parameters to the function.

  registry = grl_registry_get_default ();
  key_id = grl_registry_lookup_metadata_key (registry, key_name);

The key_name is then looked up in the registry for any matching GrlKeyID.

  if (key_id != GRL_METADATA_KEY_INVALID) {
    grl_data_set (data, key_id, value);
  }

If found, the data is set normally using grl_data_set ().

  else {
    switch (G_VALUE_TYPE (value)) {
    case G_TYPE_INT:
      spec = g_param_spec_int (key_name,
                               key_name,
                               key_name,
                               0, G_MAXINT,
                               0,
                               G_PARAM_STATIC_STRINGS | G_PARAM_READWRITE);

      key_id = grl_registry_register_metadata_key (registry, spec, GRL_METADATA_KEY_INVALID, NULL);
      grl_data_set (data, key_id, value);
      break;

    case G_TYPE_INT64:
      spec = g_param_spec_int64 (key_name,
                                 key_name,
                                 key_name,
                                 -1, G_MAXINT64,
                                 -1,
                                 G_PARAM_STATIC_STRINGS | G_PARAM_READWRITE);

      key_id = grl_registry_register_metadata_key (registry, spec, GRL_METADATA_KEY_INVALID, NULL);
      grl_data_set (data, key_id, value);
      break;

    case G_TYPE_STRING:
      spec = g_param_spec_string (key_name,
                                  key_name,
                                  key_name,
                                  NULL,
                                  G_PARAM_STATIC_STRINGS | G_PARAM_READWRITE);

      key_id = grl_registry_register_metadata_key (registry, spec, GRL_METADATA_KEY_INVALID, NULL);
      grl_data_set (data, key_id, value);
      break;

    case G_TYPE_BOOLEAN:
      spec = g_param_spec_boolean (key_name,
                                   key_name,
                                   key_name,
                                   FALSE,
                                   G_PARAM_STATIC_STRINGS | G_PARAM_READWRITE);

      key_id = grl_registry_register_metadata_key (registry, spec, GRL_METADATA_KEY_INVALID, NULL);
      grl_data_set (data, key_id, value);
      break;

    case G_TYPE_FLOAT:
      spec = g_param_spec_float (key_name,
                                 key_name,
                                 key_name,
                                 0, G_MAXFLOAT,
                                 0,
                                 G_PARAM_STATIC_STRINGS | G_PARAM_READWRITE);

      key_id = grl_registry_register_metadata_key (registry, spec, GRL_METADATA_KEY_INVALID, NULL);
      grl_data_set (data, key_id, value);
      break;

    default:
      if (G_VALUE_TYPE (value) == G_TYPE_DATE_TIME) {
        spec = g_param_spec_boxed (key_name,
                                   key_name,
                                   key_name,
                                   G_TYPE_DATE_TIME,
                                   G_PARAM_STATIC_STRINGS | G_PARAM_READWRITE);

        key_id = grl_registry_register_metadata_key (registry, spec, GRL_METADATA_KEY_INVALID, NULL);
        grl_data_set (data, key_id, value);
      }
    }
  }
}

If not found, the appropriate GParamSpec is created for that particular GType, the key is then registered using grl_registry_register_metadata_key (), and the value is set using grl_data_set (). This function sets the first value associated with key_name in data. If key_name already has a first value, the old value is replaced by the new one.

void grl_data_add_for_id (GrlData *data, const gchar *key_name, const GValue *value);

This function works similarly to the one above, with only a few minor differences. The value associated with key_name is appended to data instead of set. The key_name is used to create a new GParamSpec instance, which is further used to create and register a key using grl_registry_register_metadata_key (). The value is added using grl_data_add* instead of grl_data_set ().
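
As a usage sketch (the key name and value here are hypothetical), registering and setting a custom key from C becomes a single call:

GValue value = G_VALUE_INIT;

g_value_init (&value, G_TYPE_STRING);
g_value_set_string (&value, "Example Studios");

/* Registers a "developer" string key on first use, then sets it. */
grl_data_set_for_id (GRL_DATA (media), "developer", &value);

g_value_unset (&value);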

After implementing these, I was just a step away from allowing Lua sources to have self-registering keys. The only work left was to modify the Lua plugins to use the above functions when metadata is added to the GrlMedia.

Now, adding self-registering keys was as easy as typing

    if game.Developer then
      media.developer = game.Developer.xml
    end

Lastly, I added a test case to grilo-plugins to verify these changes. With this, the work of adding self-registering keys to Grilo was complete.

Adding Developer & Publisher to Games

GNOME Games currently has a very basic UI. I added Developer & Publisher to the Game object, which will further be used for segregating games into different views, such as a Developer view allowing users to select games from a particular developer and, similarly, a Publisher view. I’ve already started working on this and will be posting more on it soon.

I hope to see you all at GUADEC this year. Cheers!

June 08 2018

Will Thompson: When is an exit code not an exit code?

Ivan Molodetskikh: GSoC 2018: Filter Infrastructure

Introduction

This summer I’m working on librsvg, a GNOME library for rendering SVG files, particularly on porting the SVG filter effects from C to Rust. That involves separating the code for different filters from one huge C file into individual files for each filter, and then porting the filter rendering infrastructure and the individual filters.

Thankfully, in the large C file the code for different filters was divided by comment blocks, so several vim macros later I was done with the not so exciting splitting part.

Representing Filters in Rust

SVG filter effects are applied to an existing SVG element to produce a modified graphical result. Each filter consists of a number of filter primitives. The primitives take raster images (bitmaps) as an input (this can be, for example, the rasterized element where the filter was applied, the background snapshot of the canvas at the time the filter was invoked, or an output of another filter primitive), do something with it (like move the pixels to a different position, apply Gaussian blur, or blend two input images together) and produce raster images as an output.

Each filter primitive has a number of properties. The common properties include the bounds of the region where the filter primitive is doing its processing, the name assigned to the primitive’s result, and the input that the primitive operates on. I collected the common properties into the following types:

struct Primitive {
    x: Cell<Option<RsvgLength>>,
    y: Cell<Option<RsvgLength>>,
    width: Cell<Option<RsvgLength>>,
    height: Cell<Option<RsvgLength>>,
    result: RefCell<Option<String>>,
}

struct PrimitiveWithInput {
    base: Primitive,
    in_: RefCell<Option<Input>>,
}

Each filter primitive struct is meant to contain one of these two common types along with any extra properties as needed. The common types provide functions for parsing their respective properties so that the code need not be duplicated in each filter.

Note that these properties are just “descriptions” of the final values to be used during rendering. For example, an RsvgLength can be equal to 2 or 50%, and the actual length in pixels is evaluated during rendering and depends on various rendering state such as the coordinate system in use and the size of the enclosing element.

The filter primitive processing behavior is nicely described as a trait:

trait Filter {
    fn render(&self, ctx: &FilterContext)
        -> Result<FilterResult, FilterError>;
}

Here FilterContext contains various filter state such as the rasterized bitmap representation of the SVG element the filter is being applied to and results of previously rendered primitives, and allows retrieving the necessary input bitmaps. Successful rendering results in a FilterResult which has the name assigned to the primitive and the output image, and errors (like non-existent input filter primitive) end up in FilterError.

When a filter is invoked, it goes through its child nodes (filter primitives) in order, render()s them and stores the results in the FilterContext.

Pixel Iteration

Since many filter primitives operate on a per-pixel basis, it’s important to have a convenient way of transforming the pixel values.

Librsvg uses image surfaces from Cairo, a 2D graphics library, for storing bitmaps. An image surface stores its pixel values in RGBA format in a large contiguous array row by row with optional strides between the rows. The plain way of accessing the values is image[y * stride + x * 4 + ch] where ch is 0, 1, 2 and 3 for R, G, B and A respectively. However, writing this out is rather tedious and error-prone.

As the first step, I added a pixel value struct:

struct Pixel {
    pub r: u8,
    pub g: u8,
    pub b: u8,
    pub a: u8,
}

and extended cairo-rs's image surface data accessor with the following methods:

fn get_pixel(
    &self,
    stride: usize,
    x: usize,
    y: usize,
) -> Pixel;

fn set_pixel(
    &mut self,
    stride: usize,
    pixel: Pixel,
    x: usize,
    y: usize,
);

using the known trick of declaring a trait containing the new methods and implementing it for the target type. Unfortunately, stride has to be passed through manually because the (foreign) data accessor type doesn’t offer a public way of retrieving it. Adding methods to cairo-rs directly would allow us to get rid of this extra argument.

Next, since the pattern of iterating over pixels of an image surface within the given bounds comes up rather frequently in filter primitives, I added a Pixels iterator inspired by the image crate. It allows writing code like this:

for (x, y, pixel) in Pixels::new(&image, bounds) {
    /* ... */
}

instead of the repetitive plain version:

for y in bounds.y0..bounds.y1 {
    for x in bounds.x0..bounds.x1 {
        let pixel = image.get_pixel(stride, x, y);
        /* ... */
    }
}

Filters with multiple input images can process pixels simultaneously in the following fashion using the standard Rust iterator combinators:

for (x, y, p, p2) in Pixels::new(&image, bounds)
    .map(|(x, y, p)| {
        (x, y, p, image2.get_pixel(stride, x, y))
    })
{
    let out_pixel = /* ... */;
    out_image.set_pixel(stride, out_pixel, x, y);
}

Benchmarking

Rust is known for its zero-cost abstractions; however, it’s still important to keep track of performance, because it’s entirely possible to write code in such a way that the abstractions are hard to optimize away. Fortunately, a benchmarking facility is provided on nightly Rust out of the box: the test feature with the Bencher type.

Benchmark sources are usually placed in the benches/ subdirectory of the crate and look like this:

#![feature(test)]
extern crate rsvg_internals;

#[cfg(test)]
mod tests {
    use super::*;
    use test::Bencher;

    #[bench]
    fn my_benchmark_1(b: &mut Bencher) {
        /* initialization */

        b.iter(|| {
            /* code to be benchmarked */
        });
    }

    #[bench]
    fn my_benchmark_2(b: &mut Bencher) {
        /* ... */
    }

    /* ... */
}

After ensuring the crate’s crate-type includes "lib", you can run benchmarks with cargo +nightly bench.

I created three benchmarks, one for the straightforward iteration:

b.iter(|| {
    let mut r = 0;
    let mut g = 0;
    let mut b = 0;
    let mut a = 0;

    for y in BOUNDS.y0..BOUNDS.y1 {
        for x in BOUNDS.x0..BOUNDS.x1 {
            let base = y * stride + x * 4;

            r += image[base + 0] as usize;
            g += image[base + 1] as usize;
            b += image[base + 2] as usize;
            a += image[base + 3] as usize;
        }
    }

    (r, g, b, a)
})

One for iteration using get_pixel():

b.iter(|| {
    let mut r = 0;
    let mut g = 0;
    let mut b = 0;
    let mut a = 0;

    for y in BOUNDS.y0..BOUNDS.y1 {
        for x in BOUNDS.x0..BOUNDS.x1 {
            let pixel = image.get_pixel(stride, x, y);

            r += pixel.r as usize;
            g += pixel.g as usize;
            b += pixel.b as usize;
            a += pixel.a as usize;
        }
    }

    (r, g, b, a)
})

And one for the Pixels iterator:

b.iter(|| {
    let mut r = 0;
    let mut g = 0;
    let mut b = 0;
    let mut a = 0;

    for (_x, _y, pixel) in Pixels::new(&image, BOUNDS) {
        r += pixel.r as usize;
        g += pixel.g as usize;
        b += pixel.b as usize;
        a += pixel.a as usize;
    }

    (r, g, b, a)
})

Here are the results I’ve got:

test tests::bench_pixels                   ... bench:     991,137 ns/iter (+/- 62,654)
test tests::bench_straightforward          ... bench:     992,124 ns/iter (+/- 7,119)
test tests::bench_straightforward_getpixel ... bench:   1,034,037 ns/iter (+/- 11,121)

Looks like the abstractions didn’t introduce any overhead indeed!

Implementing a Filter Primitive

Let’s look at how to write a simple filter primitive in Rust. As an example I’ll show the offset filter primitive which moves its input on the canvas by a specified number of pixels.

Offset has an input and two additional properties for the offset amounts:

struct Offset {
    base: PrimitiveWithInput,
    dx: Cell<RsvgLength>,
    dy: Cell<RsvgLength>,
}

Since each filter primitive is an SVG node, it needs to implement NodeTrait which contains a function for parsing the node’s properties:

impl NodeTrait for Offset {
    fn set_atts(
        &self,
        node: &RsvgNode,
        handle: *const RsvgHandle,
        pbag: &PropertyBag,
    ) -> NodeResult {
        // Parse the common properties.
        self.base.set_atts(node, handle, pbag)?;

        // Parse offset-specific properties.
        for (_key, attr, value) in pbag.iter() {
            match attr {
                Attribute::Dx => self.dx.set(parse(
                    "dx",
                    value,
                    LengthDir::Horizontal,
                    None,
                )?),
                Attribute::Dy => self.dy.set(parse(
                    "dy",
                    value,
                    LengthDir::Vertical,
                    None,
                )?),
                _ => (),
            }
        }

        Ok(())
    }
}

Finally, we need to implement the Filter trait. Note that, compared to the trait definition shown earlier, render() here accepts an additional &RsvgNode argument, which refers to the filter primitive node. It’s different from &self in that it contains various common SVG node state.

impl Filter for Offset {
    fn render(
        &self,
        node: &RsvgNode,
        ctx: &FilterContext,
    ) -> Result<FilterResult, FilterError> {
        // Compute the processing region bounds.
        let bounds = self.base.get_bounds(ctx);

        // Compute the final property values.
        let cascaded = node.get_cascaded_values();
        let values = cascaded.get();

        let dx = self
            .dx
            .get()
            .normalize(&values, ctx.drawing_context());
        let dy = self
            .dy
            .get()
            .normalize(&values, ctx.drawing_context());

        // The final offsets depend on the currently active
        // affine transformation.
        let paffine = ctx.paffine();
        let ox = (paffine.xx * dx + paffine.xy * dy) as i32;
        let oy = (paffine.yx * dx + paffine.yy * dy) as i32;

        // Retrieve the input surface.
        let input_surface =
            get_surface(self.base.get_input(ctx))?;

        // input_bounds contains all pixels within bounds,
        // for which (x + ox) and (y + oy) also lie
        // within bounds.
        let input_bounds = IRect {
            x0: clamp(bounds.x0 - ox, bounds.x0, bounds.x1),
            y0: clamp(bounds.y0 - oy, bounds.y0, bounds.y1),
            x1: clamp(bounds.x1 - ox, bounds.x0, bounds.x1),
            y1: clamp(bounds.y1 - oy, bounds.y0, bounds.y1),
        };

        // Create an output surface.
        let mut output_surface =
            ImageSurface::create(
                cairo::Format::ARgb32,
                input_surface.get_width(),
                input_surface.get_height(),
            ).map_err(FilterError::OutputSurfaceCreation)?;

        let output_stride =
            output_surface.get_stride() as usize;

        // An extra scope is needed because output_data
        // borrows output_surface, but we need to move
        // out of it to return it.
        {
            let mut output_data =
                output_surface.get_data().unwrap();

            for (x, y, pixel) in
                Pixels::new(&input_surface, input_bounds)
            {
                let output_x = (x as i32 + ox) as usize;
                let output_y = (y as i32 + oy) as usize;
                output_data.set_pixel(
                    output_stride,
                    pixel,
                    output_x,
                    output_y,
                );
            }
        }

        // Return the result of the processing.
        Ok(FilterResult {
            name: self.base.result.borrow().clone(),
            output: FilterOutput {
                surface: output_surface,
                bounds,
            },
        })
    }
}

Conclusion

The project is coming along very nicely with a few simple filters already working in Rust and a couple of filter tests getting output closer to the reference images.

I’ll be attending this year’s GUADEC, so I hope to see you there in July!

June 07 2018

Christian Schaller: 3rd Party Software in Fedora Workstation

Peter Hutterer: Observations on trackpoint input data

This time we talk trackpoints. Or pointing sticks, or whatever else you want to call that thing between the G, H and B keys. If you don't have one and you've never seen one, prepare to be amazed. [1]

Trackpoints are tiny joysticks that react to pressure [2], convert that pressure into relative x/y events and pass that on to whoever is interested in it. The harder you push, the higher the deltas. This is where the simple and obvious stops and it gets difficult. But then again, if it was that easy I wouldn't write this post, you wouldn't have anything to read, so somehow everyone wins. Whoop-dee-doo.

All the data and measurements below refer to my trackpoint, a Lenovo T440s. They may not apply to any other trackpoint, including those on different laptop models or even on the same laptop model with different firmware versions. I've written the below with a lot of cringing and handwringing. I want to present data that is irrefutable, but the universe is against me and what the universe wants, the universe gets. Approximately every second sentence below has a footnote of "actual results may vary". Feel free to re-create the data on your device though.

Measuring trackpoint range is highly subjective, so you'll have to trust me when I describe how specific speeds/pressure ranges feel. There are three ranges of pressure on my trackpoint (sort-of):

  • Pressure range one: When resting the finger on the trackpoint I don't really need to apply noticeable pressure to make the trackpoint send events. Just moving the finger on the trackpoint makes it send events, albeit sporadically.
  • Pressure range two: Going beyond range one requires applying real pressure and feels to me like we're getting into RSI territory. Not a problem for short periods, but definitely not something I'd want all the time. It's the pressure I'd use to cross the screen.
  • Pressure range three: I have to push hard. I definitely wouldn't want to do this during everyday interaction and it just feels wrong anyway. This pressure range is for testing maximum deltas, not one you would want to use otherwise.

The first and second ranges are more easily delineated than the second and third because going from almost no pressure to some real pressure is easy. Going from some pressure to too much pressure is blurrier; there is some overlap between the second and third range. Either way, keep these ranges in mind as I'll be using them in the explanations below.

Ok, so with the physical conditions explained, let's look at what we have to worry about in software:

  • It is impossible to provide a constant input to a trackpoint if you're a puny human. Without a robotic setup you just cannot apply constant pressure so any measurements have some error. You also get to enjoy a feedback loop - pressure influences pointer motion but that pointer motion influences how much pressure you inadvertently apply. This makes any comparison filled with errors. I don't know if I'm applying the same pressure on the two devices I'm testing, I don't know if a user I'm asking to test something uses constant/the same/the right pressure.
  • Not all trackpoints are created equal. Some trackpoints (mostly in Lenovos) have configurable sensitivity - 256 levels of it. [3] So one trackpoint measured does not equal another trackpoint unless you keep track of the firmware-set sensitivity. Those trackpoints also have other toggles. More importantly and AFAIK, this type of trackpoint also has a built-in acceleration curve. [4] Other trackpoints (ALPS) just have a fixed sensitivity; I have no idea whether those have a built-in acceleration curve or merely a linear-ish pressure->delta mapping.

    Due to some design choices we made years ago, systemd increases the sensitivity on some devices (the POINTINGSTICK_SENSITIVITY property). So even on a vanilla install, you can't actually rely on the trackpoint being set to the manufacturer default. This was an attempt to make trackpoints behave more consistently; systemd had the hwdb and it seemed like the right place to put device-specific quirks. In hindsight, it was the wrong design choice.
  • Deltas are ... unreliable. At high sensitivity and high pressures you might get a sequence of [7, 7, 14, 8, 3, 7]. At lower pressure you get the deltas at seemingly random intervals. This could be because it's hard to keep exact constant pressure, it could be a hardware issue.
  • evdev has been the default driver for almost a decade and before that it was the mouse driver for a long time. So the kernel will "Divide 4 since trackpoint's speed is too fast" [sic] for some trackpoints. Or by 8. Or not at all. In other words, the kernel adjusts for what the default user space is and userspace is based on what the kernel provides. On the newest ALPS trackpoints the kernel has stopped doing any in-kernel scaling (good!) but that means that the deltas are out by a factor of 8 now.
  • Trackpoints don't always have the same pressure ranges for x/y. AFAICT the y range is usually a bit less than the x range on many or most trackpoints. A bit weird because the finger position would suggest that strong vertical pressure is easier to apply than sideways pressure.
  • (Some? All?) Trackpoints have built-in calibration procedures to find and set their own center-point. Without that you'll get the trackpoint eventually being ever so slightly off center over time, causing a mouse pointer that just wanders off the screen, possibly into the woods, without the obligatory red cape and basket full of whatever grandma eats when she's sick.

    So the calibration is required but can be triggered accidentally by the user: If you push with the same pressure into the same direction for 2-5 seconds (depending on $THINGS) you trigger the calibration procedure and the current position becomes the new center point. When you release, the cursor wanders off for a few seconds until the calibration sets things straight again. If you ever see the cursor buzz off in a fixed direction or walking backwards for a centimetre or two you've triggered that calibration. The only way to avoid this is to make sure the pointer acceleration mechanism allows you to reach any target within 2 seconds and/or never forces you to apply constant pressure for more than 2 seconds. Now there's a challenge...

Ok. If you've been paying attention instead of hoping for a TLDR that's more elusive than Godot, we're now aware of the various drawbacks of collecting data from a trackpoint. Let's go and look at data. Sensitivity is set to the kernel default of 128 in sysfs, the default reporting rate is 100Hz. All observations are YMMV and whatnot, especially the latter.

Trackpoint deltas are in integers but the dynamic range of delta values is tiny. You mostly get 1 or 2 and it requires quite a fair bit of pressure to get up to 5 or more. At low pressure you get deltas of 1, but less frequently. Visualised, the relationship between deltas and the interval between deltas is like this:


Illustration of the relation between pressure and deltas/intervals

At low pressure, we get deltas of 1 but high intervals. As the pressure increases, the interval between events shrinks until at some point the interval between events matches the reporting rate (100Hz/10ms). Increasing the pressure further now increases the deltas while the intervals remain at the reporting rate. For example, here's an event sequence at low pressure:

E: 63796.187226 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +20ms
E: 63796.227912 0002 0001 0001 # EV_REL / REL_Y 1
E: 63796.227912 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +40ms
E: 63796.277549 0002 0000 -001 # EV_REL / REL_X -1
E: 63796.277549 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +50ms
E: 63796.436793 0002 0000 -001 # EV_REL / REL_X -1
E: 63796.436793 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +159ms
E: 63796.546114 0002 0001 0001 # EV_REL / REL_Y 1
E: 63796.546114 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +110ms
E: 63796.606765 0002 0000 -001 # EV_REL / REL_X -1
E: 63796.606765 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +60ms
E: 63796.786510 0002 0000 -001 # EV_REL / REL_X -1
E: 63796.786510 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +180ms
E: 63796.885943 0002 0001 0001 # EV_REL / REL_Y 1
E: 63796.885943 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +99ms
E: 63796.956703 0002 0000 -001 # EV_REL / REL_X -1
E: 63796.956703 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +71ms

This was me pressing lightly but with perceived constant pressure, and the time stamps between events go from 20ms to 180ms. Remember what I said above about unreliable deltas? Yeah, that.

Here's an event sequence from a trackpoint at a pressure that triggers almost constant reporting:


E: 72743.926045 0002 0000 -001 # EV_REL / REL_X -1
E: 72743.926045 0002 0001 -001 # EV_REL / REL_Y -1
E: 72743.926045 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +10ms
E: 72743.939414 0002 0000 -001 # EV_REL / REL_X -1
E: 72743.939414 0002 0001 -001 # EV_REL / REL_Y -1
E: 72743.939414 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +13ms
E: 72743.949159 0002 0000 -002 # EV_REL / REL_X -2
E: 72743.949159 0002 0001 -002 # EV_REL / REL_Y -2
E: 72743.949159 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +10ms
E: 72743.956340 0002 0000 -001 # EV_REL / REL_X -1
E: 72743.956340 0002 0001 -001 # EV_REL / REL_Y -1
E: 72743.956340 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +7ms
E: 72743.978602 0002 0000 -001 # EV_REL / REL_X -1
E: 72743.978602 0002 0001 -001 # EV_REL / REL_Y -1
E: 72743.978602 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +22ms
E: 72743.989368 0002 0000 -001 # EV_REL / REL_X -1
E: 72743.989368 0002 0001 -001 # EV_REL / REL_Y -1
E: 72743.989368 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +11ms
E: 72743.999342 0002 0000 -001 # EV_REL / REL_X -1
E: 72743.999342 0002 0001 -001 # EV_REL / REL_Y -1
E: 72743.999342 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +10ms
E: 72744.009154 0002 0000 -001 # EV_REL / REL_X -1
E: 72744.009154 0002 0001 -001 # EV_REL / REL_Y -1
E: 72744.009154 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +10ms
E: 72744.018965 0002 0000 -002 # EV_REL / REL_X -2
E: 72744.018965 0002 0001 -003 # EV_REL / REL_Y -3
E: 72744.018965 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +9ms

Note how there is an event in there with a 22ms interval? Maintaining constant pressure is hard. You can re-create the above recordings by running evemu-record.

Pressing hard I get deltas up to maybe 5. That's staying within the second pressure range outlined above, I can force higher deltas but what's the point. So the dynamic range for deltas alone is terrible - we have a grand total of 5 values across the comfortable range.

Changing the sensitivity setting higher than the default will send higher deltas, including deltas greater than 1 before reaching the report rate. Setting it to lower than the default (does anyone do that?) sends smaller deltas. But doing so means changing the hardware properties, similar to how some gaming mice can switch dpi on the fly.

I leave you with a fun thought exercise in correlation vs. causation: your trackpoint uses PS/2, your touchpad probably uses PS/2. Your trackpoint has a reporting rate of 100Hz, but when you touch the touchpad half the bandwidth is used by the touchpad. So your trackpoint sends half the events when you have the palm resting on the touchpad. From my observations, the deltas don't double in size. In other words, your trackpoint just slows down to roughly half the speed. I can reduce the reporting rate to approximately a third by putting two or more fingers onto the touchpad. Trackpoints haven't changed that much over the years but touchpads have. So the takeaway is: 10 years ago touchpads were smaller and trackpoints were faster, simply because you could use them without touching the touchpad. Mind blown (if true; measuring these things is hard...)

Well, that was fun, wasn't it? I'm glad you stayed this long, because I did and it'd feel lonely otherwise. In the next post I'll outline the pointer acceleration curves for trackpoints and what we're going to do about them. Besides despairing, that is.

[1] I doubt you will be, but it always pays to be prepared.
[2] In this post I'm using "pressure" here as side-ways pressure, not downwards pressure. Some trackpoints can handle downwards pressure and modify the acceleration based on it (or expect userland to do so).
[3] Not that this number is always correct; the Lenovo CompactKeyboard USB with Trackpoint has a default sensitivity of 5 - any laptop trackpoint would be unusable at that low a value (their default is 128).
[4] I honestly don't know this for sure, but ages ago I found a hw spec document that actually detailed the process. Search for "TrackPoint System Version 4.0 Engineering Specification", page 43, "2.6.2 DIGITAL TRANSFER FUNCTION".

June 06 2018

Marco Barisione: Using clang-format only on newly written code
Richard Hughes: Updating Wacom Firmware In Linux

Peter Hutterer: libinput is now on gitlab.freedesktop.org

Thanks to Daniel Stone's efforts, libinput is now on gitlab. For a longer explanation on the move from the old freedesktop infrastructure (cgit, bugzilla, etc.) to the gitlab instance hosted by freedesktop.org, see this email.

All open bugs have been migrated from bugzilla to gitlab too, the documentation has been updated accordingly, and we're ready to go. The new base URL for libinput in gitlab is: https://gitlab.freedesktop.org/libinput/.

June 05 2018

Alexandru Fazakas: 23rd of April

On the 23rd of April this year, the accepted GSoC projects were announced. It was a super stressful day for me and I barely slept the previous night, as I was eagerly waiting for the list to be posted.

I kept checking the official site and my email, but nothing would show up! I wanted to know as soon as possible, be it 6 AM! Of course, I still had to wait most of the day for it, as I'm on EET and (presumably) the GSoC organisation is not in the same time zone.

A university organization which I am part of had a meeting that day, which I attended. While different ideas were being tossed around and discussed, I decided to refresh the Google Summer of Code homepage to see if anything was up, although no email had been delivered.

Lo and behold (not as surprising for you as it was for me, considering I am writing this), my project had been accepted and I was about to start my bonding period as an official member and contributor in the GNOME community!

I doubt I’ll soon (if ever) forget the feelings I went through as I saw my name listed there. At first, I could not find myself. The GNOME projects list kept going and going; I even went past my fellow Nautilus GSoC'er's project and did not see my name. Eventually, I saw it, “Tests, profiling and debug framework for Nautilus”, with my name on top of it. It felt both rewarding (as I had been contributing to Nautilus for a while up to that point) and relaxing, knowing I would get to contribute to something I use in my day-to-day work, alongside the people I got to learn so much from, all whilst being part of a huge project whose name is familiar to millions of users.

What followed was two weeks of bonding and interacting with the community (which I had already grown fairly familiar with), learning the workflow and getting to know the project and organization even better. Luckily for me, having contributed for about 5-6 months already helped with these, so the bonding period felt comfortable.

Alexandru Fazakas: Nautilus File Operations

The first thing I started working on, under the guidance of my mentor, Carlos Soriano, was the implementation of unit tests.

While unit tests are meant to be fairly short and simple, tackling individual instances of a functionality or component, Nautilus would not really allow us to do that. Due to Nautilus’ nature and its tight relation to I/O operations, unit testing for us meant cherry-picking the simpler functions we use and testing those. For the larger, more important components, we’d rely on integration tests, which represented one of the following items on our list.

We started working on Nautilus file operations first, which involve functionalities such as copy/paste, move, trashing and deleting. Although I had contributed one unit test before, I decided to start small. While going through our file-operations code, I found a function which tests whether a directory contains any child files. As I needed to get a better hold of the libraries we would work with (the GLib testing framework, in particular), I decided to write a unit test for this function. Fortunately, its implementation was more or less straightforward, so testing it did not prove too difficult beyond designing the tests and edge cases. The merge request I opened for it was accepted, after a few changes, and is now in our master version (as you can see here).
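
As a sketch of what such a test looks like with the GLib testing framework (the test path and body here are placeholders, not the actual Nautilus test):

#include <glib.h>

static void
test_dir_has_files (void)
{
  /* Set up a directory fixture, call the function under test and
   * assert on the result; edge cases get their own test functions. */
  g_assert_true (TRUE);
}

int
main (int argc, char *argv[])
{
  g_test_init (&argc, &argv, NULL);
  g_test_add_func ("/file-operations/dir-has-files", test_dir_has_files);
  return g_test_run ();
}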

Next, I wanted to go with something bigger, something which we had outlined when talking about what exactly we wanted to test, so copy/paste it was! Based on my previous experience, I tried creating a couple of small file hierarchies and copy/pasting these, expecting everything to go smoothly. Boy, was I wrong. What followed was my mentor explaining to me why it did not work the way I expected and how to approach it. It turns out pretty much all of these operations are asynchronous, so before writing any actual tests we need to create a synchronous version, one which the “async” one can then use, just on a different thread.
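
The pattern looks roughly like this (a sketch with hypothetical names, using GTask; the real Nautilus code is more involved): the actual work lives in a synchronous function that tests can call directly, and the async entry point runs it in a worker thread.

#include <gio/gio.h>

/* Hypothetical job description; the real one carries much more state. */
typedef struct
{
  GFile *source;
  GFile *destination;
} CopyJob;

static gboolean
copy_job_sync (CopyJob      *job,
               GCancellable *cancellable,
               GError      **error)
{
  /* The actual copy logic, directly callable from tests. */
  return g_file_copy (job->source, job->destination,
                      G_FILE_COPY_NONE, cancellable,
                      NULL, NULL, error);
}

static void
copy_job_thread (GTask        *task,
                 gpointer      source_object,
                 gpointer      task_data,
                 GCancellable *cancellable)
{
  GError *error = NULL;

  if (copy_job_sync (task_data, cancellable, &error))
    g_task_return_boolean (task, TRUE);
  else
    g_task_return_error (task, error);
}

void
copy_job_async (CopyJob             *job,
                GCancellable        *cancellable,
                GAsyncReadyCallback  callback,
                gpointer             user_data)
{
  GTask *task = g_task_new (NULL, cancellable, callback, user_data);

  g_task_set_task_data (task, job, NULL);
  g_task_run_in_thread (task, copy_job_thread);
  g_object_unref (task);
}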

Moving forward with the implementation of the synchronous alternative for the copy operation, I started designing the tests, only to bump into another issue. Whenever we copy file X to directory Y, we have the option of renaming it, so, naturally, I designed cases for both alternatives. The bigger issue arose when copying a directory X containing a file Y into a directory Z. Copying X into Z while changing its name to “T” would result in Z containing a directory named T (which is fine) aaaaand that directory containing a file named T instead of Y (which wasn’t really fine). This was a flaw in our code, so after a quick discussion with my mentor, we concluded that it’s fine to split the tests in two here as well: one where we don’t change any name (so the result would be Z/X/Y) and one where we do change the target name (resulting in Z/T/T), only we would comment out the second one and flag the issue to be worked on afterwards. I opened an issue on it and moved on with the implementation of the test.

Unfortunately, my finals session started, so I had less time to work on my project. My mentor was awesome about it and said I should focus on my exams and that it would be fine to contribute when I have time for it. While it’s not final yet, this is a sneak peek at the copying test right now, which I aim to finish alongside the move test (which I started writing while trying to figure out the renaming issue regarding copying mentioned above).

I honestly can’t wait to be done with finals in order to work on these. Contributing to Nautilus and being an active member of its community feels way more rewarding than studying algorithms at uni. 🙂
