Hey everyone and welcome back to this
edition of the Linux and
open source news podcast.
I'm your host Nick and this is the show
where we discuss
everything related to Linux, open
source, the open web, privacy and
everything around those spaces.
So in today's episode we have GNOME or at
least the GNOME
Foundation announcing a five-year
plan to get more funding, to get
more people on board, to
get more users for GNOME.
We have Windows and Microsoft announcing
a new feature called
Recall which looks like
a privacy and security nightmare.
We have Qualcomm potentially beating
Apple's latest chip in
their own ARM CPUs which have
full mainline Linux support, or at least
they are working on that
full mainline support.
We have the first inklings of AI
legislation passed in the EU.
We have some news about Ubuntu and some
problems they might
start having with GNOME.
We have some big progress on Plasma and a
bunch of other things.
So as always, if you want to learn more about any of these topics, all the links that I used are in the show notes, and if you don't really like waiting for the end of the week to get this news, well, you can also get it in daily format from Monday to Friday if you become a Patreon member; the link is in the description of the show as well.
So now let's get started.
So GNOME has published a first draft of
their five-year plan, or at
least the GNOME Foundation
did that.
They published it to get a bit of
feedback from the
community, from developers, from
users and other interested parties who
might ship or might want
to ship GNOME or even
just fund the foundation.
So they divided this plan into three big
goals each having
three separate objectives.
So the first big goal is to have
explosive growth of the
community of what they call
creators and users.
I guess creators mean developers here,
developers for GNOME but
also for GNOME applications,
maybe extensions and
everything around GNOME.
The three sub-objectives are first to
unify the community
around a shared vision.
I think that's something
GNOME has kind of already done.
They do have their own vision of what a
desktop environment is.
They have been pushing hard for this and
yes some people did
not like this vision but
at least the community around GNOME right
now is focused on the
same goals and it seems
to be working for app developers at least
because they have the
biggest and best ecosystem
for third-party apps of any
desktop environment.
The second sub-goal is to make GNOME
relevant and attractive
to more diverse people and
probably this means accessibility: making GNOME something that everyone can use, whether they are fully able or have disabilities. No matter what, everyone should be able to use GNOME, and they are already working on that with their new accessibility framework.
And finally they also want to increase
the commercial and
economic value of GNOME.
Now this one, I don't really know what that means.
My mind instantly went to: oh no, they want to make distros pay to implement GNOME, but that's really probably not what that is.
It's probably just to make the project a
more desirable target for
fundraising and for potential
investors who would just drop money into
the GNOME Foundation to
support the GNOME project.
So it's probably more communication
around what GNOME is
doing and less trying to sell
GNOME as a product.
Now the second major goal is to create a
unified and integrated
suite of programs, services
and processes.
And this seems to encompass building more
integrated
technologies, so maybe having more
GNOME APIs that can be taken advantage of
by GNOME itself but
also by GNOME applications.
But it also seems to
target how GNOME is organized.
For example they want to reorganize their
myriad of little
events into a single annual
event that can include
more people at the same time.
So presumably maybe livestreams and
people being able to ask
questions online and them
being addressed, maybe taking more
advantage of the current
technologies to make one big GNOME event instead of multiple little GUADECs throughout the year and other smaller development sprints.
I'm not sure exactly what they mean by that, but they do want to consolidate all events into one.
Finally the third big goal is to
strengthen the GNOME
Foundation as a non-profit.
So presumably to make sure it is well
funded and that it works reliably.
There are objectives such as documenting
their impact and their
value which is going to be
absolutely mandatory if they want to
showcase what they've
done with the money they have
received.
They also want to double the annual
expense but also the
annual revenue budget of the
foundation.
So that just means get
more money so we can spend it.
And the final goal is prioritizing the
health and well-being of the foundation.
So probably not letting it run as it did
in the past by just
tapping into their treasury
and not really trying to
find new avenues for funding.
All of this seems to just mean we need
more budgets to do good
things with GNOME so we
need to find more money
first and then we can spend it.
All in all it is obviously not a plan for
GNOME, the desktop environment.
It's not a development roadmap, a feature
roadmap, it is a plan for
its supporting fundraising
and governance organization.
And I think it makes sense
as a bird's eye view thing.
To make that project work you need to
find reliable sources of
funding and then you can
spend that money towards growing GNOME as
a project whether it's in terms of having
a bigger community, having more
developers, having more
features, having more resources,
having more communication.
This also requires making sure that first
you have a big community, people that can
be hired with that money
to work on GNOME features.
And to do that you need a strong vision,
you need some big
events to draw people in, you
need some interesting technology people
want to use and you need
to document all of this
for potential funding partners.
It makes a lot of sense. The language here feels extremely corporate to me when reading it, but honestly that's probably what people would expect from a non-profit: to find money, you need to appear like a company without being one, and I think that's what they did here.
So obviously you can submit your feedback
on that draft if you
have remarks, just remember
it is not feedback on GNOME the project,
no one is asking you if
you like GNOME, if you
think GNOME lacks certain features,
that's not what this
draft is about, this is a plan
for the foundation itself, it's not a
development roadmap so don't
go around submitting feedback
saying hey we need this feature and
minimize button needs to
come back because that's not
what they're looking for here.
So there's a button to submit feedback if
you click on the link
that I left in the show
notes.
Now we also got a few interesting details
about the Qualcomm Snapdragon X Elite. You might wonder what that thing is and why I'm even talking about it here: this is an ARM CPU made, obviously, by Qualcomm, for laptops. Initially it looked like it would be a partnership with Microsoft and it would only land in Surface laptops, but it also looks
like Qualcomm is upstreaming a lot of
that support to the
mainline Linux kernel, they
already have the CPU support in place and
they're already upstreaming more and more
parts of the entire system on a chip
including how to handle the
battery, the USB host, potential
webcams plugged into it
and everything in between.
So this is a really cool CPU or system on a chip to follow if you're a Linux user who is also looking to move to ARM, or at least interested in ARM computing.
And the benchmarks are really not bad for
this chip as well
because they seem to indicate
it will more than compete
with Apple's latest computers.
In Geekbench 6 they apparently beat the Apple M3 by a pretty solid margin: if I remember correctly, the M3 is at about 88% of the performance of the Qualcomm chip, which would make the Qualcomm chip roughly 14% faster, so not bad.
It also looks like the Qualcomm chip
beats an Intel Core Ultra 7 155H as well.
Now the chip seems to run a little hotter
than Apple's M3 but it
still manages to beat
it in battery life as well, apparently
the M3 gets 16% less
battery life than Qualcomm's
chip while playing local video files.
In terms of graphics though, the chip
from Qualcomm seems a
little bit less interesting,
it's being beaten in Blender render tests
by the M3 and by the Intel chip as well.
But it's still pretty impressive and of
course it's just
benchmarks and Qualcomm and other
manufacturers have a bad habit of
optimizing for specific benchmarks to
appear more powerful
than they are in actual real use.
But in the end it still looks like a pretty decent chip, and while for now I think it's only been announced for a Surface laptop from Microsoft, Qualcomm is submitting all the code and contributing everything upstream and in the right places, not with proprietary modules and weird things like that, although there will be proprietary firmware in the default Linux firmware packages. So it's a good option that we might have to run ARM devices with Linux with native support, and it might also be way better supported than anything from Apple, because as much as the Asahi developers have been doing an insane amount of work,
it's still reverse engineering and it's
taking a lot longer for
them to actually manage to
find how things work and to develop
drivers and to get these drivers to
acceptable performance
levels compared to just having the
company contribute all the
drivers to the Linux kernel
itself.
So it looks like an interesting thing and
I will definitely
keep an eye on it to see
how good its Linux support turns out to be.
Now we need to talk about Windows and
Microsoft because Microsoft
held a big conference about
AI in Windows and what they call AI PCs
which they will brand
Copilot Plus PCs. Among all the usual AI features there was one that stood out as being particularly bad: it's Windows Recall. And if you only look at it from a bird's eye view, this thing seems insanely useful.
You just ask your local AI on your PC
something and it's able
to find anything you did on
that PC that's remotely linked to that.
So for example if you were browsing for I
don't know Warhammer
miniatures and you completely
forgot what website you saw this super
deal on and you just
remember the image on the
box.
You ask your AI to find this exact image,
you type I don't know
golden warriors with
lances and it's gonna find an entire
screenshot of the website you were
browsing fully analyzed
by the AI and perfectly representing what
you were looking for.
You could also just be remembering hey I
saw this super cool bag
that I wanted to buy the
other day but I don't quite remember I
just remember it was a brown bag.
You type find me a website where I browse
for a brown bag and
it's gonna display that
for you.
Seems insanely useful.
In fact, it's screenshots that it is displaying to you, because the way this thing works is the dumbest I have ever seen.
It's just taking screenshots every few
seconds and it's storing
those screenshots on disk
and it's learning from those
to identify objects and themes.
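To make that concrete, here is a minimal sketch of what a Recall-style capture loop looks like. This is my own illustration, not Microsoft's actual code: it assumes the third-party mss Python library for screenshots, uses a made-up storage location, and leaves out the image analysis entirely.

    # Toy sketch of a Recall-style capture loop (my illustration,
    # not Microsoft's code). Assumes the third-party 'mss' library.
    import time
    from pathlib import Path

    import mss

    CAPTURE_DIR = Path.home() / "recall_snapshots"  # hypothetical location
    INTERVAL_SECONDS = 5                            # "every few seconds"
    STORAGE_CAP_BYTES = 25 * 1024**3                # the reported 25 GB cap

    CAPTURE_DIR.mkdir(exist_ok=True)

    def prune_oldest():
        """Delete the oldest screenshots once the storage cap is exceeded."""
        shots = sorted(CAPTURE_DIR.glob("*.png"), key=lambda p: p.stat().st_mtime)
        while shots and sum(p.stat().st_size for p in shots) > STORAGE_CAP_BYTES:
            shots.pop(0).unlink()

    with mss.mss() as screen:
        while True:
            # Grab the screen and write it to disk, unencrypted in this
            # sketch -- which is exactly the worry with Recall.
            screen.shot(output=str(CAPTURE_DIR / f"{int(time.time())}.png"))
            prune_oldest()
            time.sleep(INTERVAL_SECONDS)

Everything interesting Recall does, the indexing and the semantic search, happens on top of this pile of images, but the pile itself is the privacy problem.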
And obviously this should raise a huge
red flag because this
means you will have screenshots
that might display very private
information or very
private activities stored without
your knowledge.
It can use up to 25 gigs of storage on
your PC to do so, that's
more than an entire Linux
distro's worth of disk space, and anyone
who logs into your PC
with your account or anyone
who can just access the disk's content
will have access to
these screenshots which feels
pretty bad.
Now Microsoft said that they will not
take screenshots when
you're using private browsing
in Edge but they didn't mention other
browsers or if they
could integrate with that.
And they also said it won't take
screenshots of copyrighted materials
although how it will
identify these is unknown as well.
The feature will of course be on by
default and fortunately
it seems that it might be
encrypted which lessens
the risk a little bit.
All the screenshots could be encrypted on
disk but since disk
encryption only exists
for Windows Pro and Enterprise users
because they have access
to BitLocker it is unclear
if Windows Home users will have those
screenshots encrypted on disk or not.
And if they don't it means that anyone
with a live CD or a live
USB drive could just go
and grab those screenshots and sift
through them and see any
credentials, login information,
passwords or whatever else the user might
have been doing or
any weird activity that
they might have been doing on their PC
that has been
screenshotted by this really fantastic
AI.
So obviously the problematic use cases are plenty. On public PCs, if the person managing the PC is ill-intentioned or has forgotten to disable this feature, anyone who logs in could see what you did, potentially including your credentials or your personal emails.
A work laptop will have all of that
accessible by the IT
team or even by your boss.
A family PC that shares a single account
is a major issue as well
or if you just get your
laptop stolen and they
manage to log into your account.
That last one is already a big, big problem for you even if you weren't using the Recall feature, but it only adds insult to injury: for example, you might have a password manager protected by a master password. The attacker that stole your laptop cannot get past that, but they could just go and look at your screenshots to see that exact password that you unlocked and maybe displayed in full text, which, yeah, that's bad.
So in the end it's another "let's do AI stuff way too quickly without looking at the consequences" type of feature, and it's really creepy in my opinion. It also looks like there are already some investigations, at least in the US, in terms of privacy, to see if this feature might even be legal at all.
So yeah, well done Microsoft, that's another great use of AI that definitely builds trust and confidence from the public in these technologies.
And since we're talking about AI let's
get this other topic out of the way.
Fortunately, or unfortunately, depending
on where you stand on
AI regulation, it looks
like AI regulation is starting to appear
with EU member
countries agreeing on an AI act.
Under this new legislation any company
developing AI systems and
wanting to put them into the
hands of EU citizens will have strict transparency obligations: to let people and regulators know the type of data they used to train, with detailed summaries of where they grabbed this data. They also will have to explain the specific biases they ironed out of the model or voluntarily added to it, and they will also have to conduct more testing, do audits and do some reporting on their energy use and cybersecurity measures.
This new act also restricts the
government's use of biometric
surveillance in public spaces
using AI technology.
And companies providing AI tools in the
EU will have to comply
in the next 12 months
if they are doing general purpose AI
models and they will have
more time, 36 months, if
they are building, let's call them more
sensitive AI tools, stuff
related to weapons, warfare,
law enforcement and the like.
This means that US companies will also
not be exempt from these
regulations if they aim
to provide their AI
tools in the EU market.
The fines can go up to 7% of the
company's global turnover.
And this law also seems to ban the use of
AI on social scoring,
on predictive policing
and on scraping of facial images from
CCTV footage or the
internet, which is also a very
good thing because it already gives
countries a clear no-no
on using AI to basically do
mass surveillance of their citizens.
There are other ways to do that, but at
least AI won't add to the pile.
All in all, it is a first barrier to make
sure that these tools
aren't trained on anything
and everything or at least if they are,
we'll know what they use.
And it also will help various countries
to ensure that these AI
companies are not just
going fast and breaking things and just
doing whatever they please.
It doesn't address the intellectual property
side of things or the
licensing side of the material
used to train the AI, but I'm sure that
this will slowly be fixed
through court cases across
the world.
Now let's talk Ubuntu.
We have some more information about Ubuntu 24.10, codenamed "Oracular Oriole", an oriole apparently being a kind of small bird.
Apparently this new version will be
focused not on adding a ton
of features, but on polishing
what is already there.
The installer, for example, will finally
gain its TPM backed
disk encryption feature, it
won't be experimental anymore.
The installer will also give the user more feedback when issues arise: when, for example, the system is trying to configure a specific device but fails, you will get clear error messages and you will know what hasn't worked, and when you're trying to install the Nvidia drivers, if there's an issue, it will tell you so.
And Ubuntu will also get a new welcome
wizard that will
presumably replace the one from
GNOME.
And the App Center from Ubuntu should
also get better at
recommending apps and displaying
user ratings.
As for Flutter, there were some doubts and issues with Google laying off a sizeable part of the Flutter team, but Ubuntu will keep using it. They are not abandoning this technology and they will keep investing in it; for example, they will migrate Flutter's GTK backend from GTK3 to GTK4 for better performance.
And in 24.10 there's also one major change, which is that Wayland will be the default for everyone, including Nvidia users. They are probably banking on explicit sync support being released for everyone by the time October rolls in: GNOME will have support for it, so Ubuntu will too, and if the Nvidia drivers have been updated by then, and they're already in beta with updates for that, I'll talk about this later in this episode, then things should be a lot better for Wayland, so they are gonna make it the default.
Now there were also some interesting comments from Ubuntu's desktop lead: they said they want to focus on the desktop, and they plan to expand Ubuntu's desktop team by at least 50% over the next year. This is really good news, because it means that they would like to push the desktop a bit more, when it's been relatively fixed in place for a while now and it feels like they tried to add a few things but weren't really focusing a lot on it.
So yeah, it's interesting to see that
they do have plans to
make the Ubuntu desktop
a priority, or at least more of a
priority at Canonical.
And still on Ubuntu, they are having a
few problems with GNOME.
GNOME shipped a major feature in a point
release, in GNOME 46.1
they delivered explicit sync
in Mutter and the GNOME shell.
This is something they don't usually do in GNOME: for the longest time, point updates have just contained bug fixes, security fixes, and some minor changes to the core applications of the desktop, and even that is rare.
Features are generally not a big part of
those point updates,
especially major features like
explicit sync.
And the fact that they changed this for
GNOME 46 makes their
relationship with Ubuntu a
bit uneasy, because GNOME shell and
Mutter will no longer be
covered by Ubuntu's micro
release exception, or MRE.
The MRE is a policy that Ubuntu has to allow releasing point updates to certain components of the distro without having to do any QA, or, well, no extra testing, no backporting of fixes or patches that they apply.
So basically they say, okay, we have our
patch set, we can just
apply that to this minor
version, it will work exactly
in the same way as previously.
And we know that because we agreed on
that with the upstream project.
This is something that they do with GNOME
shell and Mutter, for
example, because Ubuntu
has a bunch of patches that they apply on
top of that to enable certain features or
to add stuff that hasn't been upstreamed
yet, like the triple
buffering patch for Mutter.
Ubuntu doesn't tend to ship new features to the applications in its repos, and tends not to ship new features to the desktop either during the life of a release.
So GNOME shipping this major feature here breaks that confidence.
And this means that Ubuntu will no longer just ship any point update for GNOME: they will have to look through it to backport their own fixes and improvements and to do a lot more testing. They will have to check every single point update on a case-by-case basis to see if it can be added as is or if they have to do a lot of extra work.
And honestly, in this case, I'm not sure
if GNOME or Ubuntu is
right or wrong, I think
both viewpoints are perfectly fine.
GNOME needs to add explicit sync as fast
as possible, because
without it, the experience
with Nvidia and Wayland is
really bad for a lot of people.
So if they had the opportunity to ship
that in a point update
instead of waiting for September
for GNOME 47, I think they really should
and they really did well
shipping this right now.
But on the Ubuntu side, they had
basically an unspoken
agreement as far as I can understand
that GNOME would not break or would not
bring major features to
certain parts of the desktop
in between big releases.
And this unspoken agreement is apparently
now broken, meaning
that Ubuntu cannot just
grab the latest version of GNOME, apply
their patches, do some
automatic QA and publish
that because they know nothing has been
significantly changed or broken.
Now in the end, I think GNOME doesn't
have just Ubuntu to work with.
They ship GNOME to a lot of other
distributions and those
distributions tend to ship a much
closer to vanilla experience of GNOME.
So they absolutely should bring more
features whenever they
feel they can, and they should
not really limit themselves to having
features only every 6 months.
I'm sure a lot of people will think that
it's GNOME being GNOME
again, GNOME being little
dictators, but if you look at Plasma or
KDE, they always
shipped major big features
in point updates.
Plasma 6.0 got some big big updates as
well along its life, and
Plasma 6.1 will probably
have the same.
And no one really said that Plasma has a
problem doing that or
that Plasma is dictating how
distros should work.
So I think it's the distribution's job to adapt to the desktop that they decide to ship. It is their job to backport their patches, to apply their patchset or to tweak their patchset for every version of the desktop, or to decide to just not ship a point update if they feel it's gonna break the experience.
But I don't think desktop environments or
applications should just limit themselves
because distributions
might have more work to do.
I think it's fair to let desktops and
apps do their work and
it's fair to let distros
apply an update policy that they want
and that they decided upon.
But let me know what you think.
Now we gave GNOME some time in the
spotlight, so it's only
fair we do the same with KDE
and Plasma.
And KDE developers are still working on
Plasma 6.1, it should be
released in a bit less than
a month.
One of the things that they fixed in 6.1
is theming of their
applications on other desktops,
notably on GNOME.
I reported on this, I think it was two weeks ago, but some applications, when run under GNOME with the Adwaita theme as the default, just had no icons in a lot of places, because they assumed Adwaita was a complete icon theme when in fact it is just an icon theme designed for GNOME, and it never intended to have all the icons KDE apps could want.
So this is going to be fixed in Plasma
6.1 and with
applications released after that
point.
The way it will work is that KDE applications will first try to use the user-defined icon theme, whether it's Adwaita or not, and if certain icons aren't in that theme, they look at the theme that it inherits from.
So for example, say you used, I don't know, let's call it Tango icons, and it inherits from Adwaita.
They're going to look in Tango; if they cannot find what they want, they're going to look in Adwaita; and if they cannot find what they want in Adwaita either, then they're just going to use Breeze icons instead, and Breeze icons will always be provided alongside the KDE apps that implement this.
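To picture the lookup order, here's a tiny sketch of that fallback logic. This is my own illustration, not KDE's actual code; the theme names and icon sets are made-up examples.

    # Sketch of the icon fallback chain described above (my own
    # illustration, not KDE's implementation). Data is hypothetical.
    FALLBACK_THEME = "Breeze"  # always shipped with the KDE apps that opt in

    # theme name -> (icons the theme provides, theme it inherits from)
    THEMES = {
        "Tango":   ({"folder", "edit-cut"}, "Adwaita"),
        "Adwaita": ({"folder", "document-open"}, None),
        "Breeze":  ({"folder", "edit-cut", "document-open", "edit-paste"}, None),
    }

    def find_icon(name, user_theme):
        """Walk the user's theme and everything it inherits from,
        then fall back to Breeze if the icon was found nowhere."""
        theme = user_theme
        while theme is not None:
            icons, parent = THEMES[theme]
            if name in icons:
                return f"{name} from {theme}"
            theme = parent
        return f"{name} from {FALLBACK_THEME}"

    print(find_icon("edit-cut", "Tango"))       # found in Tango itself
    print(find_icon("document-open", "Tango"))  # inherited from Adwaita
    print(find_icon("edit-paste", "Tango"))     # nowhere else, so Breeze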
For now only Kate, Konsole and Dolphin
support this but any
application could opt in to do
this as well.
On top of that, Dolphin will be able to generate previews for remote locations. I'm saying in 6.1, but it's actually part of the KDE Gear compilation that should land a bit later. It will warn you that this is very, very heavy in terms of network resources, so it might be very slow.
Discover will let you know when a Flatpak
app is end of life
and has been replaced by
a new one and it also will let you switch
to that new app in one click.
And also you will get a much easier way to get admin privileges in Dolphin. You just have a menu entry to act as administrator, you get a nice big warning pop-up telling you that if you delete anything, well, we won't stop you, so be careful, and once you've done that, you're in admin mode and you can browse Dolphin and move files around just like you were using Windows.
Now there are also some details on how
you will be able to
edit the desktop in Plasma
6.1 and how you will change the widgets,
how you will add more
widgets or tweak the current
panel or add a new panel.
Currently, to do that, you right click on the panel or on the desktop, you choose edit mode, and then everything just layers on top of your existing desktop, meaning that some elements are hidden behind others. If you display, for example, the widget sidebar and you want to drag a widget where that sidebar appears, you have to drag the widget out of the sidebar, then close the sidebar, then move the widget right where you want it.
If you have a top panel, it's gonna be hidden by the configuration bar that appears in Plasma to let you tweak the entire desktop.
It's a bit messy and it's hard to notice when you're actually in or out of that entire mode, so they're gonna rework that entirely.
In 6.1, when you enter the edit mode, your actual desktop and wallpaper will zoom out. Your panels will still occupy their current places on screen, but they won't be on top of your desktop and wallpaper, and everything else that appears, the widget sidebar, the little pop-ups that appear when you try to edit your panels, will appear outside of your zoomed-out desktop, meaning that they will never hide the desktop itself and you are able to interact with elements without hiding whatever you're doing. It's also a lot clearer that you're actually in edit mode, and when you're done with edit mode, the desktop zooms back in, occupies the entire screen, and you know that you are done.
I think it looks pretty good. I think it's a lot clearer that you're entering a special mode when you're trying to customize your desktop, and I think it's an improvement over the current way of doing things in Plasma. There's a little video you can watch in the article that I linked in the show notes so you can see what it looks like; it's pretty hard to describe, but basically, if you know what the overview looks like in KDE Plasma, it's gonna look the same: your desktop will zoom out and you will have elements to customize it on the right and on the left side of that desktop.
And finally, on KDE, they just dropped the KDE Gear compilation, version 24.05, because we're in May and they use kind of the same numbering scheme that Ubuntu does.
So updates to the core KDE apps include, first, improved animations for Dolphin when you're moving through and navigating files and folders: when you're dragging and dropping items, animations will convey what you're doing in a more user-friendly fashion.
You will also get more detailed information about files: for example, in the recent files view you'll see the modification time and date, and if you're in the trash you will see the origin of the file.
Now Kdenlive, the video editor, can now let you apply effects to multiple clips simultaneously, which is really important for productivity; they also have an AI-powered tool to translate subtitles automatically, and there are some performance improvements.
There are also updates to NeoChat, the Matrix client: it has a dedicated search pop-up and it can now scan PDFs and documents for travel-related data and display this information in the conversation.
Tokodon, the Mastodon client, will let you write new posts in a separate window; Elisa, the music player, now gives you either a list or a grid view to navigate your music collection; and there are also updates to the archive manager Ark and to the Akregator RSS feed reader. There are some new apps joining the compilation too: there's Francis, a Pomodoro timer to know when to take breaks when you're working; there's Kalm, that's an app to give you guided breathing exercises to reduce stress; and there's Audex, which is an app to let you rip music CDs.
So depending on your distro, you will get these updates more or less quickly, or not at all until your next major update. There's nothing revolutionary that everyone absolutely needs right now, apart from maybe for Kdenlive users, and if you absolutely need that, you can probably install any and all of these apps through Flatpak or Snap to always use the latest version available.
Now this week there was also an
interesting blog post
about fixed kernel versions, as
in a distribution ships with a specific
kernel version and they
will not move to the latest
stable version, they will instead
backport fixes to the specific
version that they decided
to stop on.
And according to this post, this is not a
good way to build a kernel at all.
Now the research was conducted by CIQ, a company that offers commercial support for Rocky Linux, and their analysis seems to point to the fact that a frozen kernel is less secure, a lot less secure, than the stable kernel the Linux team maintains, and it's also buggier and tends to get buggier and buggier over time.
So they have a white paper on the topic
and as with all white
papers, take it with a grain
of salt, there's generally an ulterior
motive, it's made by a
company; that's why it's not
called a research paper, it's a white
paper, made by a
commercial for-profit company.
But the data behind it seems sound. They took the example of Red Hat Enterprise Linux 8 and they tracked the number of open bugs against its fixed kernel version, and that number has been increasing to insane levels. And we're not talking about general bugs that everyone has, we're talking about bugs that do have fixes in the more recent stable kernels, meaning that the fixed version of the kernel shipped by Red Hat Enterprise Linux 8 just did not get bug fixes that could have been there, for more than 4000 bugs, and if they had just used the latest kernel available, then they would have had those bug fixes.
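By the way, I can't reproduce CIQ's exact methodology here, but here is a rough sketch of how you could count this kind of thing yourself: upstream fixes whose target bug already exists in a frozen kernel. It's my own approach, not CIQ's; it assumes a local clone of the kernel git tree, and the tree path and tags are just examples.

    # Rough sketch of counting upstream fixes a frozen kernel never got
    # (my own approach, not CIQ's methodology). Assumes a local clone of
    # the kernel tree; tree path and tags below are examples.
    import re
    import subprocess

    KERNEL_TREE = "/path/to/linux"  # hypothetical local clone
    FROZEN_BASE = "v4.18"           # the version a distro froze on (example)
    LATEST = "v6.9"                 # a recent mainline tag (example)

    def git(*args):
        return subprocess.run(["git", "-C", KERNEL_TREE, *args],
                              capture_output=True, text=True).stdout

    # Collect "Fixes: <sha>" trailers from every commit made after the freeze.
    bodies = git("log", f"{FROZEN_BASE}..{LATEST}", "--format=%b")
    fixed_shas = re.findall(r"^Fixes:\s+([0-9a-f]{8,40})", bodies, re.MULTILINE)

    # A fix is missing from the frozen kernel when the buggy commit it
    # references is already part of that kernel: the bug shipped, the fix
    # didn't (unless the distro backported it by hand, which git can't see).
    missing = 0
    for sha in fixed_shas:
        in_frozen = subprocess.run(
            ["git", "-C", KERNEL_TREE, "merge-base", "--is-ancestor",
             sha, FROZEN_BASE], capture_output=True).returncode == 0
        missing += in_frozen

    print(f"{missing} upstream fixes target bugs that shipped in {FROZEN_BASE}")

It's slow and approximate, but it gives you an idea of how many known, already-fixed bugs a frozen kernel is sitting on.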
Now first, CIQ is a company built by the person who created Rocky Linux, and they provide commercial support for Rocky Linux. Their incentive here is probably to just say: hey, you know what, we're making clones of Red Hat Enterprise Linux 8, we have to be fully compatible with it, so we have to have the same problems they have with their kernel; so maybe if we tell people that this kernel is really buggy and problematic, maybe at some point it will change and we also will be able to ship better versions of the kernel.
And also, there are advantages for a distribution to stick to a specific kernel version: you have a stable platform, your users know that their hardware support isn't going to break from under them, and a specific feature will also not break during the life of the distro.
This is a good thing: sometimes when you update to a newer version of the stable kernel, some drivers are removed, or some drivers just flat out don't work with your older hardware; it happens. But this research seems to indicate that the frozen kernel model is really not the rock-solid option that most distributions make it out to be, and it's
pretty understandable: if you have bugs that you have to backport fixes for every single day, you basically need as many developers as the Linux kernel has to maintain your own version of the kernel. You have to track every single day which patch has been applied and backport it, and if your current version of the kernel is end of life, you probably have to modify these patches as well and change the code to make them fit the older version that you're using, which is no longer officially supported.
The new patches are for the newer versions of the kernel that you're not using, so you probably have to rewrite a bunch of that stuff, and that's probably why a bunch of those bugs never get fixed in such a kernel: they're marked as low priority and never dealt with, and the distro will only concern itself with really big security problems and bugs that impact a lot of its users, meaning that the end experience is still worse than just using the latest stable kernel. So I can definitely understand CIQ's point of view, and I can also understand why distros would want to stick to a specific kernel version, but personally I will always prefer using a distro that has the latest stable kernel, or at least is not too far from the latest stable kernel, instead of a distro that has a fixed version that they backport fixes to, because it's just worse for everybody, except maybe for the distro.
Now Mozilla also published
this week a roadmap for the features they
would like to implement in
Firefox over the course of
the year and probably into the first half
of 2025. Among these
features there's a big one which is
tab grouping and vertical tabs. This is something a lot of other browsers have: you can just create tab groups and reopen those tab groups automatically, and you can display your tabs vertically instead of on the top of the window, which is also pretty useful in some use cases. Those are good, solid features; you could already do them with extensions, but having the browser do it by default is cool.
Now to balance these very useful features, they will also add a pretty meaningless one, which is tab wallpapers: you will be able to change the wallpaper inside of your empty tabs, I presume. Who cares about this? Menus in Firefox will also be streamlined to make the more important items more visible and more accessible. I don't really know if that one is very important, because Firefox only has a hamburger menu, unless you're using it on macOS where you have a menu bar, and that hamburger menu is already pretty barebones, so I don't really see what needs to be improved here, but why not. And more importantly, they
will focus on speed,
performance and compatibility,
they apparently already have something
that should improve
responsiveness by 20%, although
reading the article I'm not sure if this
is supposed to already
have landed in Firefox
or if this is something that is ready but
hasn't been shipped yet, and
Firefox has also been working
with the Interop project which aims to
make it easier for
developers to make websites that work
on more than just Chromium but also
support Firefox and Safari and other
alternative browser rendering
engines. Finally, and because it's 2024,
Mozilla and Firefox will
also be working on AI features,
but fortunately they will run locally
without infringing on your privacy.
One example at least that they give was
generating alt text for images inserted
into PDFs that you're
reading in your browser, this is a pretty
useful accessibility
feature for people who need what's
on screen to be described or read to
them. Having that alt text automatically generated could be good for documents that haven't had it added manually.
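Mozilla hasn't said which model they'll use, but locally generated alt text roughly looks like this. Here's a sketch with the BLIP captioning model from Hugging Face transformers as a stand-in; the model choice and the file name are my assumptions, not Mozilla's actual pipeline.

    # Sketch of local alt-text generation (a stand-in, not Mozilla's
    # actual pipeline). Uses the BLIP captioning model via transformers;
    # everything runs on your own machine.
    from PIL import Image
    from transformers import BlipForConditionalGeneration, BlipProcessor

    MODEL = "Salesforce/blip-image-captioning-base"
    processor = BlipProcessor.from_pretrained(MODEL)
    model = BlipForConditionalGeneration.from_pretrained(MODEL)

    def alt_text(image_path):
        """Return a one-line caption usable as alt text for an image."""
        image = Image.open(image_path).convert("RGB")
        inputs = processor(images=image, return_tensors="pt")
        output = model.generate(**inputs, max_new_tokens=30)
        return processor.decode(output[0], skip_special_tokens=True)

    print(alt_text("figure_from_pdf.png"))  # hypothetical extracted image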
Now it all depends on where you stand on Firefox. Personally, I'm getting a bit tired of it being way slower than other browsers; I talked about it in my weekly Patreon cast, but basically, on my gaming computer I used Nobara, it came with Chromium by default, and I was just blown away by the speed of that web browser compared to Firefox on the same machine. A lot of patrons and YouTube members told me they didn't have any speed difference; I tried with a brand new install, with a brand new profile without extensions, and yes, Firefox is absolutely slower than anything Chromium-based on multiple computers for me, at least for the websites that I visit. So if Mozilla delivers
this 20% responsiveness increase, this
might be enough for me to
stick with Firefox. If they
already have delivered this 20% increase, then I haven't seen it or felt it at all in the past two or three years that I've been using Firefox, oh no, it's way more than that, five or six years, I haven't noticed it at all. So I don't know, I hope this is something that is coming and not something that has already been shipped; I wasn't really able to understand that from their blog post, but let me know if you know.
Apart from that, this roadmap is pretty limited, I mean
I don't know if Firefox really lacks that
many features but the
only big thing is like an
accessibility feature which is certainly
needed but not necessarily
insanely useful for everyone
and also tab grouping and vertical tabs, and that's cool, but that's it. Like, it's not a lot, it's really not a lot. When you compare it with what the Thunderbird team has been doing on Thunderbird and how fast they've been moving with updates, I don't know if Firefox is still a priority for Mozilla these days; it feels weird. Okay,
let's finish this with the
gaming news, first we have
an interesting project called Netris,
it's basically Stadia but it's
self-hostable and it's open source,
it's focused on using Steam so it works
with your existing game library, you
don't have to buy games
from a specific store and you can then
stream your Steam games to
other devices either from
a server that they host or from your own
server because you can
self-host it, it offers Steam
library sharing, it offers HD streams, parental controls, and apparently low latency thanks to QUIC, a transport protocol, which apparently helps with streaming quality and with responsiveness.
Now this solution doesn't support games that use a third-party launcher, like EA, Ubisoft, Rockstar or Battle.net games through their own launchers. I suppose Epic Games too; they are not cited, but they're probably also not gonna work if you use Heroic or something. And they do use Proton GE to run games, so you will only be streaming and playing games that work on Linux, obviously;
it is still experimental and if you want
to self-host it, it
does require an NVIDIA GPU,
probably because their streaming
improvements rely on CUDA but it is still
an interesting project,
if you like cloud gaming but you would
like to have actual control
over the game library that
you have, you don't want to pay monthly
for access to a catalog, you want to own
your games, you want
to be able to play them on other devices
when you're not streaming, I
think it's a nice solution,
the NVIDIA requirement will probably be
an issue for a lot of people,
it is not the most loved GPU
manufacturer out there on Linux but yeah
it seems like an
interesting project nonetheless.
And since we're on the topic of NVIDIA, they released a beta for their proprietary drivers. It's version 555 this time and it is a big one. It is not the version where they will switch to their open source kernel modules by default for recent GPUs, that is probably going to be version 560, but 555 is the version where they add support for explicit sync on Wayland. Explicit sync lets the driver and the compositor explicitly signal when a frame is done rendering, instead of relying on the implicit synchronization that NVIDIA's driver never properly supported, meaning that with these drivers, most if not all of the graphical glitches, slowdowns, artifacts, latency and performance problems you have with recent NVIDIA GPUs on Wayland should have disappeared. At least
for all NVIDIA GPUs that will be
supported by version 555, if you were
already stuck on drivers
390 or 435 then obviously you are not
going to get any improvements anymore,
these are unsupported
GPUs from NVIDIA, it sucks but that's
just the reality of how hardware works
these days unfortunately,
but if your GPU has access to the latest
NVIDIA drivers then you're
getting those improvements,
meaning that you will get a decent
Wayland experience at last.
These drivers will also move to the GPU System Processor firmware by default for RTX cards, which is something I thought was already the case, but apparently not. And you will need at least kernel 4.15 to use these drivers, which should not be a problem, because if your GPU can use those drivers, you probably want to run it on a distro that is not stuck on kernel version 4, which is already very, very old. The new drivers also implement support for 10-bit per component color over HDMI and a few more Wayland-related
features. So obviously these are still
beta quality drivers, if you absolutely
need to run Wayland right
now with NVIDIA you can definitely get
them from NVIDIA's website and jump
through all the usual
hoops to install those from this .run
package. Personally I will
wait for my distro to package
and release them because I have not had
many or any problems with
NVIDIA on Wayland but if you
have and you really want to use Wayland
well you can try and jump
in on those beta drivers.
Okay so this will conclude this episode
of the show, I hope you
enjoyed listening to it,
as always if you want to dive deeper into
any of these topics all the
links that I used are in the
show notes and if you want to get a daily
version of these news you can
also become a Patreon member
at any tier. You will get a daily show, 5 to 10 minutes, from Monday to Friday, with basically everything that is in here plus a few things that I just cut out of these weekly episodes because they would be way too long otherwise. So if that's something that interests you, the link to the Patreon page is in the show notes as well. So thank
you all for listening and I
guess you will hear me in
the next one next week. Bye!