Tasker: Total Automation for Android

The Register

China and Taiwan aren't great friends. Zoom sends chats through China. So Taiwan’s banned Zoom

7 hours 36 minutes ago
Government and local business told to buy local, but slum it with Google or Microsoft if you must

A parliamentary order issued yesterday says the nation’s Department of Cyber Security (DCS) has decided that when government agencies, and some private entities, use videoconferencing: “The underlying video software to be used should not have associated security or privacy concerns, such as the Zoom video communication service.”…

Planet Ubuntu

Ubuntu Blog: Simplify NFV adoption – Charmed OSM and Managed Apps

6 hours 44 minutes ago

Charmed OSM and Managed Apps let telecom operators accelerate adoption of NFV. This is needed because the way we consume data has changed. We want data at a cheaper price with faster speeds and in larger quantities. To meet the challenge, telecom operators are changing the underlying network infrastructure that delivers data. Software-defined networking (SDN) and network function virtualisation (NFV) are enabling this by lowering costs and improving infrastructure flexibility. But how can telecom operators make sure their deployment of NFV is successful? How can they deploy faster and with less risk?

Last week Canonical announced Managed Apps – a managed service that lets enterprises have their apps deployed and operated by Canonical. One of the ten apps that Managed Apps launched with was Open Source MANO (OSM) – the NFV management and orchestration stack. Let’s look at what OSM is, how Managed Apps for Charmed OSM works and why you should use it. For a detailed understanding, sign up to this webinar on the benefits of Managed Apps.

What is Charmed OSM?

Telecom operators are migrating how they process data in a network from hardware-based to software-based services. Instead of using specialised hardware like firewalls or routers, they are running their workloads in a cloud. Network services are easier to deploy, change and upgrade as they are “softwarised”. Importantly, the workload is now defined in software, instead of being defined by the specialised hardware.

Managing the software-based network services requires a software stack, and it is these management functions that OSM provides. It covers features like:

  • Lifecycle management: software installation, updates, upgrades and scaling workloads out
  • Configuration management: setting initial configuration parameters and changing them post-deployment as the service is used
  • Operations: including, but not limited to, backups, monitoring, debugging, adding users, groups and policies, and managing certificates/keys
  • Software integration: for example, integrating logging, monitoring and alerting applications with the rest of the network services. Or adding software that assists in data backups

Charmed OSM uses Juju charms to fully automate its installation process and simplify post-deployment operations. Juju also simplifies integration with other critical network and cloud infrastructure like OpenStack and Kubernetes.
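To make the automation concrete, here is a minimal sketch of what a charm-driven deployment looks like with Juju. The substrate and bundle name are illustrative assumptions, not the official Charmed OSM procedure; refer to the Charmed OSM documentation for the exact steps.

$ sudo snap install juju --classic
$ juju bootstrap microk8s osm-controller   # bootstrap a Juju controller (MicroK8s substrate is an assumption)
$ juju add-model osm                       # a model to hold the OSM components
$ juju deploy charmed-osm                  # bundle name is hypothetical
$ juju status                              # watch the components deploy and settle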

What does Managed Apps for Charmed OSM include?

For a detailed look into what Managed Apps offer, sign up to this webinar on the benefits of Managed Apps. In brief, Managed Apps and Charmed OSM let telecom operators benefit from the cutting-edge NFV management and orchestration that the app provides, while relying on Canonical to do the background app management. This includes updating, bug-fixing, securing, actively monitoring, scaling the app as demand grows, and ensuring high availability.

  • Updating and bug-fixing: After making the initial investment, operators want to stay at the cutting edge of NFV technology. However, updating in a production environment is challenging, and it can also mean upstream bugs are introduced. Canonical provides a safe way to update applications, letting organisations decide when the update is performed – e.g. during downtime. If any upstream bug causes issues with the cloud, Canonical’s engineers are ready to solve them
  • Active monitoring and high availability: Operators need a stable network – this is requirement number 1 for any telecom provider. Managed Apps have experienced Canonical engineers monitoring OSM 24/7 so any performance or capacity issues can be troubleshot immediately. Managed OSM improves network resilience as it is highly available – built with redundant pods that can be relied on if anything breaks
  • Scaling the app as demand grows: VNFs (virtual network functions) increase flexibility. Naturally, this should extend to OSM, and Managed Apps for Charmed OSM allows operation at any scale with on-request node scaling.

Why use Managed Apps for Charmed OSM?

Managed Apps for Charmed OSM accelerates the use of NFV and OSM because telecom operators can focus on VNF workloads instead of day-to-day OSM operations. With many telecom operators running trials and proofs of concept with NFV in general and OSM specifically, anyone who understands how OSM works should be working at the strategic and commercial edge of the business, not on routine operations. With Managed Apps, Canonical does the background work so your teams can maintain focus and be forward-looking.

While organisations have the infrastructure to deploy, maintain and support technologies already in use, they are typically less familiar with, and less equipped to support, NFV technologies. Canonical, as a founding member of OSM, is in a unique position to provide support in deploying and operating OSM.

Finally, Managed Apps comes with the stability assurance that telecom operators need. They are covered by an SLA for uptime, and Canonical’s managed services are MSPAlliance CloudVerify certified* – which is equivalent to SOC 2 Type 2, ISO 27001 / ISO 27002, and GDPR compliance. Managed Apps comes with an integrated logging, monitoring and alerting stack so that telecom operators do not lose visibility over their mission-critical operations.

Conclusion

Telecom providers are changing how their networks are run to adjust to the changes in consumer data needs. To allow VNFs to run, NFV is required and OSM allows users to deploy and manage VNFs. As telecom providers build their NFV capabilities, Canonical will support them by managing OSM. This removes one source of complexity and lets telecom providers focus on how they can meet changing consumer demands.

Your next step to simplify and accelerate NFV is to contact us. You can learn more by watching Canonical’s webinar on the benefits of Managed Apps.

*Final certification due April 2020.

Ubuntu Blog: The Wellcome Sanger Institute: sharing genomic research worldwide securely with supported Ceph

20 hours 1 minute ago

A world-leading genomic research centre, the Wellcome Sanger Institute uses advanced DNA sequencing technology for large-scale studies that surpass the capabilities of many other organisations. Among other works, the Institute is currently heading the UK-wide Darwin Tree of Life Project to map the genetic code of 60,000 complex species. It is also working with expert groups across Britain to analyse the genetic code of COVID-19 samples, helping public health agencies to combat this now widespread virus.

For advanced research, genomic scientists need to use and access a vast amount of data. They then need to be able to share this data with other scientists worldwide in a secure and reliable manner. To meet this data storage and retrieval challenge, the Institute opted for Ceph on Ubuntu as an on-premise solution offering superior robustness and scalability. Authorised users internal and external to the Institute can store and retrieve any volume of data from any location via the S3 protocol. 

Dr Matthew Vernon, Principal Systems Administrator at the Institute says, “We have about 55 petabytes of data on the campus of which now about 20 petabytes or so are stored in our Ceph clusters. A lot of that is data that we want to share securely with collaborators via our S3 service. We were looking for support and stability for the infrastructure that our scientists could rely upon”. 

After evaluating a range of providers, the Wellcome Sanger Institute chose Canonical for Ceph support.

As Matthew explains, “We have quite a lot of Ceph expertise on site, but we needed someone with really in-depth knowledge of the system. We saw that Canonical could provide this expertise for us and we were already using Ubuntu for the operating system and the Ceph packages. So, that made it natural to look at Canonical as our support provider.”

With the IT infrastructure at the Wellcome Sanger Institute a key factor in pushing back the boundaries of science, Dr Peter Clapham, Informatics Support Group Team Leader says, “With Canonical, we have a platform in place for meeting leading edge requirements, ensuring resilience, and making sure that as it grows, the Institute has a provider that can grow with it and its support needs.” He adds, “We’ve engaged with Canonical for the confidence that we’re not just meeting challenges from today, but that we’re also looking to the future and the continuity of our technical solutions.”

To see more about how the Institute is meeting new pharmaceutical and technical goals, the full interview can be viewed below. Alternatively, learn more in the case study.

The Fridge: Ubuntu Weekly Newsletter Issue 625

1 day 13 hours ago

Welcome to the Ubuntu Weekly Newsletter, Issue 625 for the week of March 29 – April 4, 2020. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

Jonathan Carter: Free Software Activities for 2020-03

1 day 19 hours ago

DPL Campaign 2020

On the 12th of March, I posted my self-nomination for the Debian Project Leader election. This is the second time I’m running for DPL, and you can read my platform here. The campaign period covered the second half of the month, where I answered a bunch of questions on the debian-vote list. The voting period is currently open and ends on 18 April.

Debian Social

This month we finally announced the Debian Social project, a project that hosts a few websites with the goal of improving communication and collaboration within the Debian project, improving visibility of the work that people do, and making it easier for general users to interact with the community and feel part of the project.

Some History

This has been a long time in the making. From my side I’ve been looking at better ways to share/play our huge DebConf video archives for the last 3 years or so. Initially I was considering either some sort of script or small server side app that combined the archives and the metadata into a player, or using something like MediaDrop (which I was using on my highvoltage.tv website for a while). I ran into a lot of MediaDrop’s limitations early on. It was fine for a very small site but I don’t think it would ever be the right solution for a Debian-wide video hosting platform, and it didn’t seem all that actively maintained either. Wouter went ahead and implemented a web player option for the video archives. His solution is good because it doesn’t rely on any server side software, so it’s easy to mirror and someone who lives on an island could download it and view it offline in that player. It still didn’t solve all our problems though. Popular videos (by either views or likes) weren’t easily discoverable, and the site itself isn’t that easy to discover.

Then PeerTube came along. PeerTube provides an interface similar to MediaDrop or YouTube, with likes, view counts and comments. But what really set it apart from previous things that we looked at was that it’s a federated service. Not only does it federate with other PeerTube instances, but the protocols it uses mean that it can connect to all kinds of other services that make up an interconnected platform called the Fediverse. This was especially great since independent video sites tend to become these lonely islands on the web that become isolated and forgotten. With PeerTube, video sites can subscribe to similar sites on the Fediverse, which makes videos and other video sites significantly more discoverable and attracts more eyeballs.

At DebConf19 I wanted to ramp up the efforts to make a Debian PeerTube instance a reality. I spoke to many people about this and discovered that some Debianites are already making all kinds of Debian videos in many different languages. Some were even distributing them locally on DVD and have never uploaded them. I thought that the Debian PeerTube instance could not only be a good platform for DebConf videos, but it could be a good home for many free software content creators, especially if they create Debian specific content. I spoke to Rhonda about it, who’s generally interested in the Fediverse and wanted to host instances of Pleroma (a microblogging service) and PixelFed (a free image hosting service that resembles Instagram), but needed a place to host them. We decided to combine efforts, and since a very large number of fediverse services end with .social in their domain names, we ended up calling this project Debian Social. We’re also hosting some non-fediverse services like a WordPress multisite and a Jitsi instance for video chatting.

Current Status

Currently, we have a few services in a beta/testing state. I think we have most of the kinks sorted out to get them to a phase where they’re ready for wider use. Authentication is a bit of a pain point right now. We don’t really have a single sign-on service in Debian that guest users can use, or that all these services integrate with. So for now, if you’re a Debian Developer who wants an account on one of these services, you can request a new account by creating a ticket on salsa.debian.org and selecting the “New account” template. Not all services support having dashes (or even any punctuation in the username whatsoever), so to keep it consistent we’re currently appending just “guest” to salsa usernames for guest users, and “team” at the end of any Debian team accounts or official accounts using these services.

Stefano finished uploading all the Debconf videos to the PeerTube instance. Even though it’s largely automated, it ended up being quite a big job fixing up some old videos, their metadata and adding support for PeerTube to the DebConf video scripts. This also includes some videos from sprints and MiniDebConfs that had video coverage, currently totaling 1359 videos.

Future plans

This is still a very early phase for the project. Here are just some ideas that might develop over time on the Debian Social sites:

  • Team accounts. Some Debian teams already have accounts on a myriad of other platforms. For example, the Debian Med team has a blog on blogspot and the Debian Publicity team has an account on framapiaf.org. I’d really like to make our Debian Social platforms (like our WordPress multisite instance and Pleroma) a place that Debian teams can trust to host their updates. It would also be nice to have more teams use these that don’t have a particularly big online presence right now, like Debian women or a DPL team account.
  • Developer demos. I enjoy the videos that the GNOME project makes to demo the new features in every release, as they’ve done for the 3.36 release. I think it would be great if people in Debian could make some small videos to demo the things that they’ve been working on. It doesn’t have to be as flashy or elaborate as the GNOME video I’ve linked to, but sometimes just a minute-long demo can be really useful to convey a new idea or feature or to show progress that has been made.
  • User participation. YouTube is full of videos that review Debian or demo how to customise it. It would be great if we could get users to post such videos to PeerTube. For Pixelfed, I’d like to try out projects like users posting pictures of their computers with freshly installed Debian systems with a hashtag like #WeInstallDebian, then at the end of the year we could build a nice big mosaic that contains these images. Might make a cool poster for events too.
  • DebConf and other Debian events. We used to use a Gallery instance to host DebConf photos, but it’s always been a bit cumbersome managing photos there, and Gallery hasn’t updated its UI much over the years, causing it to fall a bit out of favour with attendees at these events. As a result, photos end up getting lost in WhatsApp/Telegram/Signal groups, Twitter, Facebook, etc. I hope that we could get enough users signed up on the Pixelfed instance that it could become the de facto standard for posting Debian event photos. Having a known central place to post these makes them easier to find as well.

If you’d like to join this initiative and help out, please join #debian-social on oftc. We’re also looking for people who can help moderate posts on these sites.

Debian packaging

I had the sense that there were fewer upstream releases this month. I suspect that everyone was busy figuring out how to cope during the COVID-19 lockdowns taking place all over the world.

2020-03-02: Upload package calamares (3.2.10-1) to Debian unstable.

2020-03-10: Upload package gnome-shell-extension-dash-to-panel (29-1) to Debian unstable.

2020-03-10: Upload package gnome-shell-extension-draw-on-your-screen (5.1-1) to Debian unstable.

2020-03-28: Upload package gnome-shell-extension-dash-to-panel (31-1) to Debian unstable.

2020-03-28: Upload package gnome-shell-extension-draw-on-your-screen (6-1) to Debian unstable.

2020-03-28: Update package python3-flask-autoindexing packaging, not releasing due to licensing change that needs further clarification. (GitHub issue #55).

2020-03-28: Upload package gamemode (1.5.1-1) to Debian unstable.

2020-03-28: Upload package calamares (3.2.21-1) to Debian unstable.

Debian mentoring

2020-03-03: Sponsor package python-jaraco.functools (3.0.0-1) (Python team request).

2020-03-03: Review python-ftputil (3.4-1) (Needs some more work) (Python team request).

2020-03-04: Sponsor package pythonmagick (0.9.19-6) for Debian unstable (Python team request).

2020-03-23: Sponsor package bitwise (0.41-1) for Debian unstable (Email request).

2020-03-23: Sponsor package gpxpy (1.4.0-1) for Debian unstable (Python team request).

2020-03-28: Sponsor package gpxpy (1.4.0-2) for Debian unstable (Python team request).

2020-03-28: Sponsor package celery (4.4.2-1) for Debian unstable (Python team request).

2020-03-28: Sponsor package buildbot (2.7.0-1) for Debian unstable (Python team request).

Jonathan Riddell: OpenUK Awards

1 day 21 hours ago

The OpenUK Awards are now open for nominations.

The First Edition of the OpenUK Awards will be held in London at 6pm on 20 October 2020, celebrating Open Technology (Open Source software, Open Source hardware and Open Data) with 5 awards.
  • Individual
  • Young person (being 25 or under on 30 March 2020)
  • Open Data – company or project
  • Open Source hardware – company or project
  • Open Source software – company or project

We are looking for the best in open source, hardware and data in the UK. Who has achieved something great? Who has not been recognised? Which company or project is doing fabulous work that needs exposure?

Nominations are open until 15 June 2020, but don’t delay: nominate today.

The awards final will take place at 6pm on 20 October 2020 at the Unilever Building, London, either in person or over video as appropriate.

Lubuntu Blog: Lubuntu 20.04 LTS Beta Released!

2 days 14 hours ago
Your Lubuntu team has been hard at work, and has now released the beta version of Lubuntu 20.04 LTS. This will be our 18th release of Lubuntu and our fourth LTS release, but our first LTS with the new LXQt desktop. Between April 2nd and April 23rd, all efforts will be focused on testing our […]

Ubuntu Podcast from the UK LoCo: S13E02 – Walking under ladders

3 days 1 hour ago

This week we’ve been live streaming Ubuntu development and replacing VirtualBox with Bash. We discuss Mark’s new Linux Steam PC set-up, bring you some musical command-line love and go over all your feedback!

It’s Season 13 Episode 02 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

sudo snap install ncspot
ncspot
  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!
  • Image credit: Miguel Orós

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

Simos Xenitellis: A network-isolated container in LXD

3 days 12 hours ago

In this post we see how to get different types of network-isolated containers in LXD. Even if you are not interested in such things, doing this tutorial will help you better understand LXD proxy devices.

LXD container with no networking

To get a LXD container without networking, you omit the networking configuration in the profile that is used to create it. Therefore, we create such a profile and then use it for all our containers that have no networking.

Creating the nonetwork profile

First, we copy the default LXD profile as the nonetwork profile, then edit nonetwork to remove the networking bits. We use this profile from now on to create containers with no networking support.

$ lxc profile copy default nonetwork
$ lxc profile show nonetwork
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: nonetwork
used_by: []
$ lxc profile device list nonetwork
root
eth0
$ lxc profile device remove nonetwork eth0
Device eth0 removed from nonetwork
$ lxc profile show nonetwork
config: {}
description: Default LXD profile
devices:
  root:
    path: /
    pool: default
    type: disk
name: nonetwork
used_by: []
$

As a side-note, I would like to change the description of the profile to something like Profile without networking. There is no direct command for this yet, and we need to edit the whole configuration with lxc profile edit. To do so, run EDITOR=nano lxc profile edit nonetwork and change the text of the description. Save, exit, and you are done. Here is the final profile for nonetwork.

$ lxc profile show nonetwork
config: {}
description: Profile without networking
devices:
  root:
    path: /
    pool: lxd
    type: disk
name: nonetwork
used_by: []
$

Creating a nonetwork container

We can now create a container that uses the nonetwork profile. When we run lxc launch, we specify the nonetwork profile and use the default ubuntu container image (ubuntu:), which is currently Ubuntu 18.04 LTS. In a few months this will switch to Ubuntu 20.04 LTS. We are happy with any LTS container image; if you want to specify Ubuntu 18.04 LTS explicitly, replace ubuntu: with ubuntu:18.04. Finally, we give the name withoutnetworking. Once the container is created, we lxc list it to verify there is no IP address, and finally we get a shell into it with lxc ubuntu containername (lxc ubuntu is a handy alias for getting a shell).

$ lxc launch --profile nonetwork ubuntu: withoutnetworking
Creating withoutnetworking

The instance you are starting doesn't have any network attached to it.
  To create a new network, use: lxc network create
  To attach a network to an instance, use: lxc network attach

Starting withoutnetworking
$ lxc list withoutnetworking
+-------------------+---------+------+-----------+
|       NAME        |  STATE  | IPV4 |   TYPE    |
+-------------------+---------+------+-----------+
| withoutnetworking | RUNNING |      | CONTAINER |
+-------------------+---------+------+-----------+
$ lxc ubuntu withoutnetworking
ubuntu@withoutnetworking:~$

What’s the state of networking in this container? Only loopback is there, no routes.

ubuntu@withoutnetworking:~$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
ubuntu@withoutnetworking:~$ ip route
ubuntu@withoutnetworking:~$

We have created a container without any networking. We can still get a shell into it with lxc exec (or the handy alias lxc ubuntu). We can move files and programs into and out of this container with lxc file push and lxc file pull. By doing so, we can be sure that whatever runs in this container cannot communicate over the network.
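For example, a typical exchange with the isolated container could look like this (the file names are illustrative):

$ lxc file push myprogram withoutnetworking/home/ubuntu/       # copy a program into the container
$ lxc file pull withoutnetworking/home/ubuntu/results.txt .    # copy results back out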

How to enable networking with SOCKS5

There is no networking in the container, so how do we install packages? How can we add networking temporarily, in some controlled way? One way is to attach a network device using lxc commands, then remove it. Another is to use a proxy. The benefit of using a proxy is that, depending on your needs, you can switch to one that provides fine-grained control over what is being accessed. For this tutorial, we are using SOCKS5 and a SOCKS5 server running on the host. The container communicates with this proxy server over a LXD proxy device.

Creating the SOCKS5 server on the host

We are using a SOCKS5 server written in the Go language. Install golang on the host, then run the following commands to set up and run the server. Grab this Go file for a minimal SOCKS5 server. The filename is `simplesocks5proxyserver.go`.

$ sudo snap install go
$ go run simplesocks5proxyserver.go
Listening on 0.0.0.0:10080...
Press Ctrl+C to interrupt:

Leave this program running as long as you want the proxy server running. This specific server is unauthenticated (anyone can connect), which means that anyone on the local LAN of the host is able to use this service as an open proxy.
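If you would rather not run the Go program, any SOCKS5 server will do. For instance, OpenSSH can act as one; assuming an SSH server on the host, the following is a quick stand-in (the same open-proxy caveat applies when binding to 0.0.0.0):

$ ssh -N -D 0.0.0.0:10080 localhost   # -D starts a SOCKS5 proxy on port 10080, -N runs no remote command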

To verify that the SOCKS5 server is running, use the following command. It is a curl command that connects to ubuntu.com. The command is successful if you get any output.

$ curl -x socks5h://127.0.0.1:10080/ https://www.ubuntu.com
<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>openresty/1.15.8.2</center>
</body>
</html>
$

Creating the LXD proxy device

The following command adds a proxy device to the container. The device is called socks5port10080 (arbitrary name). It connects to the loopback interface on port 10080 (where the SOCKS5 server is active) and it listens (binds) for connections on the loopback interface on port 1080. We specify that the listen/bind will happen in the container, hence the connect will be on the host. We need to specify bind=container because if we omit it, the default is bind=host.

$ lxc config device add withoutnetworking socks5port10080 proxy connect=tcp:127.0.0.1:10080 listen=tcp:127.0.0.1:1080 bind=container
Device socks5port10080 added to withoutnetworking
$

To verify that there are open ports on both the host and the container, use ss -tuna (lsof -i cannot see the port!?!). It should show an open port on loopback on the host at port 10080, and an open port in the container on port 1080. Here is how it looks.

$ lxc ubuntu withoutnetworking
ubuntu@withoutnetworking:~$ sudo lsof -i
COMMAND   PID            USER  FD  TYPE DEVICE SIZE NODE NAME
systemd-r 207 systemd-resolve 12u IPv4 915825  UDP localhost:domain
systemd-r 207 systemd-resolve 13u IPv4 915826  TCP localhost:domain (LISTEN)
sshd      275            root  3u IPv4 914329  TCP *:ssh (LISTEN)
sshd      275            root  4u IPv6 914340  TCP *:ssh (LISTEN)
ubuntu@withoutnetworking:~$ ss -tuna
Netid State  Recv-Q Send-Q Local Address:Port  Peer Address:Port
udp   UNCONN 0      0      127.0.0.53%lo:53    0.0.0.0:*
tcp   LISTEN 0      128    127.0.0.53%lo:53    0.0.0.0:*
tcp   LISTEN 0      128    0.0.0.0:22          0.0.0.0:*
tcp   LISTEN 0      128    127.0.0.1:1080      0.0.0.0:*
tcp   LISTEN 0      128    [::]:22             [::]:*
ubuntu@withoutnetworking:~$ exit
$

Configuring the container to use a SOCKS5 proxy

Get a shell into the container and add the proxy to the APT configuration.

$ lxc ubuntu withoutnetworking
ubuntu@withoutnetworking:~$ echo 'Acquire::http::Proxy "socks5h://localhost:1080/";' | sudo tee /etc/apt/apt.conf.d/12proxy
Acquire::http::Proxy "socks5h://localhost:1080/";
ubuntu@withoutnetworking:~$

Now we can use apt in the container. Other software in the container cannot access the network unless it is configured to use a SOCKS5 client. Here we are running apt update. Note that the command mentions the use of the proxy.

ubuntu@withoutnetworking:~$ sudo apt update
0% [Connecting to SOCKS5h proxy (socks5h://localhost:1080)] ...

In case of an error, test with the following. We connect to ubuntu.com using curl, specifying the SOCKS5 proxy directly on the command line, just like we did earlier on the host.

$ curl -x socks5h://localhost:1080/ https://www.ubuntu.com
<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>openresty/1.15.8.2</center>
</body>
</html>
$

At this point, the container has access to the Internet only through port 1080 (the SOCKS5 service). When we terminate the SOCKS5 server, the access is lost.
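Other clients inside the container that honour proxy environment variables, such as curl, can be pointed at the proxy explicitly. This is an optional convenience (curl reads ALL_PROXY; not every tool supports SOCKS5):

ubuntu@withoutnetworking:~$ export ALL_PROXY=socks5h://localhost:1080
ubuntu@withoutnetworking:~$ curl https://www.ubuntu.com   # curl now tunnels through the SOCKS5 proxy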

How to setup a Web server

apt is working from the previous step. Let’s install nginx, then tear down the SOCKS5 proxy, and finally create a LXD proxy device to access the Web server. The result is somewhat similar to setting up a firewall in the container (disallowing traffic originating from inside the container), without actually using one.

We install the nginx Web server.

ubuntu@withoutnetworking:~$ sudo apt update
ubuntu@withoutnetworking:~$ sudo apt install nginx -y

Then, on the host we create a LXD proxy device to expose the Web server as port 8880 on the host (you may change this to 80, if there is no Web server already running on that port on the host). We create a proxy device called nonetwebserver, which listens for connections on port 8880 on the host and connects to port 80 in the container. The listener is on the host, therefore the device binds on the host (bind=host). We could omit bind=host, as listening on the host is the default.

ubuntu@withoutnetworking:~$ exit
$ lxc config device add withoutnetworking nonetwebserver proxy listen=tcp:127.0.0.1:8880 connect=tcp:127.0.0.1:80 bind=host
Device nonetwebserver added to withoutnetworking

If you want to expose the Web server to your LAN, then you can replace listen=tcp:127.0.0.1:8880 with listen=tcp:0.0.0.0:8880.
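To verify the proxy device from the host, you can fetch the nginx welcome page through the listening port:

$ curl -s http://127.0.0.1:8880/ | head   # should show the start of the nginx welcome page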

Here is a screenshot of the website. Note that I took the liberty of editing /var/www/html/index.nginx-debian.html as shown below.

Screenshot of the web server running in a LXD container without network connectivity.

You may tear down the SOCKS5 setup now. Remove the proxy device and stop the SOCKS5 server by pressing Ctrl+C. The web server (or other service you may set up) will continue to work, as long as it does not require connectivity to the Internet.

$ lxc config device remove withoutnetworking socks5port10080
Device socks5port10080 removed from withoutnetworking
$ go run simplesocks5proxyserver.go
Listening on 127.0.0.1:10080...
Press Ctrl+C to interrupt:
^Csignal: interrupt
$

Summary

We have created a LXD container that has no Internet connectivity. We then provided temporary Internet connectivity using a SOCKS5 proxy in order to install the nginx web server. We could have temporarily added a network interface instead, but for the purpose of this tutorial, we went full SOCKS5 proxy. You can replace our SOCKS5 proxy with another one that allows you to inspect the network traffic in detail.

Doing all these steps is sort of like a poor man’s firewall. You can assume that we have set up a firewall on the container so that no incoming or outgoing traffic is allowed. A SOCKS5 proxy can selectively bypass the firewall. A proxy device may allow selective incoming traffic to the container.

If none of these interest you, you may replicate this tutorial anyway in order to practice using LXD proxy devices.

blog.simos.info/

Simos Xenitellis: How to get LXD containers to get an IP from the LAN with the routed network

3 days 13 hours ago

You are using LXD containers and you want a container (or more) to get an IP address from the LAN (or, get an IP address just like the host does).

LXD currently supports four ways to do that, and depending on your needs, you select the appropriate way.

  1. Using macvlan. See https://blog.simos.info/how-to-make-your-lxd-container-get-ip-addresses-from-your-lan/
  2. Using bridged. See https://blog.simos.info/how-to-make-your-lxd-containers-get-ip-addresses-from-your-lan-using-a-bridge/
  3. Using routed. It is this post, read on.
  4. Using ipvlan. This tutorial is pending.

For more on the routed network option, see the LXD documentation on routed and this routed thread on the LXD discussion forum.

Why use the routed network?

You would use the routed network if you want to expose containers to the local network (LAN, or the Internet if you are using an Internet server, and have allocated several public IPs).

Any containers with routed will appear on the network to have the MAC address of the host. Therefore, this will work even when you use it on your laptop that is connected to the network over WiFi (or any router with port security). That is, you can use routed when macvlan and bridged cannot work.

You have to use static network configuration for these containers. Which means,

  1. You need to make sure that the IP address on the network that you give to the routed container, will not be assigned by the router in the future. Otherwise, there will be an IP conflict. You can do so if you go into the configuration of the router, and specify that the IP address is in use.
  2. The container (i.e. the services running in the container) should not be performing changes to the network interface, as it may mess up the setup.

Requirements for Ubuntu containers

The default network configuration in Ubuntu 18.04 or newer is to use netplan and have eth0 use DHCP. The way netplan does this interferes with routed, so we are using a workaround. This workaround is required only for the Ubuntu container images; other distributions like CentOS do not require it. The workaround is based on cloud-init, and it is the whole cloud-init section in the profile below.

The routed LXD profile

Here is the routed profile. Create a profile with this name. Then, for each container that uses the routed network, we will create a new individual profile based on this initial profile. The reason we create such individual profiles is that we need to hard-code the IP address in them. Below you can see the values that can be changed, specifically the IP address (in two locations, replace with your own IP addresses), the parent interface (on the host), and the nameserver IP address (that one is a public DNS server from Google). You can create an empty profile, then edit it and replace the existing content with the following (lxc profile create routed, lxc profile edit routed).

config:
  user.network-config: |
    version: 2
    ethernets:
      eth0:
        addresses:
        - 192.168.1.200/32
        nameservers:
          addresses:
          - 8.8.8.8
          search: []
        routes:
        - to: 0.0.0.0/0
          via: 169.254.0.1
description: Default LXD profile
devices:
  eth0:
    ipv4.address: 192.168.1.200
    nictype: routed
    parent: enp6s0
    type: nic
name: routed_192.168.1.200
used_by:

We are going to make copies of the routed profile, one for each IP address. Therefore, let’s create the LXD profiles for 192.168.1.200 and 192.168.1.201. When you edit them, change the IP address (in both locations) to match the profile name.

$ lxc profile copy routed routed_192.168.1.200
$ EDITOR=nano lxc profile edit routed_192.168.1.200
$ lxc profile copy routed routed_192.168.1.201
$ EDITOR=nano lxc profile edit routed_192.168.1.201

We are ready to test the profiles.

Using the routed network in LXD

We create a container called myrouted using the default profile and on top of that the routed_192.168.1.200 profile.

$ lxc launch ubuntu:18.04 myrouted --profile default --profile routed_192.168.1.200
Creating myrouted
Starting myrouted
$ lxc list -c ns4t
+------+---------+----------------------+-----------+
| NAME |  STATE  |         IPV4         |   TYPE    |
+------+---------+----------------------+-----------+
| myr..| RUNNING | 192.168.1.200 (eth0) | CONTAINER |
+------+---------+----------------------+-----------+
$

According to LXD, the container has been configured with the IP address that was supplied through the cloud-init configuration.

Get a shell into the container and ping

  1. your host
  2. your router
  3. an Internet host such as www.google.com.

All of the above should work. Finally, ping from the host to the IP address of the container. It should work as well.
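For example, from inside and then outside the container (the router address 192.168.1.1 is an assumption; use your own):

$ lxc exec myrouted -- sudo --login --user ubuntu
ubuntu@myrouted:~$ ping -c 2 192.168.1.1      # the router
ubuntu@myrouted:~$ ping -c 2 www.google.com   # an Internet host
ubuntu@myrouted:~$ exit
$ ping -c 2 192.168.1.200                     # from the host to the container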

Conclusion

You have configured routed in LXD so that one or more containers can get IP addresses from the network. Using a profile helps to automate the process. Still, if you want to set things up manually, see the references above for instructions.

blog.simos.info/

Ubuntu Blog: Edge AI in a 5G world – part 4: How your business can benefit from ‘smart cell towers’

4 days 22 hours ago

This is part of a blog series on the impact that 5G and GPUs at the edge will have on the roll out of new AI solutions. You can read the other posts here.

Recap

In part 1 we talked about the industrial applications and benefits that 5G and fast compute at the edge will bring to AI products. In part 2 we went deeper into how you can benefit from this new opportunity. In part 3 we focused on the key technical barriers that 5G and Edge compute remove for AI applications. In this part we will summarise the IoT use cases that can benefit from smart cell towers and how they will help businesses focus their efforts on their key differentiating advantage. 

IoT use cases

Furthermore, the distributed nature of IoT products means that businesses, system integrators and network operators can configure Edge GPUs to help offload the computation from a wide array of IoT devices.

Orchestration of such distributed parts of the AI system is often referred to as ‘ML Ops’ and tools such as Kubeflow can help manage a complex AI pipeline with one single developer interface. 

Domain driven IoT development

The possibility of renting Edge compute resources means that businesses looking to launch an AI product do not all have to build a fully vertically-integrated solution. They can instead focus on their domain expertise, e.g. produce an outstanding robot arm or connected camera, and simply rent compute time from an Edge GPU server provider.

Customers are embracing this compute-offloading architecture to bring AI products to smartphones, IoT devices and robotics. 

The ‘5G + Edge GPU’ marriage will unlock AI solutions that were previously not possible.

Application domains

The industries that will benefit from this are:

  • Smart cities
  • Agriculture
  • Manufacturing
  • Transportation
  • Retail
  • Call centers

Overall, all of these applications will benefit from:

  1. computing at the edge,
  2. a good 5G network to connect them, and
  3. great software to manage whole fleets and estates of IoT devices.

Key players

Such changes in multiple aspects of the AI, telco and IoT industries will not be brought about by one company alone; various industries need to work together, namely:

  • Silicon vendors such as NVIDIA, Intel and AMD. 
  • Network specialists such as Juniper Networks, Arista and Cisco.
  • 5G providers such as Ericsson, Huawei and Nokia.
  • IoT and robotics manufacturers such as Bosch, Rigado and Siemens.
  • Public cloud providers such as Amazon Web Services, Azure and Google Cloud.

In line with its core mission, Canonical is keen to enable and accelerate the adoption of open source software. We are very excited by the prospect of helping enterprises in this space to reach their goals in the most effective, dependable and secure way possible.

Kubuntu General News: Kubuntu Focal Fossa (20.04 LTS) Beta Released

4 days 22 hours ago
The Plasma Desktop Environment

The beta of Focal Fossa (to become 20.04 LTS) has now been released, and is available for download.

Users of Kubuntu, Ubuntu, and other flavours are invited to take part in #UbuntuTestingWeek.

This milestone features images for Kubuntu and other Ubuntu flavours.

Pre-releases of the Focal Fossa are not recommended for:

  • Anyone needing a stable system
  • Regular users who are not aware of pre-release issues
  • Anyone uncomfortable running a possibly frequently broken system
  • Anyone in a production environment with data or workflows that need to be reliable

They are, however, recommended for:

  • Regular users who want to help us test by finding, reporting, and/or fixing bugs
  • Kubuntu, KDE, and Qt developers
  • Other Ubuntu flavour developers

The Beta includes some software updates that are ready for broader testing. However, it is an early set of images, so you should expect some bugs.

You can:

The Ubuntu Focal Fossa Release notes will give more details of changes to the Ubuntu base:

Ubuntu Blog: LXD 4.0 LTS stable release is now available

5 days 1 hour ago

The stable release of LXD, the machine container hypervisor, is now available. LXD 4.0 is the third LTS release for LXD and will be supported for 5 years, until June 2025. This version comes with a significant number of new features, including virtual machine (VM) support, the introduction of projects, and improved networking, storage and security capabilities.

What’s new in LXD 4.0 LTS?

LXD can now run both containers and virtual machines. VM images are now available for the most commonly used Linux distributions, and more will be added in the future. The latest addition to the VM support feature set is backup via import/export commands. LXD aims to provide a similar user experience regardless of whether a user wants to spin up a container or a virtual machine.
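For a feel of that unified experience, launching a VM differs from launching a container by a single flag; a quick sketch (VM image availability may vary by remote):

$ lxc launch ubuntu:18.04 c1        # a system container
$ lxc launch ubuntu:18.04 v1 --vm   # a virtual machine
$ lxc list                          # both instances show up side by side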

Another significant improvement from LXD 3.0 is the concept of projects that help users better organize their containers and VMs. Projects help group relevant instances, images, profiles and storage volumes by segmenting the LXD server. Project-based restrictions, access control and resource quota configuration are also available.

On the networking side, LXD 4.0 brings API modifications that enable network status reporting, providing better network monitoring capabilities. DHCP leases, support for nftables, NAT source address and MAC address configuration are also new features of LXD that enhance network configuration capabilities for containers and VMs. The latest version increment included in LXD 4.0 LTS adds container support for ipvlan and routed NIC types for IPv4 and IPv6.

Furthermore, LXD’s storage layer has been modified entirely since the previous LTS release to improve latency and flexibility. As a result, it is very easy to add support for new storage backends in LXD. Cephfs is the latest addition, enabling the last missing storage interface of the highly popular software-defined storage solution. As of LXD 4.0 LTS, you can also separate metadata and data pools when using Ceph as a LXD backend.

LXD 4.0 LTS comes with security enhancements, such as support for role-based access control that is made available through the use of Canonical RBAC and cgroup v2 support, to securely distribute system resources to processes. 

Finally, a lot of improvements were made on the snapshot management side, notably the ability to copy or move container instances between storage pools and the exposure of every individual snapshot size through the API. You can find the full list of changes on the LXD blog.

Why LXD?

If you have yet to familiarise yourself with LXD and machine containers, you should know that they provide a fully-functional OS running from the filesystem. They bring the same performance and latency as application containers, but with increased security, and offer lower resource consumption and better latency than virtual machines. LXD’s main goal is to streamline lift and shift for traditional, monolithic applications running on virtual machines or bare metal, and to enable microservice application development. It can run several thousand containers and virtual machines on a single machine, offers a REST API and can easily be clustered for large scale deployments.

You can try LXD on any Ubuntu machine as it comes pre-installed with all Ubuntu LTS releases. Follow the get started guide for all major Linux distributions, Windows and MacOS or try it online.
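As a minimal first run, assuming the snap is present, something like the following gets you from nothing to a shell in a container:

$ sudo snap install lxd             # already pre-installed on Ubuntu LTS releases
$ lxd init --auto                   # accept sensible defaults
$ lxc launch ubuntu:18.04 first     # create and start a container
$ lxc exec first -- bash            # get a shell inside it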

Learn more on the LXD website.

Ubuntu Blog: The State of Robotics – March 2020

5 days 1 hour ago

Damn it March. 2020 was doing so well. The biggest news last month was the dramatic escalation of COVID-19. We won’t go into any detail, I’m sure you’re seeing enough of that. But due to the outbreak, the state of robotics this March has been heartwarming. We have seen a surge in online learning platforms, companies, startups and communities rising to the challenge. Members of open-source communities across the world are doing great things, with and without robotics, to support whoever they can. In this blog, we first want to highlight a few responses to COVID-19 using robotics. And then it’s back to the usual programming, highlighting robotics work and projects we have seen or done in March. If we have missed something in particular, please reach out to robotics.community@canonical.com and let us know.

COVID-19, robots and us – A Discussion

Let’s start with awareness. Silicon Valley Robotics and the CITRIS People and Robots Initiative are hosting a weekly “COVID-19, robots and us” online discussion with experts from the robotics and health community on Tuesdays at 7pm (California time – PDT) and you can sign-up for the free event here. Each week they host different expert special guests to talk about the problem and the possible solutions. 

The two communities they are promoting are Open Source COVID-19 Medical Supplies Group, a rapidly growing Facebook group formed to evaluate, design, validate, and source the fabrication of open-source emergency medical supplies. And Helpful Engineering, another rapidly growing global network created to design, source and execute projects that can help people suffering from the COVID-19 crisis worldwide. I heavily encourage you to take a look and if you are able, get involved. 

April 7th thumbnail with the Willow Garage PR2 serving tea

Fighting SARS-CoV-2 with light and robots

In 2011, the terrible Fukushima meltdown cast a shadow of disillusion over the state of robotics for the public and, importantly, over roboticists themselves. The technology simply wasn’t up to the (incredibly) difficult tasks we wished robots could tackle for us. In 2020, as the world is facing a global pandemic, robots are assisting in all kinds of tasks. They are handling room service in isolation centres, patrolling streets and entertaining the elderly. Robots are once again under the spotlight. Some robots *are* spotlights; UV spotlights. UVD Robots, a company founded in 2016 by BlueOcean Robotics, produces a mobile-base robot mounted with powerful UV lights. Its task is clear: kill 99.99% of pathogens where it operates. Needless to say, this help is more than welcome in hospitals currently.

UBTECH Robots in hospitals 

UBTECH is an AI and humanoid robotics company that works on everything from industrial service robots to STEM educational build kits. Recently, the Third People’s Hospital of Shenzhen (TPHS), the only hospital treating COVID-19 in Shenzhen despite the city’s population of more than 12.5 million people, has enlisted the help of UBTECH robots. UBTECH has sent in three types, ATRIS, AIMBOT and Cruzr, to monitor body temperature, detect people without masks and spray disinfectant, respectively. We’re not sure whether they use ROS or Ubuntu, but it’s another case of robots rising to the occasion.

GitHub Actions for ROS 2

In non-COVID news, the ROS 2 Tooling Working group put out a set of GitHub Actions for setting up ROS on a runner and automatically building and testing packages. Setting up a CI is easier than ever, and Ubuntu Robotics’ own Ted Kern put together a primer on how to get your pipeline set up quickly and easily. 

A new opportunity to learn ROS

Last month The Construct took the opportunity to broadcast a week-long series of free classes for learning how to program with ROS. The five classes are targeted at beginners with no knowledge of ROS, Linux or Python, as all of these are covered in the series. Each class is two hours long and the recorded videos are available on YouTube. We read about it originally in the ROS discourse; for more information head there.

ROS on a Container

Is it possible to develop ROS applications in a container?  Absolutely! There are a number of reasons such as testing different releases of ROS, or to develop on multiple isolated projects.  Ted Kern, a member of Canonical’s robotics team, explains how to set up a ROS development environment in LXD in this blog post. The post covers setting up a workspace, mapping your robot devices into the container, and enabling a graphical development environment.

SpotMicroAI

On another cheery note, we stumbled across an open-source project this month called SpotMicroAI. It’s a GitLab project that teaches you how to build a miniature version of Boston Dynamics’ robotic dog, the one Adam Savage is testing on YouTube. It’s a long-term kind of project and looks quite complicated at a glance, but if you can get one running to take for walks I’ll be mighty jealous. If we find some time maybe we’ll get one cracking on Ubuntu Core.

Outro

At Canonical we are privileged to have a well-established pattern of remote work and have been glad to help others by sharing our experience of distributed collaboration and operations. Our official commercial stance is: please lean on us, we stand ready to help. And for the community, for projects and initiatives we can lend our help to, the same message applies. So if you have a project for us to talk about, or you know of a project we missed in this article, get in touch through robotics.canonical@canonical.com and I’ll get it to the right people. Stay safe in April.

Josh Powers: Ubuntu 20.04 LTS Beta

5 days 12 hours ago
As announced, the Ubuntu 20.04 LTS (Focal Fossa) beta images are now available! Those of you subscribed to the ubuntu-server mailing list can clearly see the hard work that has gone on this cycle to get the latest and greatest software to our users. Check out the initial release notes for more details and please help us by testing the beta version of Ubuntu Focal! Setting up a test system and upgrading from Ubuntu 18.

Simos Xenitellis: Re: 30 Things to do After Installing Ubuntu 18.04 LTS (all-in-one video)

5 days 13 hours ago

Average Linux User has created a YouTube video on 30 Things to do After Installing Ubuntu 18.04 LTS. It is a well-prepared, high-quality, informative video. I recommend watching it. There are a couple of nitpicks though, and in this post I go into detail about them. See the Discussion below.

First, here is the YouTube video,

30 Things to do After Installing Ubuntu 18.04 LTS, by Average Linux User (ALU)

Discussion

We comment on the 30 things to do after installing Ubuntu 18.04 LTS.

  1. The Canonical Partners repository is indeed not really used anymore. Currently, it has just the Adobe Flash plugin, the Google Cloud SDK and IBM Java 8.0.
    The Adobe Flash plugin, as provided there, can be used in Mozilla Firefox and Chromium. Chrome keeps (and updates) its own copy of the Adobe Flash plugin. I suggest installing Chrome if you need to visit a website that requires the Flash plugin. Security-wise, it is not worth using a browser that has Adobe Flash always enabled. Adobe is stopping support for the Flash plugin at the end of 2020.
    The Google Cloud SDK has not been updated since 2018. It is probably better to retrieve it from the source than use the packaged version.
    IBM Java 8.0 is being updated, though you must have very specific needs to use it.
    Therefore, enable the Canonical Partner repository only if you really need any of the above. Traditionally, software like Skype used to be provided in this repository, but not anymore. Those are now provided from the Snap Store.
  2. The Linux kernel contains all the necessary device drivers. Contrary to Windows where you install drivers for most hardware, in Ubuntu you are shown here to install only closed-source/proprietary drivers that cannot be included in the Linux kernel. In practice, you will see here the NVidia graphics driver (AMD and Intel produce a free/open-source driver, hence it is included already in the Linux kernel).
  3. I do not recommend installing the Synaptic Package Manager (caveat: some things cannot be done in 18.04’s Ubuntu Software). According to the source code, it is not developed anymore. The chances of making a mistake, if you are a new user, are too damn high. The common mistake is to remove a package that somehow pulls in a lot of other packages and makes your Ubuntu unable to start again. And when you want to install a package, how do you decide which package name is the most appropriate?
  4. Regarding the additional restricted video codecs, it is better not to install them up front, but to have them installed on demand, when you really have such a video to play. The Video player in Ubuntu (totem) has been adapted so that when you try to play a video with an unknown codec, it will look for, and offer to install, the restricted codecs package. Same with DVD player support.
    The package flashplugin-installer is not the package that comes from the Canonical Partner repository. The package from the Canonical Partner repository is called adobe-flashplugin (last update 11 March 2020), and this package has both the NPAPI and PPAPI versions of the plugin. The flashplugin-installer package just grabs the NPAPI version of the plugin from the Canonical Partner repository and installs it specifically for Firefox. More on sorting out the Flash Plugin mess in Ubuntu. As above, if you really need Flash, I suggest putting up with having Chrome installed on your Ubuntu, and using it for those websites that happen to require Flash.
  5. The CPU microcode installer package should have been installed and updated automatically. If it is not installed automatically, then it is a bug. Both the Intel and AMD packages should have been installed automatically, and Ubuntu should auto-detect the CPU. See /etc/kernel/preinst.d/ and verify that the appropriate script for your CPU is correctly selected. If you install Ubuntu in a virtual machine, then the microcode package is not needed, and is not installed.
  6. The click-to-minimize tip for the icons on the dock is handy. The default is to show the list of open windows of that program, therefore, if you have more than one browser open and click on the browser icon on the dock, it shows thumbnails of those windows. So, if you enable click-to-minimize, and want to switch between the open windows of an application, you need to right-click on the icon, then go to the menu item All windows.
  7. In general, a low value for swappiness is specifically useful to servers, such as database servers. On a desktop system the applications do not tend to get swapped out if you have lots of memory anyway. It would be good to check this one in practice.
    I did not know about this feature: instead of gksudo gedit /etc/passwd (is there gksudo anymore?), you can run gedit admin:///etc/passwd (you are asked for the password; note that there are three /).
  8. You can check whether the WriteCache is enabled on a disk by running sudo hdparm -i /dev/sda. I think the default is enabled. The Disks utility has the option grayed out to indicate that it is not handling it. If unsure, check with hdparm for the value of WriteCache before making a change in Disks.
  9. placeholder (i.e., cool info and I have nothing else to add, but adding this keeps the numbers in line with the video).
  10. placeholder
  11. placeholder
  12. placeholder
  13. placeholder
  14. placeholder
  15. placeholder (cool!)
  16. You can now install deb packages with sudo apt install ./mypackage.deb. That is, apt can now install deb packages directly and sort out any dependencies at the same time. You need to include the leading ./ though.
  17. placeholder
  18. placeholder
  19. The package libreoffice-style-sifr cannot be installed from Ubuntu Software (née GNOME Software). In the original Ubuntu Software (by Canonical), there was a link in the search results if you were actually searching for non-GUI packages like this one. In that respect, it is handy to have the Synaptic Package Manager, as it is currently the only GUI package manager that can install such packages. In Ubuntu 20.04 LTS there will likely be the Snap Store (based on GNOME Software but forked), and I wonder whether such a feature will get reintroduced.
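    From the command line it is a one-liner, of course:
      # Install the Sifr icon theme for LibreOffice.
      sudo apt install libreoffice-style-sifr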
  20. placeholder
  21. placeholder
  22. placeholder
  23. placeholder
  24. The universe repository for Ubuntu 18.04 LTS carries BleachBit 2.0. The current upstream version (April 2020) is BleachBit 3.2.
  25. When you visit apt://vlc in the browser, it installs the package from the repositories, the same as sudo apt install vlc or installing from a GUI package manager. The VLC package in the universe repository is updated often and is good to use. The alternative would be to install the snap package of VLC. However, there is no version difference, so it is fine to use the deb package of VLC. You would probably prefer the snap package if you also wanted to easily test the development builds of VLC 4.0.
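    A sketch of the snap route (the channel name is my assumption; at the time of writing, the VLC 4.0 development builds were published on the edge channel):
      # Stable VLC as a snap.
      sudo snap install vlc
      # Switch an existing install over to the 4.0 development builds.
      sudo snap refresh vlc --channel=edge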
  26. Apart from Chrome, there is also Chromium, the free/open-source browser. Chrome is based on Chromium but adds more media codecs, the Flash Player, a PDF viewer and other nice things. Chromium can be installed either with apt://chromium-browser, with sudo apt install chromium-browser, or from the Ubuntu Software package manager.
  27. placeholder
  28. Skype is now available only as a snap package. It is slow to start the first time due to how snap packages are implemented; this issue is being addressed. Microsoft develops the Skype snap package themselves.
  29. Same with Spotify; the snap package is developed directly by Spotify.
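    Installing such vendor-maintained snaps is the usual one-liner; for example:
      # Spotify, packaged as a snap by Spotify themselves.
      sudo snap install spotify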
  30. placeholder
Conclusion

Thanks for creating this video for Ubuntu 18.04 LTS. It is immensely useful to both new and experienced users. I am looking forward to seeing the new video for Ubuntu 20.04. I think that Ubuntu Software (which is very closely related to GNOME Software, plus snap package support) is to be replaced by the Snap Store (also based on GNOME Software plus snap packages, but with some more changes). Currently, the beta version of Ubuntu 20.04 has both of them installed.

There is work underway to increase the performance of GNOME Shell. In the Ubuntu 20.04 iteration, the focus is on performance on high-end systems; for Ubuntu 20.10, the focus is on low-end systems. While testing the development version of Ubuntu 20.04, it feels more responsive/snappy. However, it is important to look at benchmarks in order to make the case.

blog.simos.info/

Podcast Ubuntu Portugal: Ep 84 – Zoom out

5 days 14 hours ago

Despite the isolation the times demand, the wonder duo has prepared another edition of your favourite podcast. UBports, Jitsi, 3D printing and much more. You know the drill: listen, comment and share!

  • https://www.vice.com/en_us/article/k7e599/zoom-ios-app-sends-data-to-facebook-even-if-you-dont-have-a-facebook-account
  • https://meet.jit.si/
  • https://octoprint.org
  • https://www.omgubuntu.co.uk/2020/03/tracktion-waveform-free-for-linux/
  • https://ubports.com/pt_PT/blog/ubports-blog-1/post/tax-exempt-donations-over-sepa-bank-wire-transactions-269
  • https://www.humblebundle.com/books/coding-starter-kit-no-starch-press-books?partner=PUP
  • https://www.humblebundle.com/books/software-development-oreilly-books?partner=PUP
Support

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, Senhor Podcast.

You can support the podcast by using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get everything for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option of paying as much as you like.

If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will be supporting us too.

Attribution and licences

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing the content for other kinds of use; contact us for validation and authorisation.

Dustin Kirkland: How We've Adapted Ubuntu's Time-based Release Cycles to Fintech and Software-as-a-Service

5 days 23 hours ago

The Spring launch of the Apex 20a platform on March 26, 2020, marks the beginning of an exciting, new era of product development at Apex Clearing.  We have adopted a number of product management best practices from the software industry and adapted them to address some of the unique challenges within financial services and software-as-a-service.  In the interest of transparency, I’m pleased to share our new processes, what we’ve learned along the way, and where we’re headed next. 
Elsewhere in the Software Industry

I joined Apex Clearing last year, having spent the previous 20 years as a software engineer, product manager, and executive, mostly around open source software, including Ubuntu, OpenStack, and Kubernetes.  Although IBM, Canonical, and Google differ from fintech on many levels, these operating systems and cloud infrastructure technology platforms share a number of similarities with Apex's software-as-a-service platform.  Moreover, there is also some literal overlap: we’re heavy users of both Ubuntu and Kubernetes here at Apex. 
Ubuntu, OpenStack, and Kubernetes all share similar, predictable, time-based release cycles.  Ubuntu has released every April and October, since October of 2004 – that's 32 major software platform releases, on time, every time, over 16 years.  Ubuntu has set the bar for velocity, quality, and predictability in the open source world.  OpenStack’s development processes have largely mirrored Ubuntu’s, with many of the early project leaders having been ex-Ubuntu engineers and managers.  OpenStack, too, has utilized a 6-month development cycle, since 2010, now on its 20th release.  Kubernetes came along in 2014, and sought to increase the pace a bit, with quarterly release cycles.  Kubernetes is a little bit looser with dates than Ubuntu or OpenStack, but has generally cranked out 4 quality releases per year, over the last 6 years.  I’ve been involved in each of these projects at some level, and I’ve thoroughly enjoyed coaching a number of early stage start-ups on how to apply these principles to their product development methodologies. 
Across the board, users, and especially enterprise customers, appreciate the predictability of these release cycles.  Corporate IT managers are better able to plan, test, and roll out technology changes to their environments, with less risk and disruption.  From a commercial perspective, these methodologies drove many wins against legacy, less predictable platforms.  
The Key Principles of Coordinated Cycles

As a product development team, you have: 
  1. Time 
  2. Work to complete 
  3. Resources to perform work 
To succeed, you must ensure that two of these three are fixed; only one may vary.  In the Ubuntu methodology, time and resources are assumed to be fixed.  Time is fixed, in that the product will release on time.  Resources are fixed, in that it takes many months to recruit, hire, and on-board new people.  We hire new people, of course, but we assume those resources will become productive in a subsequent cycle.  Hence, it’s the amount of work that we will complete which varies.  We will drop or defer commitments, but we won’t change the release dates, and we won't assume additional resources will have meaningful impact within a cycle. 
Within Ubuntu, OpenStack, and Kubernetes, each cycle would “kick off” with a summit or conference that brought together hundreds of developers and leaders from around the industry to discuss and debate designs for the release.  Anyone who’s participated in an Ubuntu Developer Summit, an OpenStack Design Summit, or a KubeCon Design Summit can tell you how essential these gatherings are to the success of the project.  Within Canonical, we also held a “Mid-Cycle Summit”, exactly at the half-way point of the cycle.  We used this checkpoint, as product and engineering teams, to right-size the scope and ensure that we hit our release dates with the highest quality standards.  Inevitably, new requirements and priorities would emerge, or some committed work proved more complicated than anticipated.  This checkpoint was critical to the success of each launch, as we adjusted the targeted scope for the remainder of the release.

Adapting these Processes to Apex

When I arrived at Apex in September 2019 to lead the product organization, I inherited an excellent team of product and project managers, peered with high-quality engineering teams.  Products and projects, however, were managed fairly asynchronously, so release timelines and new feature commitments were unpredictable.  Of course, I had seen this before at a number of the start-ups that I’ve advised, so the model was quite familiar.

Launch Cycles

When adapting the coordinated, time-based release cycle to a given organization, the first thing to consider is the time frame.  After talking to all of our stakeholders, 6-month releases like Ubuntu’s or OpenStack’s felt a little too long, a little too slow, for Apex and our customers.  Most of the engineering teams were already quite agile and utilizing 2-week sprints, so gathering product requirements for 26 weeks (13 sprints) seemed a little unwieldy.  Quarterly cycles, however, can be pretty tough to see through for anything but the smallest individual projects (frankly, Kubernetes struggles with the pace at times).  Moreover, all of the projects I’ve been involved in have struggled with the end-of-year holidays in November, December, and January.  Thus, we settled on a 16-week cycle, which amounts to roughly 4-month cycles.  That translates to 3 full cycles per year, with 48 weeks of development, while still allowing for 4 weeks of holidays. 
Our cycles are named for the year in which they will complete (launch), with a letter as an iterator.  In 2020, we launch Apex 20a (March), Apex 20b (July), and Apex 20c (November); looking forward to 2021, we should see Apex 21a (March), Apex 21b (July), and Apex 21c (November).  The “c” cycles run a few extra weeks, to account for the holidays near the end of the year.  These aren’t really “versions”, as Apex is more like “software as a service”, rather than “delivered software” like Ubuntu, OpenStack, and Kubernetes.  Also, conversationally, we're referring to the cycles by season -- so Apex 20a is our "Spring" launch, 20b will be our "Summer" launch, and 20c will be our "Autumn" launch.
Summits

Each cycle involves 3 key summits.  As much as possible, these summits are in-person meetings (at least until our travel paused along with the rest of the world).  At this point, we’re proceeding quite seamlessly with virtual summits instead.  Our summits are recorded in Zoom, and we always take extraordinarily detailed notes in internally shared documents. 
  1. Prioritization Summit 
  2. Planning Summit 
  3. Mid-cycle Summit 

Prioritization

The Prioritization Summit brings together our product managers with all of our key stakeholders – sales executives working with new business prospects, our client relationship managers working with existing customers, as well as our own IT, operations, and site reliability engineers tasked with keeping Apex running on a day-to-day basis.  Each product manager works with their stakeholders to gather CUJs (critical user journeys) and map those into patterns of similar, weighted product requests.  Product Managers generally spend about 2 weeks on that work, which culminates in a session where the Product Manager presents the consensus priorities for their product area for review by the broader product team.  Based on this work, each product manager then starts working on their PRDs (product requirements documents) for the next 3-4 weeks.  Our Prioritization Summit is about a dozen hour-long sessions, spread over three days in the same week.  We exit the Prioritization Summit with clear stakeholder consensus on stack-ranked priorities for each product family. 
Planning

The Planning Summit signals the end of the PRD-writing period, during which product managers work closely with their engineering counterparts, digesting all of those product requests and priorities and turning them into product requirements written in RFC 2119-style language (must, should, may, etc.).  At the end of that process, each Product Manager and their technical counterpart lead an hour-long session on their plan for the next cycle, including fairly detailed commitments as to the major changes we should expect to be delivered.  Our Planning Summit is about a dozen hour-long sessions, spread over three days in the same week.  We exit the Planning Summit with clear product and engineering consensus on work commitments across the product portfolio for the upcoming cycle.  This marks the beginning of the development portion of the cycle. 
For the next few weeks, product managers spend the majority of their time with Apex customers and prospects.  Each of us on the product team carry specific OKRs (objectives and key results), to spend meaningful time with our existing correspondents and prospects, communicating our product roadmaps and gathering feedback on their experiences.  We take detailed notes, and all of this data filters directly into our future Prioritization Summits. 
Mid-cycle

At the middle of our release cycle (week 8 of 16), we bring together the same Product Managers and technical leads to report on the status of the first 8 weeks (4 sprints) and recalibrate the remaining work for the cycle.  Without exception, there are always new, late-breaking product requests or requirements that emerge after the prioritization and planning summits.  Some of these are urgent and we must accommodate them, which usually means something else gets deferred to the next cycle.  Sometimes we were a little too optimistic with our work estimates, and again we need to adjust.  Occasionally, we’re ahead of schedule and we can cherry-pick some other bite-sized items to bring into scope.  In any case, we will exit the Mid-cycle Summit with a very clear line-of-sight on our deliverables by the end of the cycle. 
With any scope adjustments well understood, the product team shifts into “go-to-market" mode.  Over these next 3 weeks, Product Managers are working with our Marketing counterparts, writing release notes, creating marketing content, educating our sales teams, and working through signoffs on our launch checklists. 
At this point, the cycle repeats itself.  Once our go-to-market activities are complete, Product Managers shift back into prioritization mode, working with our stakeholders, while the engineering team completes their work and the marketing team publishes the launch. 
Speaking of...let’s talk about the Apex 20a Release. 
The Apex 20a and 20b Releases

Apex 20a launched on March 31, 2020, as our first release using the methodologies described above.  Apex clients can find detailed release notes in the Apex Developer Portal.  This cycle began with a Prioritization Summit in October 2019, a Planning Summit in November 2019, and a Mid-cycle Summit in January 2020.  This cycle involved 17 weeks of development. 
Our work on Apex 20b is already well underway, having held our Prioritization Summit in February 2020, and we’re holding our Planning Summit this week (March 2020).  Our Mid-cycle Summit will be held in May 2020, and we will launch Apex 20b in July 2020. 
It’s important to note that although we do have a very specific “launch date”, which signals the end of the development cycle, each of our engineering teams has developed, tested, and deployed to production hundreds of changesets during the cycle.  Thus, we maintain our agile CI/CD (continuous integration / continuous deployment) systems within every product and engineering team.  To be clear, we don’t “hold” anything specifically until launch date.  This is a very specific differentiation from Ubuntu, OpenStack, and Kubernetes, which are “shipped software”, as opposed to Apex technologies, which amount to “software as a service”.  For these reasons, we try to use the term “launch”, rather than “release”, when we talk about the “launch date” at the end of the cycle.  All that said, we have found the processes described here very useful in our planning and communications about Apex technology with our customers. 
In Conclusion

Apex 20a is the first of many coordinated product launch cycles our customers will experience.  We’ve adapted many of the best practices utilized by the open source software industry as well as Silicon Valley, and those practices are helping us work more effectively with our tech-savvy client base.  Apex will have 3 launches in 2020 (20a, 20b, 20c), and at least 3 launches in 2021.  By openly sharing our product stages and delivering a consistent, predictable, and reliable schedule, there are now ample opportunities for both customer input and detailed review and oversight by our leaders, which culminates in secure and stable products for our industry.  We’re delighted at the engagement thus far, and really look forward to more collaboration in the future. 
On behalf of the Apex product team, :-Dustin
Dustin Kirkland

Jonathan Riddell: KDE on Instagram

6 days ago

If you’re feeling stuck indoors during the lock down you can browse some happy pretty pictures on KDE’s new Instagram account.

Instagram is one of those social media services and is run by everyone’s favourite Facebook.  The good side of it is that it’s based on happy pretty pictures rather than angry people (Twitter) or political disinformation (Facebook), but the bad side is that it is common to feel inferior because you’re not as good looking as the people in the pictures.  Well, that’s not a problem here, because everyone using KDE or helping out the community is automatically good looking.

It’s being run by me and Niccolò Venerandi (veggero) for now, but if you want to help out, give us a ping.  And if you have pretty pictures to go on there, send them over to us.

Ubuntu Blog: Edge AI in a 5G world – part 2: Why make the cell tower smart?

6 days 2 hours ago

This is part of a blog series on the impact that 5G and GPUs at the edge will have on the roll out of new AI solutions. You can read the other posts here.

Recap

In part 1 we talked about the industrial applications and benefits that 5G and fast compute at the edge will bring to AI products. In this part we will go deeper into how you can benefit from this new opportunity.

Photo by NASA

Embedded compute vs Cost

Decades of Moore’s Law have given us smartphones at a price we’re willing to pay, but IoT devices need to be much cheaper than that. Adding today’s fastest CPUs or GPUs to IoT devices costs a significant amount, which puts a hard limit on what the market is currently willing to buy at scale.

The IoT devices that are currently on the market are usually underpowered and have limited connectivity. With 5G connectivity and shared compute resources at the edge, these constrained devices will soon be able to do much more.

For instance, adding a GPU to each IoT device for the purposes of AI model inference would mean a significant increase in the hardware bill of materials. This cost would be passed on to the consumer, and the higher price would drastically shrink the target audience. Instead, 5G allows heavy computation to be offloaded to nearby shared GPUs, returning a response with minimal latency.

We will dive into this approach in the next section.

AI training & ML operations

Creating a new AI product has two engineering aspects to it, namely:

  1. Model training and
  2. Inference

Model training refers to the machine learning that is usually done with ‘labelled data’ or simulations. This has big data and compute requirements.

Once the model has been trained, the implementation and operation of inference is where much of the complexity appears. This is where we will focus most in this post, and in particular on real-time AI solutions.

Throughout this blog series we will keep these two in mind, given that today’s input data needs to be kept so that it can be used as tomorrow’s training data. 

To illustrate this further, in the next blog we will do a gap analysis of the technical requirements for model training and AI operations, as well as the new techniques available to meet them.


Ubuntu MATE: Ubuntu MATE 20.04 Release Notes

6 days 12 hours ago

We are preparing Ubuntu MATE 20.04 (Focal Fossa) for distribution on April 23rd, 2020. With this Beta pre-release, you can see what we are trying out in preparation for our next (stable) version.

Ubuntu MATE 20.04 - Welcome now offers a few buckets of paint!

Ubuntu Testing Week

The Ubuntu MATE team is proud to be a part of Ubuntu Testing Week, which runs from today (2 April) until 8 April, along with the other flavours of the Ubuntu family. Please join us in testing the beta of Ubuntu MATE 20.04 (Focal Fossa), as well as sharing the :green_heart: by testing out some of the other flavours.

Unsure of how to test? Alan Pope, the co-founder of Ubuntu MATE, has made an outstanding video showing how easy it is to do.

If you need some help, join us in our beta testing thread. Did you find a :bug:? We’ve got a helpful guide on how to report it.

There are many ways of testing: a spare machine, a secondary hard drive, a live USB, or a VM (Virtual Machine). If you are planning on using a VM, you may be interested in trying quickemu, which allows you to easily manage QEMU VMs with a shell script.

We hope you will join in and help us make Ubuntu MATE 20.04 and all of its family a success :tada:

What works?

People tell us that Ubuntu MATE is stable. You may, or may not, agree.

Ubuntu MATE Beta Releases are NOT recommended for:

  • Regular users who are not aware of pre-release issues
  • Anyone who requires a stable system
  • Anyone uncomfortable running a system that can often be broken
  • Anyone in a production environment with data or workflows that need reliability

Ubuntu MATE Beta Releases are recommended for:

  • Regular users who want to help us test by finding, reporting, and/or fixing bugs
  • Ubuntu MATE, MATE, and GTK+ developers
What changed since Ubuntu MATE 18.04 LTS and 19.10?

Those of you who follow the desktop Linux news will know that upstream MATE Desktop recently released version 1.24.

Ubuntu MATE 20.04 is shipping with MATE Desktop 1.24.

Thus, all of the improvements in MATE Desktop 1.24 will be present in Ubuntu MATE 20.04.

Since the last LTS we worked on the following:

  • Added experimental ZFS :file_folder: install option.
  • Fixed rendering window controls on HiDPI :mag: displays.
  • Fixed irregular icon sizes :straight_ruler: in MATE Control Center and made them render nicely on HiDPI displays.
  • Fixed unresponsive Caja :file_folder: extensions.
  • Fixed mate-power-manager :electric_plug: to use upower-glib get_devices2().
  • Fixed unresponsive Pluma :notebook: plugins.
  • Fixed a crasher :bomb: in MATE Dock Applet due to an Attribute error in adjust_minimise_pos().
  • Fixed auto-start errors in mate-session-manager.
  • Gave Ubuntu MATE Welcome a fresh coat of :paintbrush:.
  • Updated the Ubuntu MATE Guide :question:
  • Updated the Ubiquity Slideshow :performing_arts:
Firmware updater

We’ve added a GTK front end for fwupd. This application can:

  • Upgrade, Downgrade, & Reinstall firmware on devices supported by fwupd.
  • Unlock locked fwupd devices
  • Verify firmware on supported devices
  • Display all releases for a fwupd device

Ubuntu MATE 20.04 - Features an LVFS compatible Firmware management utility

Window Manager improvements

Marco is the Window Manager for MATE Desktop and in Ubuntu MATE 20.04 it brings a number of new features and fixes.

XPresent support is properly fixed, which means that screen tearing is now a thing of the past, and invisible window corners are finally here! Invisible window corners mean that windows can be easily resized :straight_ruler: without having to precisely grab the window corners. HiDPI rendering improvements fix a number of rendering problems that were present in various themes and components. Most notably, window controls are now HiDPI aware.

  • Magnus (see below) provides screen magnification
  • Marco supports invisible windows borders
  • Marco has improved Alt+Tab behaviour
  • Marco is free from screen tearing
  • Marco frame performance when gaming is further improved

Minimized Application Preview

Minimized applications in the window list now present a thumbnail preview.

Alt+Tab navigation makes it possible to traverse the application switcher via keyboard and mouse: Alt + Tab.

The Workspace Switcher allows you to switch between workspaces using the keyboard and mouse: Alt + Tab + Ctrl.

Compiz and Compton have been removed from the default Ubuntu MATE install. The fundamental reasons for including them no longer exist.

Only having one window manager to target means we can promptly deliver new features and minimise development effort. Which brings us to…

New Key-bindings

The key-bindings for window tiling have only worked on full keyboards :keyboard: with a 10-key pad. Few laptops :computer: have a 10-key pad, and not all keyboards have one either. There are also some well-known key-bindings from other platforms that were not recognised in Ubuntu MATE. So, we’ve had a think :think: and have come up with this:

  • Maximise Window: Super + Up
  • Restore Window: Super + Down
  • Tile Window right: Super + Right
  • Tile Window left: Super + Left
  • Center Window: Alt + Super + c
  • Tile Window to upper right corner: Alt + Super + Right
  • Tile Window to upper left corner: Alt + Super + Left
  • Tile Window to lower right corner: Shift + Alt + Super + Right
  • Tile Window to lower left corner: Shift + Alt + Super + Left
  • Shade Window: Control + Alt + s

It is now possible to tile a window to all screen quadrants :triangular_ruler: using any keyboard form factor.

We updated the application launcher key-bindings; some of these have existed in Ubuntu MATE for a while:

  • Cycle external displays: Super + P
  • Lock Screen: Super + L
  • Screenshot a rectangle: Shift + PrintScr
  • Open File Manager: Super + e
  • Open Terminal: Super + T
  • Open Control Center: Super + I
  • Open Search: Super + S
  • Open Task Manager: Control + Shift + Escape
  • Open System Information: Super + Pause

The key-bindings complement existing, well-established alternatives. So if Ctrl + Alt + T (Terminal) and Ctrl + Alt + L (Lock Screen) are ingrained in your muscle :muscle: memory 🧠, they are still available too. You can find all the keyboard shortcuts documented in the Getting Started section of Ubuntu MATE Welcome.

Brisk Menu

Brisk Menu is under the Solus GitHub organisation, but it’s been a couple of years since it had a new release. The Solus Project gave me administrative access :trident: to the Brisk Menu repo and I’ve made a new release. Thanks to the efforts of a couple of Ubuntu MATE contributors, several bug :bug: fixes have landed too, which include resolving frequent crashes in Brisk Menu, preventing a scrollbar from always appearing in the category column of the menu, and silencing the sounds that fired as you rolled over menu entries.

MATE Panel

MATE Panel has had a long-standing bug fixed that caused it to crash :boom: when the panel was reset or replaced. This was most noticeable when switching panel layouts via MATE Tweak and could result in the panel layout being left incomplete or entirely absent. This bug is now fixed! MATE Tweak has been updated to neatly integrate with the fixed MATE Panel behaviour, so layout switching is now 100% reliable.

Indicators

A bug which resulted in oversized icons in indicators is finally resolved.

Before :poop: After :heart_eyes:

However, it turned out some of the bugs were due to the icons :art: themselves. Over :100: icons have been refactored :paintbrush:️ to correct their resolutions or aspect ratio; as a result the panel and indicators both scale correctly.

A race condition that could result in two network status icons being displayed is fixed, and when connected via VPN, lock icons are now overlayed on the Network Indicator. The battery :battery: indicator is improved and now has a larger charging symbol while charging.

We’ve added the Date/Time Indicator and integrated it with MATE Desktop; it now replaces the MATE clock applet, which corrects the placement of the clock and session indicators.

We’ve finally addressed a long-standing issue which has been around since Ubuntu MATE 14.10 🕸️: some of the monochrome symbolic icons used in the indicators were also used in applications. This presented a couple of issues:

  • In certain cases, you couldn’t easily see the icons against the window base colour.
  • The mix of monochrome and full colour icons in applications looked inconsistent.

This issue is now resolved; monochrome symbolic icons are only used for indicators and full colour icons are used in the Control Center, Sound Preferences, Bluetooth, OSD, etc.

MATE Window Applets

MATE Window Applets have received a number of bug fixes and new features from a community contributor. Window control icons now dynamically load from the currently selected theme, rather than requiring manual user configuration. A number of bugs (including significant memory leaks) have also been resolved.

Notification Center

Ubuntu MATE 20.04 includes a new Indicator that provides a “notification center” :bell: We worked with the upstream developer to add new features to indicator-notifications and integrate it with MATE Notifications Daemon.

We now have a notification center that also offers a “do not disturb” :red_circle: feature. When do not disturb is enabled, notifications will be muted and captured in the notification center for review. It’s also possible to blacklist some notifications, so they are never stored by the notification center. I’ve created an icon theme for the notification center so it fits the look and feel of the default Ubuntu MATE theme. Notification hints are also fixed so any notifications supplying additional media, such as sounds or icons, now work.

Evolution replaces Thunderbird

The Ubuntu MATE development team discussed the pros and cons of switching the default mail :email: client in Ubuntu MATE to Evolution. Here is a summary of our assessment:

  • Thunderbird does not integrate as well with the desktop.
    • For example, theme integration, font integration, compatibility with HUD (which is increasingly difficult to support in Thunderbird), notifications with action buttons, locale and spell checking.
  • Evolution integrates well with MATE Desktop given that both use GTK3.
  • Evolution includes interoperability with LibreOffice, for which Ubuntu MATE is already shipping the required components.
  • Evolution has superior integration with Google Mail and Exchange, including calendar, contacts, tasks, and memos.

Indicator Date/Time also integrates with Evolution. It is fully functional, including all the features of creating new events or opening upcoming events from the indicator. Clicking on an individual day in the month displays the events for that day, etc.

For the many people who use web-mail exclusively, this change will have no impact, but for those who use desktop mail we feel these productivity :chart_with_upwards_trend: improvements are significant.

For those of you who love :two_hearts: Thunderbird and wish to continue using it, we will continue to offer Thunderbird in the Software Boutique for a one-click install. Likewise, Evolution is now in the Software Boutique, and can be installed/removed with one click.

Magnus

Most desktop environments lack a screen magnifier, which is an essential application for visually impaired :eyeglasses: computer users, as well as for accurate graphical design or detail work. One of the reasons we shipped Compiz in Ubuntu MATE is because it has an excellent screen magnifier and was our solution for people who need magnification :mag:

Martin and Stuart Langridge collaborated to create Magnus, a very simple desktop magnifier that shows the area around the mouse pointer in a separate window, magnified two, three, four, or five times. Magnus is now shipped :ship: by default in Ubuntu MATE 20.04.

Ubuntu MATE Themes

Dozens of theme-related bugs have been fixed. The Ubuntu MATE themes have been added to the gtk-common-themes used by snaps, so snapped applications are now themed correctly for Ubuntu MATE users. This change is already available all the way back to Ubuntu MATE 16.04.

The most noticeable resolved theme issues are sensibly sized expanders in tree views (they were so tiny) that are now easily clickable, window controls that are correctly proportioned on CSD windows, and a splash of Chelsea Cucumber :bug: added to the Ubuntu MATE logo on the menu. Everything the QA team highlighted has been fixed :hammer:

MATE Tweak and Ubuntu MATE Welcome

MATE Tweak now preserves user preferences when switching between custom layouts thanks to a community contribution.

If you’re familiar with MATE Tweak, you’ll know it can switch panel layouts to somewhat mimic other platforms and distros 🐧 We have now integrated a graphical layout switcher into Ubuntu MATE Welcome to better promote the feature and make it more accessible. We have actually had this feature since 18.04, but the bugs in MATE Panel I mentioned earlier meant it didn’t work reliably. With all the associated panel bugs fixed :wrench: we now have this:

NVIDIA drivers

If you’ve been following the news surrounding Ubuntu, you’ll know that Ubuntu is now shipping :ship: the NVIDIA proprietary drivers on the ISO images. Anyone selecting the additional 3rd-party hardware drivers during installation will have the drivers available even without an Internet connection, for offline scenarios.

Post-install, Ubuntu MATE users with computers that support hybrid graphics will see the MATE Optimus hybrid graphics applet displaying the NVIDIA logo.

We have given MATE Optimus an update. MATE Optimus adds support for NVIDIA On-Demand and will now prompt users to log out when switching the GPU’s profile. MATE, XFCE, Budgie, Cinnamon, GNOME, KDE and LXQt are all supported. Wrappers, called offload-glx & offload-vulkan, can be used to easily offload games/apps to the PRIME renderer. I’m also delighted to see Ubuntu Budgie 20.04 are shipping MATE Optimus too!

The NVIDIA drivers will now receive updates via the official Ubuntu software repository, so there is no need to add a PPA to get updates. More importantly, the NVIDIA drivers are signed (which is not supported for drivers distributed via PPA), so you can keep Secure Boot enabled.

Remote Desktop Awareness

Our MATE Desktop 1.24 packages ship support for Remote Desktop Awareness (RDA). RDA makes MATE Desktop more aware of its execution context, so it behaves differently when run inside a remote desktop session compared to when running on local hardware. Different remote technology solutions support different features and they can now be queried from within MATE components. The inclusion of RDA offers the option to suspend your remote connection, supports folder sharing in Caja and MIME type bindings for SSHFS shares, and allows session suspension via the MATE screensaver.

ZFS on root

Support for ZFS as the root filesystem is added as an experimental feature in 20.04. The ZFS file system and partitioning layout is handled automatically directly via the installer.

You can read more details on Didier Roche’s blogs:

Download Ubuntu MATE 20.04 Beta

Notice anything different? We’ve overhauled the website to make things easier to discover!

Known Issues

Here are the known issues.

Ubuntu MATE

Ubuntu family issues

This is our known list of bugs that affect all flavours.

You’ll also want to check the Ubuntu MATE bug tracker to see what has already been reported. These issues will be addressed in due course.

Feedback

Is there anything you can help with or want to be involved in? Maybe you just want to discuss your experiences or ask the maintainers some questions. Please come and talk to us.

Command Line