Planet Ubuntu

Ubuntu Blog: Ubuntu Server development summary – 16 October 2019

3 hours 52 minutes ago
Hello Ubuntu Server

The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list or visit the Ubuntu Server discourse hub for more discussion.

Spotlight: Ubuntu 19.10 (Eoan Ermine) release imminent

The final testing and certification of Ubuntu 19.10 (Eoan Ermine) are nearly complete! Check out the release notes for a preview of what will be available shortly.

cloud-init
  • Publish cloud-init update 19.2-36-g059d049c-0ubuntu3 to Ubuntu Eoan
  • Publish cloud-init SRU to Xenial, Bionic, Disco: 19.2-36-g059d049c-0ubuntu2
  • net: handle openstack dhcpv6-stateless configuration [Harald Jensås] (LP: #1847517)
  • Add .venv/ to .gitignore [Dominic Schlegel]
  • Small typo fixes in code comments. [Dominic Schlegel]
  • cloud_test/lxd: Retry container delete a few times
  • Add Support for e24cloud to Ec2 datasource. (LP: #1696476)
  • Add RbxCloud datasource [Adam Dobrawy]
  • get_interfaces: don’t exclude bridge and bond members (LP: #1846535)
  • Add support for Arch Linux in render-cloudcfg [Conrad Hoffmann]
  • util: json.dumps on python 2.7 will handle UnicodeDecodeError on binary (LP: #1801364)
  • debian/ubuntu: add missing word to netplan/ENI header (LP: #1845669)
  • ovf: do not generate random instance-id for IMC customization path
  • sysconfig: only write resolv.conf if network_state has DNS values (LP: #1843634)
  • sysconfig: use distro variant to check if available (LP: #1843584)
  • systemd/cloud-init.service.tmpl: start after wicked.service [Robert Schweikert]
  • docs: fix zstack documentation lints
  • analyze/show: remove trailing space in output
  • Add missing space in warning: “not avalid seed” [Brian Candler]
  • pylintrc: add ‘enter_context’ to generated-members list
  • Add datasource for ZStack platform. [Shixin Ruan] (LP: #1841181)
  • docs: organize TOC and update summary of project [Joshua Powers]
  • tools: make clean now cleans the dev directory, not the system
  • docs: create cli specific page [Joshua Powers]
  • docs: added output examples to analyze.rst [Joshua Powers]
  • docs: doc8 fixes for instancedata page [Joshua Powers]
  • docs: clean up formatting, organize boot page [Joshua Powers]
  • net: add is_master check for filtering device list (LP: #1844191)
  • docs: more complete list of availability [Joshua Powers]
  • docs: start FAQ page [Joshua Powers]
  • docs: cleanup output & order of datasource page [Joshua Powers]
  • Brightbox: restrict detection to require full domain match .brightbox.com
  • VMWware: add option into VMTools config to enable/disable custom script. [Xiaofeng Wang]
  • net,Oracle: Add support for netfailover detection
  • atomic_helper: add DEBUG logging to write_file (LP: #1843276)
  • doc: document doc, create makefile and tox target [Joshua Powers]
  • .gitignore: ignore files produced by package builds
  • docs: fix whitespace, spelling, and line length [Joshua Powers]
  • docs: remove unnecessary file in doc directory [Joshua Powers]
  • Oracle: Render secondary vnic IP and MTU values only
  • exoscale: fix sysconfig cloud_config_modules overrides (LP: #1841454)
  • net/cmdline: refactor to allow multiple initramfs network config sources
  • ubuntu-drivers: call db_x_loadtemplatefile to accept NVIDIA EULA (LP: #1840080)
  • Add missing #cloud-config comment on first example in documentation. [Florian Müller]
  • ubuntu-drivers: emit latelink=true debconf to accept nvidia eula (LP: #1840080)
  • DataSourceOracle: prefer DS network config over initramfs
  • format.rst: add text/jinja2 to list of content types (+ cleanups)
  • Add GitHub pull request template to point people at hacking doc
  • cloudinit/distros/parsers/sys_conf: add docstring to SysConf
  • pyflakes: remove unused variable [Joshua Powers]
  • Azure: Record boot timestamps, system information, and diagnostic events [Anh Vo]
  • DataSourceOracle: configure secondary NICs on Virtual Machines
  • distros: fix confusing variable names
  • azure/net: generate_fallback_nic emits network v2 config instead of v1
  • Add support for publishing host keys to GCE guest attributes [Rick Wright]
  • New data source for the Exoscale.com cloud platform [Chris Glass]
  • doc: remove intersphinx extension
  • cc_set_passwords: rewrite documentation (LP: #1838794)
curtin
  • storage_config: interpret value, not presence, of DM_MULTIPATH_DEVICE_PATH [Michael Hudson-Doyle]
  • vmtest: Add skip_by_date for test_ip_output on eoan + vlans
  • block-schema: update raid schema for preserve and metadata
  • dasd: update partition table value to ‘vtoc’ (LP: #1847073)
  • clear-holders: increase the level for devices with holders by one (LP: #1844543)
  • tests: mock timestamp used in collect-log file creation (LP: #1847138)
  • ChrootableTarget: mount /run to resolve lvm/mdadm issues which require it.
  • block-discover: handle multipath disks (LP: #1839915)
  • Handle partial raid on partitions (LP: #1835091)
  • install: export zpools if present in the storage-config (LP: #1838278)
  • block-schema: allow ‘mac’ as partition table type (LP: #1845611)
  • jenkins-runner: disable the lockfile timeout by default [Paride Legovini]
  • curthooks: use correct grub-efi package name on i386 (LP: #1845914)
  • vmtest-sync-images: remove unused imports [Paride Legovini]
  • vmtests: use file locking on the images [Paride Legovini]
  • vmtest: enable arm64 [Paride Legovini]
  • Make the vmtests/test_basic test suite run on ppc64el [Paride Legovini]
  • vmtests: separate arch and target_arch in tests [Paride Legovini]
  • vmtests: new decorator: skip_if_arch [Paride Legovini]
  • vmtests: increase the VM memory for Bionic
  • vmtests: Skip Eoan ZFS Root tests until bug fix is complete
  • Merge branch ‘fix_merge_conflicts’
  • d/control: update Depends for new probert package names [Dimitri John Ledkov]
  • util: add support for ‘tbz’, ‘txz’ tar format types to sanitize_source (LP: #1843266)
  • net: ensure eni helper tools install if given netplan config (LP: #1834751)
  • d/control: update Depends for new probert package names [Dimitri John Ledkov]
  • vmtest: fix typo in EoanBcacheBasic test name
  • storage schema: Update nvme wwn regex to allow for nvme wwid format (LP: #1841321)
  • Allow EUI-64 formatted WWNs for disks and accept NVMe partition naming [Reed Slaby] (LP: #1840524)
  • Makefile: split Python 2 and Python 3 unittest targets apart
  • Switch to the new btrfs-progs package name, with btrfs-tools fallback. [Dimitri John Ledkov]
Contact the Ubuntu Server team
Bug Work and Triage
Ubuntu Server Packages

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page. For a full list of recent merges with change logs please see the Ubuntu Server report.

Proposed Uploads to the Supported Releases

Please consider testing the following by enabling proposed, checking packages for update regressions, and making sure to mark affected bugs verified as fixed.

Total: 7

Uploads Released to the Supported Releases

Total: 82

Uploads to the Development Release

Total: 142

Ubuntu Blog: Ansible vs Terraform vs Juju: Fight or cooperation?

18 hours 7 minutes ago

Ansible vs Terraform vs Juju vs Chef vs SaltStack vs Puppet vs CloudFormation – there are so many tools available out there. What are these tools? Do I need all of them? Are they fighting with each other or cooperating?

The answer is not really straightforward. It usually depends on your needs and the particular use case. While some of these tools (Ansible, Chef, SaltStack, Puppet) are pure configuration management solutions, the others (Juju, Terraform, CloudFormation) focus more on services orchestration. For the purpose of this blog, we’re going to focus on the Ansible vs Terraform vs Juju comparison: the three major players that have dominated the market.

Ansible

Ansible is a configuration management tool, currently maintained by Red Hat Inc. Although the core project is open-source, some commercial extensions, such as Ansible Tower, are available too. By supporting a variety of modules, Ansible can be used to manage both Unix-like and Windows hosts. Its architecture is serverless and agentless. Instead of using proprietary communication protocols, Ansible relies on SSH or remote PowerShell sessions to perform configuration tasks.

The tool implements an imperative DevOps paradigm. This means that Ansible users are responsible for defining all of the steps required to achieve their desired goal. This includes writing instructions on how to install applications, preparing templates of configuration files, etc. All these steps are usually implemented in the form of so-called playbooks; however, users can execute ad hoc commands too. Once written, the playbooks can be used to automate configuration tasks across multiple machines in various environments.
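As a rough illustration of that imperative style (the inventory file, group name and playbook name below are hypothetical examples, not taken from any particular project), an ad hoc command and a playbook run look like this:

$ ansible web -i inventory.ini --become -m apt -a "name=nginx state=present"   # ad hoc: ensure nginx is present on the "web" group
$ ansible-playbook -i inventory.ini site.yml                                   # run a playbook that captures the same steps in a reusable, ordered form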

Although perfectly suited for traditional configuration management, Ansible cannot really orchestrate services. It was simply designed for different purposes, with automation at its core. Moreover, some of its modules are cloud-specific, which makes a potential migration from one platform to another difficult. Finally, due to its imperative nature, Ansible does not scale well in large environments consisting of various interconnected applications.

Terraform

In turn, Terraform is an open-source IaC (Infrastructure-as-Code) solution developed by HashiCorp. It allows users to provision and manage cloud, infrastructure, and service resources using a simple, human-readable configuration language called HCL (HashiCorp Configuration Language). The resources are delivered by so-called providers. At the moment Terraform supports over 200 providers, including public clouds, private clouds and various SaaS (Software-as-a-Service) providers, such as DNS, MySQL or Vault.

Terraform uses a declarative DevOps paradigm, which means that instead of defining the exact steps to be executed, the ultimate state is defined. This is huge progress compared to traditional configuration management tools. However, Terraform’s declarative approach is limited to providers only. The applications being deployed still have to be installed and configured using traditional scripts and tools. Of course, pre-built images can be used too when deploying applications in cloud environments. Those can later be customized according to the users’ requirements.

In addition to the initial deployment, Terraform can also be used to orchestrate deployed workloads. This functionality is provided by its execution plans and resource graphs. Thanks to execution plans, users can see the exact steps to be performed and the order in which they will be executed. In turn, resource graphs allow users to visualise those plans. Again, this is much more than what Ansible can do.
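As a sketch of that workflow (assuming a directory of .tf files, and Graphviz installed for the graph rendering), the typical commands are:

$ terraform init      # download the providers referenced by the configuration
$ terraform plan      # compute the execution plan: what will change and in which order
$ terraform apply     # execute the plan to reach the declared state
$ terraform graph | dot -Tsvg > graph.svg   # render the resource graph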

Juju

Contrary to both Ansible and Terraform, Juju is an application modelling tool, developed and maintained by Canonical. You can use it to model and automate deployments of even very complex environments consisting of various interconnected applications. Examples of such environments include OpenStack, Kubernetes or Ceph clusters. Apart from the initial deployment, you can also use Juju to orchestrate the deployed services. Thanks to Juju you can back up, upgrade or scale out your applications as easily as executing a single command.

Like Terraform, Juju uses a declarative approach, but it takes it beyond the providers, up to the application layer. You not only declare the number of machines to be deployed or the number of application units, but also configuration options for the deployed applications, the relations between them, etc. Juju takes care of the rest of the job. This allows you to focus on shaping your application instead of struggling with the exact routines and recipes for deploying it. Forget the “How?” and focus on the “What?”.
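To give a flavour of that declarative style, here is a minimal sketch using the mysql and wordpress charms (the configuration option shown is purely illustrative, not a real charm option):

$ juju deploy mysql
$ juju deploy wordpress
$ juju add-relation wordpress mysql        # declare the relation; Juju wires the applications together
$ juju config wordpress some-option=value  # hypothetical option, showing how configuration is declared
$ juju add-unit wordpress -n 2             # scale out by declaring more units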

The real power of Juju lies in charms – collections of scripts and metadata which contain the distilled knowledge of experts from Canonical and other companies. Charms contain all the logic required to install, configure, interconnect and operate applications. Canonical maintains a Charm Store with over 400 charms, but you can also write your own charms. This is because the whole framework and ecosystem are fully open source.

While Juju’s role is to deploy and orchestrate applications, like Terraform it relies on a variety of providers to spin up machines (bare metal, VMs or containers) for hosting those applications. The supported providers include leading public clouds (AWS, Google Cloud, Azure, etc.) and various on-premise providers: LXD, MAAS, VMware vSphere, OpenStack and Kubernetes. In the very rare case that your cloud environment is not natively supported by Juju, you can use the manual provider to let Juju deploy applications on top of your manually provisioned machines.
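For instance (a sketch with placeholder names and addresses), bootstrapping a controller on a public cloud and enlisting an already-provisioned machine via the manual provider look like this:

$ juju bootstrap aws my-controller        # spin up a controller on a supported cloud
$ juju add-machine ssh:ubuntu@10.0.0.5    # manual provider: enlist an existing machine over SSH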

Ansible vs Terraform vs Juju

Now, as we’ve arrived at the last section of this blog, could we somehow compare Ansible vs Terraform vs Juju? The answer is short – we cannot. This is because all of them were designed for different purposes and with a different focus in mind. It is fair to say that in some way they form an evolution path of lifecycle management frameworks. It is really hard to perform an Ansible vs Terraform vs Juju comparison, then, as each of them is fundamentally different.

Thus, if we cannot compare them, let’s maybe get back to the original questions and try to answer them instead.

Do I need all of those tools?

Well, it really depends on your use case, so let’s try to sum up what these tools are for. Ansible is a configuration management tool and fits very well wherever traditional automation is required. On the other hand, Terraform focuses more on infrastructure provisioning, assuming that applications will be delivered in the form of pre-built images. Finally, Juju takes a completely different approach by using charms for application deployment and operations.

Are they fighting with each other or cooperating?

There are definitely areas in which they cooperate. For example, Juju charms can use Ansible playbooks to maintain configuration files. Or you can use Juju-deployed applications (e.g. OpenStack) as a provider for Terraform. As data centres are becoming more and more complex, there’s definitely space for all of them. This is because all of them are great at what they do and what they were designed for.

I want Juju, what next?

If you want to evolve your DevOps organisation and benefit from a model-driven, declarative approach to application deployment and operations, Juju is the answer. Simply visit the Juju website, watch the “Introduction To Juju: Automating Cloud Operations” webinar or contact us directly. Canonical’s DevOps experts are waiting to help you move forward with the transformation of your organisation.

Ubuntu Blog: Grace Hopper Conference 2019

1 day 6 hours ago

We are so excited about what just happened that we felt we should tell everyone about it!

A group of 24 of us at Canonical from various teams including Sales, HR and Engineering attended the Grace Hopper Celebration in Orlando, Florida. This year, it was an epic gathering of more than 26,000 people from all over the globe interested in tech. Despite its start as women’s work, the tech industry has gained a reputation for being dominated by and mostly suited to men. If anything, this only made the Grace Hopper conference feel more impactful, especially knowing that in its very first edition in 1994, only 500 women were present at the event. The Grace Hopper Conference was an awesome celebration of women; diverse, multi-talented, and deeply skilled!

Both women and men, mostly students, interested in everything from security to machine learning came by the Canonical booth to hear about Ubuntu. We brought along an Orange box so we could demo MAAS, OpenStack, and other incredible technologies happening on Ubuntu at Canonical.

We rotated between attending informative and inspiring sessions; exploring an exhibition hall pulsating with energy and booths as far as the eye could see; and discussing Canonical offerings and job opportunities at our Canonical booth.

There were so many best parts to the week. We discussed various technologies with others in the industry, scoped out exceptional talent for Canonical job opportunities, visited various booths and found out who uses Ubuntu and what for. We also gave out Ubuntu trinkets and collected bags of trinkets from others. Perhaps our favourite was just hanging out and getting to know fellow Canonical’ers on the various teams and what they worked on.

All of us had the opportunity to share what we do and what we love about working for Canonical, the company behind Ubuntu. It was interesting for us that most of the people we met did not know the name ‘Canonical’, but knew and worked regularly with Ubuntu. Someone even said: “Ubuntu is the reason I chose this career!” and was very excited to talk to the people behind it.

Meeting that many smart women in tech made us realise that we are not alone. Every one of us has the capacity to contribute and drive change. #WeWill make a difference. See you next year at GHC 2020!

Costales: Ubucon Europe 2019 | Sintra edition

1 day 8 hours ago
And a new Ubucon Europe begins! This time in Sintra, Portugal. Welcome!

I arrived the day before, just in time for a welcome dinner held at an unusual business incubator: Chalet 12. There, about 25 of us shared a lovely evening over a dinner cooked by the organisers themselves.
Marco | Costales | Tiago | Olive
In fact, the Ubuntu-PT organisation had been running activities and visits all week for community members arriving early, a really nice touch.
Day 1
I was one of the first to arrive at the Olga Cadaval Cultural Centre, a building divided into two main wings with large open spaces. Besides the talks, there were UBports and Libretrend stands, and even free coffee throughout the day. At the UBports stand I was able to try the Pinebook with Ubuntu Touch.
Pinebook

After picking up my badge and a welcome pack (t-shirt, pins, stickers...), Tiago Carrondo opened this new edition with a presentation in the auditorium.
Opening talk
Right after, Tiago himself announced Ubuntu's 15th birthday, something I hadn't realised and which was really cool, reviewing the most important moments of Ubuntu's short but intense life.
15th birthday talk

I did my bit with two talks. In the first one, in the morning, surrounded by art (paintings by Nadir Afonso), I analysed the dangers to our online privacy and how we can improve it.
Privacy on the Net

As soon as I finished my talk, I went to catch half of Rudy's talk on "Events in your Local Community Team", reviewing the achievements of Ubuntu Paris, with its Ubuntu Party and WebCafe.
Events in your Local Community Team
At 13:15 a few of us went for lunch at a restaurant near the station.
Lunch

I was giving a two-hour workshop at 3 pm (or so I thought) on how to develop a native app for Ubuntu Touch. Mateo Salta and I left a bit early to arrive on time, but Tiago was looking for me: the session actually started at 14:30 and people had been waiting since then. How embarrassing; from here I apologise to the organisation and to the attendees of my talk for that delay. In the workshop I showed how to build a flashlight in QML for Ubuntu Touch, which amazed the attendees with its simplicity and how few lines of code it needed.
Creating an Ubuntu Phone app

We finished the day by going to a brewery to warm up
Saloon 2

And afterwards we all had dinner together at the restaurant O Tunel, where we enjoyed traditional dishes that were exquisite. These moments are the best (in my opinion), because that is when community is really built and lived.
Dinner
Day 2
A long day ahead, with 4 simultaneous talks.
I opted for Jesús Escolar and his talk Applied Security for Containers, one of those talks where you realise the dangers surrounding every platform and service.
Applied Security for Containers

Afterwards I met Vadim, a professional web developer who showed us his workflow and little tricks to save time while developing.
Vadim's scripts

After Vadim, Marius Quabeck showed the steps for creating a podcast. I noted down a couple of programs he mentioned for editing the "Ubuntu y otras hierbas" podcast.
Quabeck showing how to create and edit a podcast

Lunch wasn't organised and we all went out together, so it was hard to find a restaurant for so many people.
In the afternoon, Joan CiberSheep opened the talks by showing us the options for creating an Ubuntu Touch application. I was a bit stuck in time with Canonical's old commands and workflow; UBports has evolved phone development with Ubuntu a great deal.
Joan

Finally, Simos showed us the virtues of LXC with his talk Linux Containers LXC/LXD.
Linux Containers

Worth highlighting here is the gifbox that Rudy and Olive set up, a camera that joins a sequence of photographs into a gif, with very funny and unexpected results for everyone who gets photographed.



At dusk, the plan was to meet up at a brewery on the outskirts. After some tapas, the owner showed us the beer-making process in his small cellar.
Explaining how the beer is made

The main course was grilled cod along with a beer tasting. This event was partially funded by an anonymous patron, so a thousand thanks from this humble post.
Brewery

As a finishing touch, Jaime prepared a surprise that thrilled me: a small band of 2 bagpipes and a drum entertained us and got us dancing at a party that lasted until midnight.
Party! :)

Day 3
Today brings a party for Ubuntu's 15th anniversary; we are all eager to see what it will be like :P
We could say that today is the 'UBPortsCON', as there will be plenty of talks about the state of Ubuntu Touch.
The very first one is by Jan Sprinz, reviewing the past, showing us the present and analysing where this interesting project is heading, a project that gives us a free alternative to the almighty Android and iOS.
Jan Sprinz telling the history of Ubuntu Touch

Jan himself also showed us one of UBports' strongholds: the installer that automates installing Ubuntu Touch on our phone and turns it into child's play, as long as it is one of the compatible devices it has been ported to.
After Jan's talk, Rudy called me over to the Ubuntu Europe Federation Board Open Meeting, a federation created precisely to make it easier for organisers to run Ubuntu events like this one.
Closing the morning, Joan CiberSheep explained the usability and design guidelines of Ubuntu Touch.
Usability and design of Ubuntu Touch

This time we had lunch in groups at different restaurants and came back on time for the group photo.
Then the great Martin Wimpress told us the history of snap packaging and Canonical's reasons for creating it.
Martin Wimpress

A very interesting talk was Dario Cavedon's, who linked his passion for running with privacy in an unusual way.
Dario Cavedon

As my last talk I chose Rute Solipa's, who explained the process and difficulties of migrating the Portuguese municipality of Seixal to free software.
Migration of Seixal

In the evening, we went to the same bar, having dinner and celebrating Ubuntu's 15th anniversary to the sound of bagpipes :))
Birthday party
Day 4
Last day of the Ubucon :'( I want more, hehehe.
I chose Michal Kohutek's talk, in which he showed us how to improve educational materials by analysing the reader's eye tracking with sensors.
Michal and Jesús Escolar with eye tracking

Marco Trevisan showed us the Ubuntu desktop's transition to GNOME and what the upcoming LTS release holds for us.
The upcoming Ubuntu 20.04

And to finish, Tiago Carrondo, who opened the first day, closed the event by explaining what it takes to put on an Ubucon, the difficulties of organising this edition, and attendance statistics. It was moving when all the volunteers went up on stage.
The end

For lunch we went in groups to different restaurants; we finished up in a café with coffee and cake.
Lunch

In the afternoon I had planned to stroll around and get to know Sintra a bit better, but with Joan one conversation leads to the next, so the afternoon went by in the same bar where we had dined the previous days. At dinner time more people joined us, and it ended up being one in the morning while we tried to fix the world :)
The last survivors
The summary
Ubucon Europe keeps consolidating year after year. This year's organisation has been very good, with many talks and extra activities.
Sintra has been a good choice: a welcoming city with good infrastructure for hosting an event of this kind.
And it has been one more proof that the best thing about Ubuntu is its community.
See you next year!

It seems there are rumours that next year it will be in Italy... Who knows, hopefully! :)
What remains in memory is having enjoyed a unique event, having learned a little in each of the talks and, especially, seeing again the friends made in previous editions, who are the ones who really make Ubucon Europe so endearing.
Costales

Ubuntu Blog: Design and Web team summary – 11 October 2019

1 day 13 hours ago

This was a fairly busy two weeks for the Web & design team at Canonical. This cycle we had two sprints. The first was a web performance workshop run by the amazing Harry Roberts. It was a whirlwind two days where we learned a lot about networking, browsers, font loading and more. We also spent a day working on implementing a lot of the changes. Hopefully our sites will feel a bit faster.  More updates will be coming over the next few months. The second sprint was for the Brand and Web team, where we looked at where the Canonical and Ubuntu brands need to evolve. Here are some of the highlights of our completed work.

Web squad

Web is the squad that develops and maintains most of the brochure websites across Canonical.

Takeovers and engage pages

This iteration we built two webinars with engage pages and two more case study engage pages.

Deep Tech webinar

We built a new homepage takeover along with an engage page to learn more about the webinar.

Intro to edge computing webinar series

We created a homepage takeover that leads to an engage page with a series of webinars people can watch about computing at the edge.

Yahoo! Japan case study

We posted a new case study about how Canonical works with Yahoo! Japan and their IaaS platform.

Domotz case study

We posted a new case study about how Canonical has helped Domotz with their IoT strategy.

Base

Base is the team that underpins the toolsets and architecture of our projects. They maintain the CI and deployment of all the websites we run.

HTTP/2 and TLS v1.3 for Kubernetes sites

Back in August, a number of vulnerabilities were discovered in HTTP/2, which opened up some DOS possibilities. In response to this, we disabled HTTP/2 for our sites until the vulnerabilities were fixed.

This iteration, the NGINX Ingress controller on our k8s cluster was updated, meaning our sites are now served with the latest version of openresty, which includes all relevant fixes for these earlier vulnerabilities. In response we’ve re-enabled HTTP/2, which was also a strong performance recommendation from Harry during the workshop.

Another recommendation was that we switch to the latest TLS v1.3, which also carries significant performance benefits, so we switched this on for the whole cluster this iteration.
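If you want to verify the result from the outside, here is a quick sketch (assuming a curl build with HTTP/2 support and OpenSSL 1.1.1 or later; ubuntu.com is just an example host):

$ curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://ubuntu.com
$ openssl s_client -connect ubuntu.com:443 -tls1_3 -brief < /dev/null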

IRC bot migrated to our kubernetes cluster

We maintain a Hubot-based IRC bot for alerting us to new pull-requests and releases on our projects. Up until now, this has been hosted externally on Heroku.

This iteration, we added a Dockerfile so it could be built as an image and the configs to host it on Kubernetes. We’ve released it so now our IRC bot is hosted in-house on Kubernetes 🎉.

image-template v1

Our canonicalwebteam.image-template module provides a template function which outputs <img> element markup in a recommended format for performance.

The performance workshop highlighted a number of best practices which we used to improve the module and release v1.0.0.

Request latency metrics in Graylog

Many of our sites (particularly snapcraft.io, jaas.ai, ubuntu.com/blog and certification.ubuntu.com) rely heavily on pulling their data from an API. For these sites, the responsiveness of those APIs is central.

Talisker, our Gunicorn-based WSGI server, can output latency stats for outgoing API requests as either Prometheus metrics or just in logs.

This iteration, we have enhanced our Graylog installation to read these metrics from logs and output beautiful graphs of our API.

MAAS

The MAAS squad develops the UI for the MAAS project.

Our team continues with the work of separating the UI from maas-core. We have very nearly completed taking the settings section to React and are also working on converting the user preferences tab to React.

We are also progressing with the work on network testing. The core functionality is all complete now and we’re ironing out some final details.

As part of the work on representing NUMA topology in MAAS, we completely redesigned the machine summary page, which was implemented this iteration.

We are also experimenting with introducing a white background to MAAS, as well as to the rest of the suite of websites and applications we create. This work is ongoing.

JAAS

The JAAS squad develops the UI for the JAAS store and Juju GUI  projects.

The team continued working on the new JAAS dashboard, moving forward the design with explorations on responsiveness, interactions, navigation, and visuals.

The team also continued working on the Juju website and on the alignment between the CLI commands of Juju, Snap, Charm and Snapcraft. On the CharmHub side, the team explored the home page of the new charmhub.io website, to start defining the content, user experience and navigation of the page.

Snapcraft

The Snapcraft team works closely with the snap store team to develop and maintain the snap store website.

The headline story from the last iteration is the improvement to overall page load times, and specifically the store page. With some code organisation, and the aforementioned image-template module, we’ve managed to drop the initial load time of the store page from an average of ~15s to ~5s or quicker.

Faster Snap browsing for everyone!

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, September 2019

1 day 18 hours ago


Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In September, 212.75 work hours have been dispatched among 12 paid contributors. Their reports are available:

  • Adrian Bunk did nothing (and got no hours assigned), but has been carrying 26h from August to October.
  • Ben Hutchings did 20h (out of 20h assigned).
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 18h (out of 18h assigned).
  • Emilio Pozuelo Monfort did 30h (out of 23.75h assigned and 5.25h from August), thus anticipating 1h from October.
  • Hugo Lefeuvre did nothing (out of 23.75h assigned), thus is carrying over 23.75h for October.
  • Jonas Meurer did 5h (out of 10h assigned and 9.5h from August), thus carrying over 14.5h to October.
  • Markus Koschany did 23.75h (out of 23.75h assigned).
  • Mike Gabriel did 11h (out of 12h assigned + 0.75h remaining), thus carrying over 1.75h to October.
  • Ola Lundqvist did 2h (out of 8h assigned and 8h from August), thus carrying over 14h to October.
  • Roberto C. Sánchez did 16h (out of 16h assigned).
  • Sylvain Beucler did 23.75h (out of 23.75h assigned).
  • Thorsten Alteholz did 23.75h (out of 23.75h assigned).
Evolution of the situation

September was more like a regular month again, though two contributors were not able to dedicate any time to LTS work.

For October we are welcoming Utkarsh Gupta as a new paid contributor. Welcome to the team, Utkarsh!

This month, we’re glad to announce that Cloudways is joining us as a new silver level sponsor! With the reduced involvement of another long term sponsor, we are still at the same funding level (roughly 216 hours sponsored per month).

The security tracker currently lists 32 packages with a known CVE and the dla-needed.txt file has 37 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


The Fridge: Ubuntu Weekly Newsletter Issue 600

2 days 4 hours ago

Welcome to the Ubuntu Weekly Newsletter, Issue 600 for the week of October 6 – 12, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

Ubucon Europe 2019: Ubucon Europe 2019 in local media

3 days 4 hours ago

Remember Marta, our volunteer from the registration booth? She took care of the translation of the article written by Fátima Caçador for SAPO Tek:

Ubucon Europe: What is the Ubuntu community doing in Sintra? Sharing technical knowledge and tightening connections

News about the new Ubuntu release, the exploration of several platforms and many “how to” sessions rule the 4-day agenda, where open source and open technologies are in the air.

The Olga Cadaval Cultural Centre in Sintra is the main stage of a busy agenda filled with several talks and more technical sessions, but at Ubucon Europe there’s also room for networking and cultural visits, a curious fusion between spaces full of history, like the Pena Palace or the Quinta da Regaleira, and one of the youngest “players” in the world of software.

For 4 days, the international Ubuntu community gathers in Sintra for an event open to everyone, where open source principles and open technology dominate. The Ubucon Europe conference began Thursday, October 10th, and extends until Sunday, October 13th, keeping an open-doors policy for everyone who wants to participate.

After all, what is the importance of Ubucon? The number of participants, which should be around 150, doesn’t tell the whole story of what you can learn during these days, as SAPO TEK had the opportunity to see this morning.

Organised by the Ubuntu Portugal community, with the National Association for Open Software, the Ubuntu Europe Federation and the Sintra Municipality, the conference brings to Portugal some of the biggest open source specialists and shows that Ubuntu is indeed alive, even if not yet known by most people, and still far from the “world domination” aspired to by some.

15 years of Ubuntu

This year marks Ubuntu’s 15th birthday since its creation in 2004 by the South African Mark Shuttleworth, who gathered a team of Debian developers and founded Canonical with the purpose of developing a Linux distribution that is easy to use. He called it Ubuntu, a word that comes from the Zulu and Xhosa languages meaning “I am because we are”, which shows its social dimension.

The millionaire Mark Shuttleworth declared at the time “my motivation and goal is to find a way of creating a global operating system for desktops which is free in every way, but also sustainable and with a quality comparable to any other that money can buy”.

And in the last 15 years Ubuntu hasn’t stopped growing, following trends and moving from the desktop and servers to the cloud, the IoT and even phones. Canonical ended up withdrawing from the latter, leaving the development in UBports’ hands.

“Ubuntu has never been better”, states Tiago Carrondo, head of the Ubuntu Portugal community, explaining that cloud usage is growing every month and the same is happening on the desktop. “The community has proved to be alive and participative”, and Ubucon is an example of that capacity to deliver and to be involved in projects.

A new version of Ubuntu is going to be launched in two weeks (October 19th) and in April next year it’s time for Ubuntu 20.04, the new LTS version, which is generating expectations and is the focus of several talks during Ubucon.

An operating system not just for ‘geeks’

But is this a subject just for “geeks” who don’t mind getting their hands dirty and messing with code to adapt the operating system to their needs? Gustavo Homem, CTO of Ângulo Sólido, assures that Ubuntu is increasingly being used by companies and that on clouds like Azure, AWS and DigitalOcean it is among the most used operating systems, highlighting its ease of use, flexibility and security.

Ângulo Sólido uses Ubuntu internally and with its clients, from desktops to routers and cloud solutions, and during Ubucon it presented the most and least expected uses for Ubuntu, including some hacks with mixing desks.

It’s in the cloud where Ubuntu has grown the most, thanks to the freedom of the operating system, because on desktops and laptops it depends on manufacturers’ willingness to sell devices with a pre-installed operating system, or without any, leaving room for installing Ubuntu.

However, even if it is easy to use, increasingly ready to connect to all peripherals and supports most of the software on the market, Ubuntu is far from being recognised by the majority of computer users, so its use remains reserved to a restricted group of people with more technical training and knowledge.

On cell phones, where in 2014 there was a movement to create an operating system that could be an alternative to Android and iOS, the abandonment of the project by Canonical didn’t help create a mass movement involving manufacturers. The UBports community continues developing the concept and the code, and during Ubucon showed some news and developments with Fairphone and Pine64, but it’s still far from becoming a solid operating system you can fully trust, as Jan Sprinz admitted.

In the audience of the talk that SAPO TEK attended, there were many users of Ubuntu Touch, the mobile operating system, but with doubts and concerns, such as the availability of the most used apps. Nevertheless, the operating system is cherished, and someone even compared it to a pet which may destroy the living room and chew the shoes, but whose owner never stops loving it.

How do you do an Ubucon?

“We wanted to make a memorable Ubucon”, explains Tiago Carrondo, the face of the organisation who, during the last few months dedicated much of his time to the preparation of all the logistics, part of a very small but very committed team, as he stated to SAPO TEK.

The European event is now in its 4th edition and arose spontaneously inside the community. After Germany (Essen), France (Paris) and Spain (Xixón), Portugal is the 4th country hosting the community, with the purpose of “having an Ubucon without rain”. From here, the community moves in 2020 to a new location, which should be revealed this week but is for now still a well-kept secret.

Characterising Ubuntu Portugal as a community of people, Tiago Carrondo explains that companies are “friends”, and appear as associates and sponsors for the event, where there are also connections with educational institutes.

The centre of the organisation and the purpose of Ubucon are the people, so there’s a very big social component, allowing volunteers who work on Ubuntu projects throughout the year to meet face to face and share experiences and knowledge. For that reason, the schedule was designed to start a little later than usual, around 10 am, and to finish early, with a long break for lunch.

The conference ends tomorrow, but those who want to attend the last presentations at the Olga Cadaval Cultural Centre in Sintra can still do so, by registering or by simply showing up at the venue, because the organisation’s policy is open doors and respect for privacy.

Those who didn’t have the chance to attend will be able to watch everything on video over the next few weeks. Tiago Carrondo explains that they didn’t want to stream it, but everything is being recorded, to be edited and made available soon.

Ubuntu Blog: Onboarding edge applications on the dev environment

5 days 3 hours ago

Adoption of edge computing is taking hold as organisations realise the need for highly distributed applications, services and data at the extremes of a network. Whereas data historically travelled back to a centralised location, data processing can now occur locally allowing for real-time analytics, improved connectivity, reduced latency and ushering in the ability to harness newer technologies that thrive in the micro data centre environment.

In an earlier post, we discussed the importance of choosing the right primitives for edge computing services. When looking at use-cases calling for ultra-low latency compute, Kubernetes and containers running on bare metal are ideal for edge deployments because they offer direct access to the kernel, workload portability, easy upgrades and a wide selection of possible CNI choices.

While offering clear advantages, setting up Kubernetes for edge workload development can be a difficult task – time and effort better spent on actual development. The steps below walk you through an end-to-end deployment of a sample edge application. The application runs on top of Kubernetes with advanced latency budget optimization.  The deployed architecture includes Ubuntu 18.04 as the host operating system, Kubernetes v1.15.3 (MicroK8s) on bare-metal, MetalLB load balancer and CoreDNS to serve external requests.

Let’s roll

Summary of steps:

  1. Install MicroK8s
  2. Add MetalLB
  3. Add a simple service – Core DNS
Step 1: Install MicroK8s

Let’s start with the development workstation Kubernetes deployment using MicroK8s by pulling the latest stable edition of Kubernetes.

$ sudo snap install microk8s --classic
microk8s v1.15.3 from Canonical✓ installed
$ snap list microk8s
Name      Version Rev  Tracking Publisher   Notes
microk8s  v1.15.3 826  stable canonical✓  classic
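Before moving on, it is worth a quick sanity check that the node is up (both commands ship with the snap):

$ microk8s.status --wait-ready
$ microk8s.kubectl get nodes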

Step 2: Add MetalLB

As I’m deploying Kubernetes on a bare metal node, I chose to utilise MetalLB, since I won’t be able to rely on a cloud to provide an LBaaS service. MetalLB is a fascinating project supporting both L2 and BGP modes of operation, and depending on your use case, it might just be the thing for your bare metal development needs.

$ microk8s.kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml
namespace/metallb-system created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
daemonset.apps/speaker created
deployment.apps/controller created

Once installed, make sure to update the iptables configuration to allow IP forwarding, and configure MetalLB with the networking mode and the address pool you want to use for load balancing. The config file needs to be created manually; please see Listing 1 below for reference.

$ sudo iptables -P FORWARD ACCEPT

Listing 1 : MetalLB configuration (metallb-config.yaml)

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.0.2.32/28

Step 3: Add a simple service

Now that you have your config file ready, you can continue with the CoreDNS sample workload configuration. Especially for edge use cases, you usually want fine-grained control over how your application is exposed to the rest of the world. This includes ports as well as the actual IP address you would like to request from your load balancer. For the purpose of this exercise, I use the .35 IP address from the 10.0.2.32/28 subnet and create a Kubernetes service using this IP.

Listing 2: CoreDNS external service definition (coredns-service.yaml)

apiVersion: v1
kind: Service
metadata:
  name: coredns
spec:
  ports:
  - name: coredns
    port: 53
    protocol: UDP
    targetPort: 53
  selector:
    app: coredns
  type: LoadBalancer
  loadBalancerIP: 10.0.2.35

For the workload configuration itself, I use a simple DNS cache configuration with logging and forwarding to Google’s open resolver service.

Listing 3: CoreDNS ConfigMap (coredns-configmap.yaml)

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
data:
  Corefile: |
    .:53 {
     forward . 8.8.8.8
     cache
     log
    }

Finally, here is the description of our Kubernetes Deployment, calling for 3 workload replicas, the latest CoreDNS image and the configuration defined in the ConfigMap above.

Listing 4: CoreDNS Deployment definition  (coredns-deployment.yaml)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns-deployment
  labels:
    app: coredns
spec:
  replicas: 3
  selector:
    matchLabels:
      app: coredns
  template:
    metadata:
      labels:
        app: coredns
    spec:
      containers:
      - name: coredns
        image: coredns/coredns:latest
        imagePullPolicy: IfNotPresent
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile

Deploy

With all the service components defined, prepared and configured, you’re ready to start the actual deployment and verify the status of Kubernetes pods and services.

$ microk8s.kubectl apply -f metallb-config.yaml
configmap/config created
$ microk8s.kubectl apply -f coredns-service.yaml
service/coredns created
$ microk8s.kubectl apply -f coredns-configmap.yaml
configmap/coredns created
$ microk8s.kubectl apply -f coredns-deployment.yaml
deployment.apps/coredns-deployment created
$ microk8s.kubectl get po,svc --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default pod/coredns-deployment-9f8664bfb-kgn7b 1/1 Running 0 10s
default pod/coredns-deployment-9f8664bfb-lcrfc 1/1 Running 0 10s
default pod/coredns-deployment-9f8664bfb-n4ht6 1/1 Running 0 10s
metallb-system pod/controller-7cc9c87cfb-bsrwx 1/1 Running 0 4h8m
metallb-system pod/speaker-s9zz7 1/1 Running 0 4h8m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/coredns LoadBalancer 10.152.183.89 10.0.2.35 53:31338/UDP 34m
default service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 4h29m

Once all the containers are fully operational, you can evaluate how your new end-to-end service is performing. As you can see below, the very first request takes around 50ms to get answered (which aligns with the usual latency between my ISP access network and Google DNS infrastructure); however, subsequent requests show a significant latency reduction, as expected from a local DNS caching instance.

$ host -a www.ubuntu.com  10.0.2.35
Trying "www.ubuntu.com"
Using domain server:
Name: 10.0.2.35
Address: 10.0.2.35#53
[...]
Received 288 bytes from 10.0.2.35#53 in 50 ms
$ host -a www.ubuntu.com  10.0.2.35
Trying "www.ubuntu.com"
Using domain server:
Name: 10.0.2.35
Address: 10.0.2.35#53
[...]
Received 288 bytes from 10.0.2.35#53 in 0 ms
$ host -a www.ubuntu.com  10.0.2.35
Trying "www.ubuntu.com"
Using domain server:
Name: 10.0.2.35
Address: 10.0.2.35#53
[...]
Received 288 bytes from 10.0.2.35#53 in 1 ms

CoreDNS is an example of a simple use case for distributed edge computing, showing how network distance and latency can be optimised for a better user experience by changing service proximity. The same rules apply to exciting services such as AR/VR, GPGPU-based inference AI and content distribution networks.

The choice of proper technological primitives, the flexibility to manage your infrastructure to meet service requirements, and processes to manage distributed edge resources at scale will become critical factors for edge cloud adoption. This is where MicroK8s comes in, to reduce the complexity and cost of development and deployment without sacrificing quality.

End Note

So you’ve just on-boarded an edge application, now what? Take MicroK8s for a spin with your use case(s) or just try to break stuff. If you’d like to contribute or request features/enhancements, please shout out on our GitHub, Slack (#microk8s) or the Kubernetes forum.

Laura Czajkowski: FOSDEM Community Devroom 2020 CFP open

5 days 14 hours ago

We are happy to let everyone know that the Community DevRoom will be held this year at the FOSDEM Conference. FOSDEM is the premier free and open source software event in Europe, taking place in Brussels from 1-2 February 2020 at the Université libre de Bruxelles. You can learn more about the conference at https://fosdem.org.

== tl;dr ==

  • Community DevRoom takes place on Sunday, 2nd February 2020
  • Submit your papers via the conference abstract submission system, Pentabarf, at https://penta.fosdem.org/submission/FOSDEM20
  • Indicate if your session will run for 30 or 45 minutes, including Q&A. If you can do either 30 or 45 minutes, please let us know!
  • Submission deadline is 27 November 2019 and accepted speakers will be notified by 11 December 2019
  • If you need to get in touch with the organizers or program committee of the Community DevRoom, email us at community-devroom@lists.fosdem.org
== IN MORE DETAIL ==

We are happy to let everyone know that the Community DevRoom will be held this year at the FOSDEM Conference. FOSDEM is the premier free and open source software event in Europe, taking place in Brussels from 1-2 February at the Université libre de Bruxelles. You can learn more about the conference at https://fosdem.org.

The Community DevRoom will take place on Sunday 2nd February 2020.

Our goals in running this DevRoom are to:

* Connect folks interested in nurturing their communities with one another so they can share knowledge during and long after FOSDEM

* Educate those who are primarily software developers on community-oriented topics that are vital in the process of software development, e.g. effective collaboration

* Provide concrete advice on dealing with squishy human problems

* Unpack preconceived ideas of what community is and the role it plays in human society, free software, and a corporate-dominated world in 2020.

We seek proposals on all aspects of creating and nurturing communities for free software projects.

 

== TALK TOPICS ==

Here are some topics we are interested in hearing more about this year:

 

1) Is there any real role for community in corporate software projects?

Can you create a healthy and active community while still meeting the needs of your employer? How can you maintain an open dialog with your users and/or contributors when you have the need to keep company business confidential? Is it even possible to build an authentic community around a company-based open source project? Have we completely lost sight of the ideals of community and simply transformed that word to mean “interested sales prospects?”

 

2) Creating Sustainable Communities

With the increased focus on the impact of short-term and self-interested thinking on both our planet and our free software projects, we would like to explore ways to create authentic, valuable, and lasting community in a way that best respects our world and each other.  We would like to hear from folks about how to support community building in person in sustainable ways, how to build community effectively online in the YouTube/Instagram era, and how to encourage corporations to participate in community processes in a way that does not simply extract value from contributors. If you have recommendations or case studies on how to make this happen, we very much want to hear from you.

 

We are particularly interested to hear about academic research into FOSS Sustainability and/or commercial endeavors set up to address this topic.

 

3) Bringing free software to the GitHub generation

Those of us who have been in the free and open source software world for a long time remember when the coolest thing you could do was move from CVS to SVN, Slack ended in “ware”, IRC was where you talked to your friends instead of IRL (except now no one talks in IRL anyway, just texts), and Twitter was something that birds did. Here we are in 2020, and clearly things have changed.

How can we bring more younger participants into free software communities? How do we teach the importance of free software values in an era where freely-available code is ubiquitous? Will the ethical underpinnings of free software attract millennials and Gen Z to participate in our communities when our free software tends to require lots of free time?

We promise we are not cranky old fuddy duddies. Seriously. It’s important to us that the valuable experiences we had in our younger days working in the free software community are available to everyone. And we want to know how to get there.

 

4) Applying the principles of building free software communities to other endeavors

What can the lessons about decentralization, open access, open licensing, and community engagement teach us as we address the great issues of our day? We have left this topic not well defined because we would like people to bring whatever truth they have to the question. Great talks in this category could be anything from “why to never start a business in Silicon Valley” to “working from home is great and keeps C02 out of the air.” Let your imagination take you far  – we are excited to hear from you.

 

5)  How can free software protect the vulnerable

At a time when some of the best accessibility features are built as proprietary products, at a time when surveillance and predictive policing lead to persecution of dissidents and imprisonment of those who were guilty before proven innocent, how can we use free software to protect the vulnerable? What sort of lobbying efforts would be required to make certain that free software, and therefore fully auditable code, becomes a civic requirement? How do we as individuals, and as actors at our employers, campaign for the protection of vulnerable people, and other living things, as part of our mission of software freedom?

 

6) Conflict resolution

How do we continue working well together when there are conflicts? Is there a difference in how types of conflicts best get resolved, e.g. ”this code is terrible” vs. “we should have a contributor agreement”? We are especially interested in how-tos and success stories from projects that have weathered conflict.

We are now in 2020 and this issue still comes up semi-daily. Let’s share our collective wisdom on how to make conflict less painful and more productive.

 

Again, these are just suggestions. We welcome proposals on any aspect of community building!

 

== PREPARING YOUR SUBMISSION & DEADLINES ==

 

=== LENGTH OF PRESENTATION ===

We are looking for talk submissions between 30 and 45 minutes in length, including time for Q&A. In general, we are hoping to accept as many talks as possible so we would really appreciate it if you could make all of your remarks in 30 minutes – our DevRoom is only a single day –  but if you need longer just let us know.

=== ANYTHING EXTRA YOU WOULD LIKE US TO KNOW ===

Beyond giving us your speaker bio and paper abstract, make sure to let us know anything else you’d like us to know as part of your submission. Some folks like to share their Twitter handles, others like to make sure we can take a look at their GitHub activity history – whatever works for you. We especially welcome videos of you speaking elsewhere, or even just a list of talks you have done previously. First time speakers are, of course, welcome!

=== SUBMISSION INSTRUCTIONS ===

== KEY DATES ==
  1. CFP opens 11 October 2019
  2. Proposals due in Pentabarf 27 November 2019
  3. Speakers notified by 11 December 2019
  4. DevRoom takes place 2 February 2020 at FOSDEM

Community DevRoom Mailing List: community-devroom@lists.fosdem.org

 

Didier Roche: Ubuntu ZFS support in 19.10: ZFS on root

5 days 17 hours ago
ZFS on root

This is part 2 of our blog post series on our current and future work around ZFS on root support in Ubuntu. If you haven’t read the introductory post yet, I strongly recommend you do so first!

Here we are going to discuss what landed by default in Ubuntu 19.10.

Upstream ZFS On Linux

We are shipping ZFS On Linux version 0.8.1, with features like native encryption, TRIM support, checkpoints, raw encrypted zfs send/receive, project accounting and quotas, and a lot of performance enhancements. You can read more about the 0.8 and 0.8.1 releases directly on the ZOL project release page. 0.8.2 didn’t make it in time for proper integration and testing in Eoan, so we backported some post-release upstream fixes where they fit, like newer kernel compatibility, to provide the best user experience and reliability. Our team also contributed some small fixes and feedback to the upstream ZFS On Linux project.
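As a quick, hedged illustration (the pool and dataset names below are examples only, not anything the installer creates for you), native encryption and TRIM can be exercised like this on a 0.8.x pool:

# Create a natively encrypted dataset; you will be prompted for a passphrase
# ("rpool/secure" is just an example name)
sudo zfs create -o encryption=aes-256-gcm -o keyformat=passphrase rpool/secure

# Trim a pool once, manually (requires devices with TRIM/discard support)
sudo zpool trim rpool

# Or let ZFS trim automatically in the background
sudo zpool set autotrim=on rpool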

Any existing ZFS on root user will automatically get those benefits as soon as they update to Ubuntu 19.10.

Installer integration

The ubiquity installer now provides an experimental option for setting up ZFS on root on your system. While ZFS has been a mature product for a long time, the installer’s ZFS support option is in alpha, and people opting in should be conscious of that. It’s not advised to run this on a production system or a system where you have critical data (unless you have regular and verified backups, which we all do, correct?). To be fully clear, there may be breaking changes in the design as long as the feature is experimental, and we may, or may not, provide a transition path to the next layout.

With that being said, what does ZFS on root mean? It means that most of your system will run on ZFS. Basically, even your “/” directory is installed on ZFS.

Ready to jump in, despite all those disclaimers? If so, download an Ubuntu 19.10 ISO and you will see that the disk partitioning screen in Ubiquity has an additional option (please read the Warning!):

Yes, the current experimental support is limited right now to a whole disk installation. If you have multiple disks, the next screen will ask you to pick which one:

You will then get the “please confirm we’ll reformat your whole disk” screen.

… and finally the installation will proceed as usual:

In case you didn’t notice yet, this is experimental (what? ;)) and we have some known quirks, like the confirmation screen showing that it’s going to format and create an ext4 partition. This is difficult to fix for Ubuntu 19.10 (for the technical users interested in details: what we are actually doing is creating multiple partitions so that partman can handle the ESP, and then overwriting the ext4 partition with ZFS, so it’s technically not lying ;)). It’s something we will fix before getting out of the experimental phase, hopefully next cycle.

Partitions layout

We’ll create the following partitions:

rpool

One ZFS partition for the “rpool” (as in root pool), which will contain your main installation and user data. This is basically your main course, and the one whose dataset layout we’ll detail in the next article, as we have a lot to say about it.

bpool

Another ZFS partition for your boot pool, named “bpool”, which contains kernels and initramfs images (basically your /boot without the EFI and bootloader bits). We have to separate this from your main pool because grub can’t support all of the ZFS features we want to enable on the root pool; otherwise the pool would be unreadable by your boot loader, which would sadly result in an unbootable system! Consequently, this pool uses a different ZFS pool version (right now, version 28, but we are looking at upgrading to version 5000, with some features disabled, next cycle). Note that because of this, even if zpool status suggests that you upgrade your bpool, you should never do it or you won’t be able to reboot. We will work on a patch to prevent this from happening.
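Purely as an illustration, you can check the pool version without touching it; per the above, a 19.10 bpool is expected to report version 28:

# Check the boot pool's on-disk version (should report 28 on 19.10)
zpool get version bpool

# Do NOT run "zpool upgrade bpool": grub would no longer be able to read the
# pool and your system would become unbootable.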

ESP partition

There is the ESP partition (mounted as /boot/efi). Right now, it’s only created if you have a UEFI system, but we might have Ubiquity create it systematically in the future, so that people who disabled secure boot and enable it later on can have a smooth transition.

grub partition

A grub partition (mounted as /boot/grub), which is formatted as ext4. This partition isn’t a ZFS one because it’s global to your machine, so its content and state should be shared between multiple installations on the same system. In addition, we don’t want to reference a grub menu that can be snapshotted and rolled back, as that would mean the grub menu could no longer give access to “future system states” after a particular revert. If we succeed in having an ESP partition created systematically in the future, we can move grub itself onto it unconditionally next cycle.
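If you want to see how this layout maps onto your disk after installation, something like the following should do (illustrative; exact device names and sizes will of course depend on your machine):

# List the two ZFS pools created by the installer (rpool and bpool)
zpool list

# Show the backing partitions, including the ESP and the ext4 grub partition
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT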

Continuing work on pure ZFS system

We are planning to continue reporting feedback upstream (probably post 19.10 release, once we have more room for giving detailed information and various use-case scenarios), as our default dataset layout is quite advanced (more on that later) and the current upstream mount ordering generator doesn’t really cope with it. This is the reason why we decided to disable our GRUB revert feature for pure ZFS installations (but not Zsys!) in 19.10, as some use cases could lead to unbootable systems. This is a very alpha experiment, but we didn’t want to knowingly put users’ data at risk.

But this is far from the end of our road to enhanced ZFS support in Ubuntu! Actually, the most interesting and exciting part (from a user’s perspective) will come with Zsys.

Zsys, ZFS System handler

Zsys is our work-in-progress, enhanced support for ZFS systems. It allows running multiple ZFS installations in parallel on the same machine and managing complex ZFS dataset layouts, separating user data from system and persistent data. It will soon provide automated snapshots, backups and system management.

However, as we first wanted feedback on pure ZFS systems in Ubuntu 19.10, we didn’t seed it by default. It’s available through an apt install zsys for the adventurous audience, and some Ubuntu flavors have already jumped on the bandwagon and will install it by default! Even if you won’t immediately see differences, this will unleash some of the grub, adduser and initramfs integration that is baked right into 19.10.

The excellent Ars Technica review by Jim Salter wondered about the quite complex dataset layout we are setting up. We’ll shed some light on this in the next blog post, which will explain what Zsys really is, what it brings to the table and what our future plans are.

The future of ZFS on root on Ubuntu is bright, I’m personally really excited about what this is going to bring to both server and desktop users! (And yes, we can cook up some very nice features for our desktop users with ZFS)!

If you want to join the discussion, feel free to hop into our dedicated Ubuntu discourse topic.

Ubuntu Podcast from the UK LoCo: S12E27 – Exile

6 days 11 hours ago

This week we’ve been playing LEGO Worlds and tinkering with Thinkpads. We round up the news and goings on from the Ubuntu community, introduce a new segment, share some events and discuss our news picks from the tech world.

It’s Season 12 Episode 27 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

Ubuntu Blog: Chromium in Ubuntu – deb to snap transition

6 days 16 hours ago

We have recently announced that we are transitioning the Chromium deb package to the snap in Ubuntu 19.10. Such a transition is not trivial, and there have been many constructive discussions around it, so here we are summarising why we are doing this, how, and the timeline.

Why

Chromium is a very popular web browser, the fully open source counterpart to Google Chrome. On Ubuntu, Chromium is not the default browser, and the package resides in the ‘universe’ section of the archive. Universe contains community-maintained software packages. Despite that, the Ubuntu Desktop Team is committed to packaging and maintaining Chromium because a significant number of users rely on it. 

Maintaining a single release of Chromium is a significant time investment for the Ubuntu Desktop Team working with the Ubuntu Security team to deliver updates to each stable release. As the teams support numerous stable releases of Ubuntu, the amount of work is compounded.

Comparing this workload to other Linux distributions which have a single supported rolling release misses the nuance of supporting multiple Long Term Support (LTS) and non-LTS releases.

Google releases a new major version of Chromium every six weeks, with typically several minor versions to address security vulnerabilities in between. Every new stable version has to be built for each supported Ubuntu release − 16.04, 18.04, 19.04 and the upcoming 19.10 − and for all supported architectures (amd64, i386, armhf, arm64).

Additionally, ensuring Chromium even builds (let alone runs) on older releases such as 16.04 can be challenging, as the upstream project often uses new compiler features that are not available on older releases. 

In contrast, a snap needs to be built only once per architecture, and will run on all systems that support snapd. This covers all supported Ubuntu releases including 14.04 with Extended Security Maintenance (ESM), as well as other distributions like Debian, Fedora, Mint, and Manjaro.

While this change in packaging for Chromium can allow us to focus developer resources elsewhere, there are additional benefits that packaging as a snap can deliver. Channels in the Snap Store enable publishing multiple versions of Chromium easily under one name. Users can switch between channels to test different versions of the browser. The Snap Store delivers snaps automatically in the background, so users can be confident they’re running up to date software without having to manually manage their updates. We can also publish specific fixes quickly via branches in the Snap Store enabling a fast user & developer turnaround of bug reports. Finally the Chromium snap is strictly confined, which provides additional security assurances for users.

In summary: there are several factors that make Chromium a good candidate to be transitioned to a snap:

  • It’s not the default browser in Ubuntu, so it has a lower impact by virtue of having a smaller user base
  • Snaps are explicitly designed to support a high frequency of stable updates
  • The upstream project has three release channels (stable, beta, dev) that map nicely to snapd’s default channels (stable, beta, edge). This enables users to easily switch release of Chromium, or indeed have multiple versions installed in parallel (see the channel-switching example after this list)
  • Having the application strictly confined is an added security layer on top of the browser’s already-robust sandboxing mechanism
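For example, once the snap is installed, switching between upstream release channels is a single command (a sketch using standard snapd channel names):

# Track the beta channel instead of stable
sudo snap refresh chromium --channel=beta

# Switch back to the stable channel
sudo snap refresh chromium --channel=stable

# See which channels exist and which one you are currently tracking
snap info chromium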
How

The first release of the Chromium snap happened two years ago, and we’ve come a long way since then. The snap currently has more than 200k users across Ubuntu and more than 30 other Linux distributions. The current version has a few minor issues that we’re working hard to address, but we feel it’s solid and mature enough for a transition, and confident that it is time to start transitioning users of the development release (19.10) of Ubuntu to it. We are eager to collect feedback on what works and what doesn’t ahead of the next Long Term Support release of Ubuntu, 20.04.

In 19.10, the chromium-browser deb package (and related packages) have been made transitional packages that contain only wrapper scripts and a desktop file for backwards compatibility. When upgrading or installing the deb package on 19.10, the snap will be downloaded from the Snap Store and installed.

Special care has been taken to not break existing workflows and to make the transition as seamless as possible:

  • When running the snap for the first time, an existing Chromium user profile in $HOME/.config/chromium will be imported (provided there is enough disk space)
  • The chromium-browser and chromedriver executables in /usr/bin/ are wrappers that call into the respective snap executables (see the quick check after this list)
  • chromedriver has been patched so that existing selenium scripts should keep working without modifications
  • If the user has set Chromium as the default browser, the chromium-browser wrapper will take care of updating it to the Chromium snap
  • Similarly, existing pinned entries in desktop launchers will be updated to point to the snap version (implemented for GNOME Shell and Unity only for now, contributions welcome for other desktop environments)
  • The apport hook has been updated to include relevant information about the snap package and its dependencies
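A quick way to sanity-check the transition on a 19.10 machine (assuming the transitional chromium-browser deb is installed) is:

# Confirm that the snap is what actually got installed
snap list chromium

# The familiar command still exists, but is now a thin wrapper around the snap
which chromium-browser
chromium-browser --version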
When

If you’re experimenting with Ubuntu 19.10, you can try Chromium as a snap and test the transition from the deb package right now. You don’t even need to wait until the release on the 17th of October to start using the snap and sharing your feedback. Simply run the following commands to be up and running:

snap install chromium
snap run chromium

Once 19.10 is released, we will carefully consider extending the transition to other stable releases, starting with 19.04. This won’t happen until all the important known issues are addressed, of course.

Now is the perfect time to put the snap to the test and report issues and regressions you encounter.

We appreciate all the feedback and commentary we’ve been sent over the last few months as we announced this project. We honestly believe delivering applications as snaps provides significant advantages both to developers and users. We know there may be some rough edges as we work towards the future and will continue to listen to our users as we chart this new journey.

Command Line