The Register

Two billion years ago, snowball Earth was defrosted in huge asteroid crash – and it's been downhill ever since

2 hours 47 minutes ago
Space prang raised temperatures, melted glaciers, influenced climate, next thing we know: we're sharing AI-filtered selfies on Insta

Pic Scientists studying minerals in the Yarrabubba crater in Western Australia have confirmed the giant pit was formed when an asteroid struck Earth 2.229 billion years ago, making it the oldest impact site yet found on our planet.…

If you never thought you'd hear a Microsoftie tell you to stop using Internet Explorer, lap it up: 'I beg you, let it retire to great bitbucket in the sky'

6 hours 55 minutes ago
We say take off and nuke the entire codebase from orbit. It's the only way to be sure

To mark the arrival of the Chromium-based Microsoft Edge browser, Microsoft software engineer Eric Lawrence, who helped shift Edge to its Google-driven open source foundation, issued a plea to Windows users to let go of Internet Explorer.…

Planet Ubuntu

Ubuntu Blog: Migrating to enterprise servers with Ubuntu on IBM Z

12 hours 11 minutes ago

For mission-critical applications, security, reliability, and efficiency are essential. Linux excels in these areas, which is why it has become a highly popular platform for supporting key enterprise software. And for businesses looking to push the security and performance of their Linux-based applications even further, the next step is enterprise server computing.

Enterprise servers offer secure and robust platforms for mission-critical workloads – however, it has historically been difficult to migrate Linux applications from x86 architectures to the IBM Z architecture. IBM and Canonical have worked together to solve this problem by porting Ubuntu to work on both IBM Z and IBM LinuxONE enterprise servers – including the recently released IBM z15 and LinuxONE III.

With Ubuntu on IBM Z and LinuxONE, users can leverage the same tools and languages on IBM Z as they do on all of their other Ubuntu systems. Not only does this provide businesses with a smooth migration path, it also enables developers to go from the desktop to a highly secure and reliable cloud with a seamless, agile working environment. Typical workloads include databases with sensitive personal information, as well as new solutions such as blockchain and digital asset custody.

Why migrate to Ubuntu on IBM Z and LinuxONE?

IBM Z and LinuxONE servers offer a range of benefits over x86 systems that make them uniquely suited for running mission-critical Linux workloads:

Security: Following the introduction of GDPR and in the wake of numerous high-profile breaches, it has never been more important to protect and encrypt data – especially customers’ personal information. In the past, most companies have relied on software encryption, but these solutions can carry a considerable overhead. Because software encryption takes time, users must decide which data to encrypt, creating a risk that some important information might be missed. 

IBM Z, on the other hand, supports hardware encryption which is included on every processor chip. Thanks to the speed of hardware encryption, it is completely viable for a business to encrypt ALL of its data – reducing risk, saving time, and making it easy to demonstrate regulatory compliance. What’s more, crypto keys can be stored in tamper-responsive Hardware Security Modules, where they are robustly protected if a bad actor attempts to gain access.

IBM Z and LinuxONE servers running Ubuntu can also offer a secure environment for executing applications. Once prepared and launched, these apps and their data are protected and cannot be accessed other than through the applications – not even by sysadmins. 

Agility: Running Ubuntu on IBM Z and LinuxONE provides developers with a consistent working environment from desktop to cloud. This consistency – with the same look and feel, tools, and libraries across platforms – empowers users to work more productively, accelerating development timelines.

Cloud capabilities at memory speed: With an enterprise server, organisations can deploy cloud-based applications on the same system where their data is already located. By eliminating the need to connect to an offsite, online cloud, these applications can access data far more securely and quickly.

Scalability: Public clouds and other x86 systems typically scale horizontally. That is to say, they scale out to support larger workloads through the addition of extra servers. This approach offers excellent flexibility, but with databases shared across multiple systems and with network delays between nodes, problems can arise for mission-critical databases that need to be always up-to-date and consistent. 

While IBM Z can deliver horizontal scalability through virtualisation, it also offers vertical scalability for large databases and applications. Rather than adding new machines, vertical scalability enables businesses to scale up by committing additional resources from the existing hardware. Keeping everything on the same machine cuts complexity and ensures that there is no network delay, which is invaluable in situations where databases need to be in-sync at all times and delivering a single source of truth.

Reliability: For businesses across industries, it is becoming more and more important to have applications available 24/7. IBM Z architecture is designed for continuous service delivery. It offers 99.999% or greater availability, and sophisticated disaster recovery concepts minimise the duration and impact of downtime.

Overcoming the traditional barriers to mainframe migration

In the past, moving Linux workloads from x86 to IBM Z has sometimes been a daunting prospect. The need to recompile applications and hire mainframe specialists was often enough to deter organisations from migrating. Ubuntu on IBM Z and LinuxONE takes the complexity out of the process by enabling businesses to move from Linux to Linux. 

Applications written in languages that run on an interpreter or virtual machine, such as Java or Python, are especially easy to migrate, as they can use the same source code on IBM Z as on x86 systems simply by switching to the platform’s interpreter or runtime. IBM has already ported a large number of open source infrastructure components and languages to IBM Z – including Go, Swift, Python, and MongoDB, to name a few – and moving to IBM Z and LinuxONE is only getting easier as more tools continue to be made available.
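The portability argument can be made concrete with a short sketch: the same Python source runs unchanged on x86 and IBM Z, and only runtime properties such as the reported machine name and byte order differ. (`runtime_info` is an illustrative helper for this post, not part of any IBM or Canonical tooling.)

```python
import platform
import sys

def runtime_info() -> dict:
    """Report the interpreter and hardware architecture the code runs on.

    The application logic above this layer is identical on x86_64 and
    IBM Z (s390x); only the interpreter binary underneath differs.
    """
    return {
        "python": platform.python_version(),
        "machine": platform.machine(),   # e.g. 'x86_64' on a PC, 's390x' on IBM Z
        "byteorder": sys.byteorder,      # s390x is big-endian, x86 is little-endian
    }

if __name__ == "__main__":
    info = runtime_info()
    print(f"Python {info['python']} on {info['machine']} ({info['byteorder']}-endian)")
```

Byte order is one of the few differences that can surface during such a migration, typically in code that handles raw binary data rather than in ordinary application logic.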

Similarly, employee skills are largely transferable between x86 and IBM Z, since users will still be working in a familiar Ubuntu environment. Getting the most out of IBM Z only requires specialist expertise at set-up, and assistance is readily available from IBM and its business partners.

Case study: Digital Asset Custody Services (DACS) 

With digital asset technology rapidly becoming mainstream, DACS saw a gap in the market for a highly secure, convenient solution for digital asset transactions and data. The startup set out to build a new platform that would enable corporations and individuals to store and transfer digital assets securely – without the delays inherent in existing cold-storage options.

DACS worked with IBM to develop the new platform, hosted on IBM LinuxONE servers running Ubuntu. Leveraging the IBM Crypto Express 6S Hardware Security Module for pervasive encryption of all application data, as well as IBM Secure Service Container software to provide a secure computing environment, the IBM LinuxONE servers running Ubuntu enable DACS to deliver end-to-end security without compromising customer convenience.

To learn more about the technical side of running Ubuntu on IBM enterprise servers, check out Elizabeth K. Joseph’s blog post, where she takes a detailed look at Ubuntu on the new IBM LinuxONE III. And, sign up for our upcoming webinar, “How to protect your data, applications, cryptography and OS – 100% of the time”.

<Register for webinar>

Ubuntu Blog: Ubuntu Server development summary – 21 January 2020

1 day 13 hours ago
Hello Ubuntu Server

The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list or visit the Ubuntu Server discourse hub for more discussion.

Spotlight: cloud-init 19.4

In the final days of 2019 we released version 19.4 of cloud-init. This new upstream release is currently available on the supported LTS releases of Ubuntu (Xenial and Bionic) and in the development version of the next LTS release, Focal Fossa. For a list of features released, see the full ChangeLog on GitHub. The 19.4 cloud-init release was the last release to support Python 2.7; new commits to cloud-init are no longer required to maintain Python 2 support.

Spotlight: Ubuntu Pro for AWS

Ubuntu Pro is a premium Ubuntu image designed to provide the most comprehensive feature set for production environments running in the public cloud. Ubuntu Pro images based on Ubuntu 18.04 LTS (Bionic Beaver) are now available for AWS as an AMI through AWS Marketplace.

Spotlight: Speed up project bug triage with Greasemonkey

Bryce Harrington, on the Ubuntu Server team, has written up an excellent post on how to speed up bug triage responses with Greasemonkey. It simplifies the inclusion of frequent responses the team uses for various projects when maintaining bugs in Launchpad for multiple Ubuntu packages. Thanks Bryce!

  • Add Rootbox & HyperOne to list of cloud in README (#176) [Adam Dobrawy]
  • docs: add proposed SRU testing procedure (#167)
  • util: rename get_architecture to get_dpkg_architecture (#173)
  • Ensure util.get_architecture() runs only once (#172)
  • Only use gpart if it is the BSD gpart (#131) [Conrad Hoffmann]
  • freebsd: remove superflu exception mapping (#166) [Gonéri Le Bouder]
  • ssh_auth_key_fingerprints_disable test: fix capitalization (#165) [Paride Legovini]
  • util: move uptime’s else branch into its own boottime function (#53) [Igor Galić] (LP: #1853160)
  • workflows: add contributor license agreement checker (#155)
  • net: fix rendering of ‘static6’ in network config (#77) (LP: #1850988)
  • Make tests work with Python 3.8 (#139) [Conrad Hoffmann]
  • fixed minor bug with mkswap in cc_disk_setup.py (#143) [andreaf74]
  • freebsd: fix create_group() cmd (#146) [Gonéri Le Bouder]
  • doc: make apt_update example consistent (#154)
  • doc: add modules page toc with links (#153) (LP: #1852456)
  • Add support for the amazon variant in cloud.cfg.tmpl (#119) [Frederick Lefebvre]
  • ci: remove Python 2.7 from CI runs (#137)
  • modules: drop cc_snap_config config module (#134)
  • migrate-lp-user-to-github: ensure Launchpad repo exists (#136)
  • docs: add initial troubleshooting to FAQ (#104) [Joshua Powers]
  • doc: update cc_set_hostname frequency and descrip (#109) [Joshua Powers] (LP: #1827021)
  • freebsd: introduce the freebsd renderer (#61) [Gonéri Le Bouder]
  • cc_snappy: remove deprecated module (#127)
  • HACKING.rst: clarify that everyone needs to do the LP->GH dance (#130)
  • freebsd: cloudinit service requires devd (#132) [Gonéri Le Bouder]
  • cloud-init: fix capitalisation of SSH (#126)
  • doc: update cc_ssh clarify host and auth keys [Joshua Powers] (LP: #1827021)
  • ci: emit names of tests run in Travis (#120)
  • Release 19.4 (LP: #1856761)
  • rbxcloud: fix dsname in RbxCloud [Adam Dobrawy] (LP: #1855196)
  • tests: Add tests for value of dsname in datasources [Adam Dobrawy]
  • apport: Add RbxCloud ds [Adam Dobrawy]
  • docs: Updating index of datasources [Adam Dobrawy]
  • docs: Fix anchor of datasource_rbx [Adam Dobrawy]
  • settings: Add RbxCloud [Adam Dobrawy]
  • doc: specify _ over – in cloud config modules [Joshua Powers] (LP: #1293254)
  • tools: Detect python to use via env in migrate-lp-user-to-github [Adam Dobrawy]
  • Partially revert “fix unlocking method on FreeBSD” (#116)
  • tests: mock uid when running as root (#113) [Joshua Powers] (LP: #1856096)
  • cloudinit/netinfo: remove unused getgateway (#111)
  • docs: clear up apt config sections (#107) [Joshua Powers] (LP: #1832823)
  • doc: add kernel command line option to user data (#105) [Joshua Powers] (LP: #1846524)
  • config/cloud.cfg.d: update README [Joshua Powers] (LP: #1855006)
  • azure: avoid re-running cloud-init when instance-id is byte-swapped (#84) [AOhassan]
  • fix unlocking method on FreeBSD [Igor Galić] (LP: #1854594)
  • debian: add reference to the manpages [Joshua Powers]
  • ds_identify: if /sys is not available use dmidecode (#42) [Igor Galić] (LP: #1852442)
  • docs: add cloud-id manpage [Joshua Powers]
  • docs: add cloud-init-per manpage [Joshua Powers]
  • docs: add cloud-init manpage [Joshua Powers]
  • docs: add additional details to per-instance/once [Joshua Powers]
  • Merge pull request #96 from fred-lefebvre/master [Joshua Powers]
  • Update doc-requirements.txt [Joshua Powers]
  • doc-requirements: add missing dep [Joshua Powers]
  • Merge pull request #95 from powersj/docs/bugs [Joshua Powers]
  • dhcp: Support RedHat dhcp rfc3442 lease format for option 121 (#76) [Eric Lafontaine] (LP: #1850642)
  • one more [Joshua Powers]
  • Address OddBloke review [Joshua Powers]
  • network_state: handle empty v1 config (#45) (LP: #1852496)
  • docs: Add document on how to report bugs [Joshua Powers]
  • Add an Amazon distro in the redhat OS family [Frederick Lefebvre]
  • Merge pull request #94 from gaughen/patch-1 [Joshua Powers]
  • removed a couple of “the”s [gaughen]
  • docs: fix line length and remove highlighting [Joshua Powers]
  • docs: Add security.md to readthedocs [Joshua Powers]
  • Multiple file fix for AuthorizedKeysFile config (#60) [Eduardo Otubo]
  • Merge pull request #88 from OddBloke/travis [Joshua Powers]
  • Revert “travis: only run CI on pull requests”
  • doc: update links on README.md [Joshua Powers]
  • doc: Updates to wording of README.md [Joshua Powers]
  • Add security.md [Joshua Powers]
  • setup.py: Amazon Linux sets libexec to /usr/libexec (#52) [Frederick Lefebvre]
  • Fix linting failure in test_url_helper (#83) [Eric Lafontaine]
  • url_helper: read_file_or_url should pass headers param into readurl (#66) (LP: #1854084)
  • dmidecode: log result after stripping \n [Igor Galić]
  • cloud_tests: add azure platform support to integration tests [ahosmanmsft]
  • set_passwords: support for FreeBSD (#46) [Igor Galić]
  • vmtests: skip Focal deploying Centos70 ScsiBasic
  • vmtests: fix network mtu tests, separating ifupdown vs networkd
  • doc: Fix kexec documentation bug. [Mike Pontillo]
  • vmtests: Add Focal Fossa
  • centos: Add centos/rhel 8 support, enable UEFI Secure Boot [Lee Trager] (LP: #1788088)
  • Bump XFS /boot skip-by date out a while
  • vmtest: Fix a missing unset of OUTPUT_FSTAB
  • curthooks: handle s390x/aarch64 kernel install hooks (LP: #1856038)
  • clear-holders: handle arbitrary order of devices to clear
  • curthooks: only run update-initramfs in target once (LP: #1842264)
  • test_network_mtu: bump fixby date for MTU tests

The git-ubuntu snap package has been updated to 0.8.0 for the ‘beta’ channel.

The lion’s share of effort since 0.7.4 has gone towards bug fixing and general stabilization. Documentation and tests received a fair share of attention, as did the snap and setup.py packaging.

The importer now uses a sqlite3 database to store persistent information such as the pending package import status.
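A persistent status store of this kind can be sketched in a few lines with Python’s built-in sqlite3 module. The table and column names below are illustrative, not git-ubuntu’s actual schema:

```python
import sqlite3

def open_status_db(path=":memory:"):
    """Open (or create) a sqlite3 database holding per-package import status."""
    db = sqlite3.connect(path)
    db.execute(
        """CREATE TABLE IF NOT EXISTS import_status (
               package TEXT PRIMARY KEY,
               status  TEXT NOT NULL CHECK (status IN ('pending', 'done', 'failed'))
           )"""
    )
    return db

def mark_pending(db, package):
    """Record that a package import is pending; re-marking is idempotent."""
    db.execute(
        "INSERT INTO import_status (package, status) VALUES (?, 'pending') "
        "ON CONFLICT(package) DO UPDATE SET status = 'pending'",
        (package,),
    )
    db.commit()

def pending_packages(db):
    """List all packages whose import has not yet completed."""
    rows = db.execute("SELECT package FROM import_status WHERE status = 'pending'")
    return [r[0] for r in rows]
```

Because the state lives in a file-backed database rather than in memory, a restarted importer can pick up exactly where it left off.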

A new --only-request-new-imports-once option has been added for the backend source package importer. This makes the importer exit immediately after entering new imports into the database.

The --deconstruct option has been changed to --split, to prevent the confusion that led people to assume --deconstruct meant the opposite of “reconstruct”.

Launchpad object fetches are cached using Python’s cachetools module, as a performance improvement that reduces the excessive number of API calls to the Launchpad service.

Finally, the backend service is now managed using a systemd watchdog. Previously, the service had to be manually restarted whenever it hung or crashed, for example due to Launchpad service outages or network instability.
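Under systemd’s watchdog scheme, the supervised service must periodically send a `WATCHDOG=1` keep-alive over the notification socket; if the pings stop, systemd restarts the unit. A minimal sketch of that ping in Python, using only the stdlib (the sd_notify datagram protocol is simple enough not to need a library):

```python
import os
import socket

def notify_watchdog(state: bytes = b"WATCHDOG=1") -> bool:
    """Send a keep-alive ping to systemd's notification socket.

    Returns False when not running under systemd (NOTIFY_SOCKET unset),
    so the same code also runs harmlessly outside of systemd.
    """
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False
    if addr.startswith("@"):          # abstract-namespace socket
        addr = "\0" + addr[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.sendto(state, addr)
    return True
```

A service's main loop would call `notify_watchdog()` at an interval comfortably shorter than the `WatchdogSec=` configured in its unit file.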

Contact the Ubuntu Server team

Bug Work and Triage

Ubuntu Server Packages

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page. For a full list of recent merges with change logs please see the Ubuntu Server report.

Proposed Uploads to the Supported Releases

Please consider testing the following by enabling proposed, checking packages for update regressions, and making sure to mark affected bugs verified as fixed.

Total: 3

Uploads Released to the Supported Releases

Total: 80

Uploads to the Development Release

Total: 129

Ubuntu Studio: New Website!

1 day 13 hours ago
Ubuntu Studio has had the same website design for nearly 9 years. Today, that changed. We were approached by Shinta from Playmain, asking if they could contribute to the project by designing a new website theme for us. Today, after months of correspondence and collaboration, we are proud to unveil…

Ubuntu Blog: problem-oriented

1 day 16 hours ago

Once upon a time, Heathkit was a big business.

Yeah, I know I’m dating myself. Meh.

Heathkit kits were great, but honestly, I had an issue with them: They were either too focused on (re-)teaching basic electronics, or they assumed the tinkerer was an EE, so they didn’t give a lot of consideration to explaining what you could do with them. I mean, my first kit was an alarm clock, and it had a snooze button and big, red numbers that kept me waking up all night for a couple weeks to look for the fire trucks. But in general, most of their really cool items — frequency analyzers, oscilloscopes, and so on — didn’t come with much in the way of “how can I use this device?”

That’s why I’m going to start taking the MAAS blog and doc in a little different direction going forward. I want to start using real-world examples and neat networking configurations and other problem-oriented efforts as my baseline for writing. Heck, I’d even like to try using MAAS to control my little Raspberry Pi farm, although that’s probably not the recommended configuration, and I’m not sure how PXE-booting would work yet. (But if I get it going, I promise to blog it.)

Don’t get me wrong; the MAAS doc is pretty solid. I just want to do more with it. As in not just update it for new versions, but make it come alive and show off what MAAS can do. I also want to pick up some of the mid-range applications and situations. MAAS is well-envisioned in large datacentres, and there are obviously hobbyists and small shops tinkering, but that’s not the bulk of people who could genuinely benefit from it. I want to dig into some of the middle-industry, small-to-medium-size possibilities.

Since I already know something about small hospital datacentres, having worked with them for about ten years, that might be a good place to start. Hospitals of 50-200 beds tend to have the same requirements as a full-size facility, but with the challenges of a smaller budget and lower IT headcount. It really feels like a good sample problem for MAAS.

Yeah, I’m gonna sleep on it for a week and tinker a little, so set your Heathkit alarm clock for next Tuesday and check back to see where it’s going. And turn over the other way, so you’re not staring at the bright-red, segmented LEDs all week.

Ubuntu Blog: Anbox Cloud disrupts mobile user experience

1 day 17 hours ago

With the launch of the iPhone in 2007, mobile users were introduced to the smartphone as we still know it today: touchscreen, cameras and app stores. The launch of Android spurred low-cost alternatives to the iPhone, bringing the smartphone to the masses. Popularisation and growth in app consumption drove demand for mobile broadband.

Smartphones, app stores and mobile broadband are the foundations of mobile UX today. However, we’ve been using mobile devices the same way for over a decade. With Anbox Cloud delivered by telcos, this is about to change.

What’s Anbox Cloud?

Anbox Cloud is a mobile cloud computing platform that containerises mobile workloads using Android as a guest operating system. With Anbox Cloud, mobile applications can draw on virtually boundless compute and storage capacity in the cloud. Graphics are streamed to clients running in any web browser, or wrapped into mobile or desktop applications.

Using Anbox Cloud, telecommunication providers can create disruptive mobile user experiences for their 4G, LTE and 5G mobile network customers. Let’s see how.


With Anbox Cloud, applications cease to be delivered as locally installed software binaries. Mobile apps become remotely streamed content. Streaming from the cloud frees apps from hardware compatibility constraints.

In a world where apps are streamed, mobile users have access to a much richer selection. As a consequence, apps will be discovered and consumed as seamlessly as media content currently is. Think of an experience akin to Netflix, Spotify or YouTube: recommendation systems, subscriptions, advertising and all.

Anbox Cloud can be hosted within the cloud infrastructure of telco operators. This allows mobile operators to own their own branded distribution channel for apps, thereby breaking away from the Google-Apple duopoly of centralised app stores. Telco-owned app catalogs, delivered via Anbox Cloud, would open new avenues for innovative value-added services.

Cloud-augmented smartphones

Anbox Cloud gives the flexibility to offload compute, storage and energy intensive applications from mobile devices to hyperscale clouds. What’s more, any number of virtual devices can be instantiated on demand in the elastic cloud.

Offloading and elasticity are orchestrated to augment capability-constrained mobile devices. Any smartphone can be spun up into a hyper-phone, with several clones running in parallel in the cloud.

Through cloud augmentation of smartphones, telco operators will deliver traditionally device-dependent features from their cloud infrastructure. This will strengthen their position in mobile telecommunications ecosystems by reducing reliance on mobile OEMs for shaping user experience.

Consistent user experience will be accessible to any user regardless of the device they own. Beyond consistency, the user experience will be dramatically enriched on any phone. Imagine users capable of turning any given smartphone into a gaming console, a workplace device, or even an action camera, at the push of a button, thanks to the cloud.

Democratising wearables and headsets

When it comes to AR/VR headsets and wearables (like smart glasses), first-class performance is highly dependent on ultra-powerful hardware. Due to this constraint, highly performant wearables and headsets are not yet mobility-friendly or power-efficient. Most crucially, they are not affordable to the masses. However, the confluence of 5G and Anbox Cloud will change these circumstances.

Offloading graphically intensive processes to telco edge clouds through Anbox Cloud frees OEMs from the need to embed such capabilities in devices. This will drive the hardware bill of materials (BOM) cost down, while also easing portability.

5G-compatible AR/VR headsets and wearables will, therefore, be more portable and power efficient. Affordability and usability will in turn open up new lines of revenue for telco operators, beyond mobile telephony.

Try out Anbox Cloud

Telco operators will be granted priority access to the Anbox Cloud demo service. If you are a mobile telco innovator, sign up today. Evaluation licenses are available for companies that want to go one step further and develop a proof of concept. To accelerate your time to market, Canonical will be by your side for engineering support. Get in touch with us to learn more about our terms of commercialisation.

Ubuntu Blog: Canonical introduces Anbox Cloud – scalable Android™ in the cloud

2 days ago

Canonical today announced Anbox Cloud, a platform that containerises workloads using Android¹ as a guest operating system, enabling enterprises to distribute applications from the cloud. Anbox Cloud allows enterprises and service providers to deliver mobile applications at scale, more securely and independently of a device’s capabilities. Use cases for Anbox Cloud include cloud gaming, enterprise workplace applications, software testing, and mobile device virtualisation.

The ability to offload compute, storage and energy-intensive applications from devices (x86 and Arm) to the cloud enables end-users to consume advanced workloads by streaming them directly to their device. Developers can deliver an on-demand application experience through a platform that provides more control over performance and infrastructure costs, with the flexibility to scale based on user demand.

“Driven by emerging 5G networks and edge computing, millions of users will benefit from access to ultra-rich, on-demand Android applications on a platform of their choice,” said Stephan Fabel, Director of Product at Canonical. “Enterprises are now empowered to deliver high performance, high density computing to any device remotely, with reduced power consumption and in an economical manner.”

With cloud gaming adoption on the rise, Anbox Cloud enables graphics- and memory-intensive mobile games to be scaled to vast numbers of users while retaining the responsiveness and ultra-low latency demanded by gamers. By removing the need to download a game locally on a device, Anbox Cloud creates an on-demand experience for gamers while providing a protected content distribution channel for game developers.

Anbox Cloud enables enterprises to accelerate their digital transformation initiatives by delivering workplace applications directly to employees’ devices, while maintaining the assurance of data privacy and compliance. Enterprises can reduce their internal application development costs by providing a single application that can be used across different form factors and operating systems.

Developers can also utilise Anbox Cloud as part of their application development process to emulate thousands of Android devices across different test scenarios and for integration in CI/CD pipelines.

Anbox Cloud can be hosted in the public cloud for infinite capacity, high reliability and elasticity, or on a private cloud edge infrastructure where low latency and data privacy are a priority. Public and private cloud service providers can integrate Anbox Cloud into their offering to enable the delivery of mobile applications in a PaaS or SaaS model. Telecommunication providers can also create innovative value-added services based on virtualised mobile devices for their 4G, LTE and 5G mobile network customers.

Notes to editors:

Anbox Cloud is built on a range of Canonical technologies and runs Android on the Ubuntu 18.04 LTS kernel. Containerisation is provided by secure and isolated LXD system containers. LXD containers are lightweight, resulting in at least twice the container density compared to Android emulation in virtual machines – depending on streaming quality and/or workload complexity. A higher container density drives scalability up and unit economics down. MAAS is utilised for remote infrastructure provisioning and Juju provides automation tooling for easy deployment, management and reduced operational costs. The Ubuntu Advantage support programme is included with Anbox Cloud, providing continuous support and security updates for up to ten years.

Canonical partners with Packet, the leading cloud computing infrastructure provider, as an option to deploy Anbox Cloud on-premise or at target edge locations in the world. To provide the best experience with Anbox Cloud, Canonical collaborates with Ampere (ARM) and Intel (x86) as silicon partners. These hardware options are optimised to provide the best density, GPU models and cost efficiency to shorten the time to market for customers building their services on top of Anbox Cloud.

Partner quotes:

“As the vast library of Android and Arm-native applications continues to grow, developers need proven systems that provide scalable capacity, reliable performance and deployment flexibility. The combination of Ampere’s Arm-based servers with a provisioned virtualisation solution like Canonical’s Anbox Cloud delivers the flexible, high-performance and secure infrastructure that developers need in order to deliver a better user experience for consumers.”

Jeff Wittich, SVP of Products at Ampere

“Canonical’s inclusion of the Intel Visual Cloud Accelerator Card – Render as part of their newly launched Anbox Cloud solution will enable the delivery of enhanced cloud and mobile gaming experiences on Android devices, supporting an emerging industry opportunity today, and for the upcoming 5G era.”

Lynn Comp, Vice President, Data Platforms Group and General manager of the Visual Cloud Division, Intel

“With Anbox Cloud, Canonical is bringing to market a disruptive product that is both powerful and easy to consume. As small, low-powered devices inundate our world, offloading applications to nearby cloud servers opens up a huge number of opportunities for efficiency, as well as new experiences. We’re excited to support the Anbox Cloud team as they grow alongside the worldwide rollout of 5G.”

Jacob Smith, Co-founder and CMO at Packet

For more information on Anbox Cloud, visit anbox-cloud.io or download the joint whitepaper with Intel, “Cloud gaming for Android: Building a high performing and scalable platform”.

About Canonical  

Canonical is the publisher of Ubuntu, the OS for most public cloud workloads as well as the emerging categories of smart gateways, self-driving cars and advanced robots. Canonical provides enterprise security, support and services to commercial users of Ubuntu. Established in 2004, Canonical is a privately held company.

1. Android is a trademark of Google LLC. Anbox Cloud uses assets available through the Android Open Source Project.

Ubuntu Blog: Implementing an Android™ based cloud game streaming service with Anbox Cloud

2 days ago

From the outset, Anbox Cloud was developed to support a variety of use cases for running Android inside containers. Cloud gaming, more specifically for casual games as found on most users’ mobile devices, is the most prominent one and growing in popularity. Enterprises are challenged to find a solution that can keep up with increasing user demand, provide a rich experience and keep costs affordable while shortening the time to market.

Anbox Cloud brings Android from mobile devices to the cloud. This enables service providers to deliver a large existing ecosystem of games to more users, regardless of their device or operating system. Existing games can be moved to Anbox Cloud with zero to minimal effort.

Canonical has built Anbox Cloud upon existing technologies that allow for a higher container density compared to traditional approaches, which helps to reduce the overall cost of building and operating a game streaming service. The cost structure of a casual game, based in the cloud, also shows that density is key for profitability margins. To achieve density optimisation, three factors must be considered: container density (CPU load, memory capacity and GPU capacity), profitability and user experience optimisation. Additional considerations include choosing the right hardware to match the target workload, intended rendering performance and the pricing sensitivity of gamers. Finding the optimal combination for these factors and adding a layer of automation is crucial to improve profitability margins and to meet SLAs.

To further address specific challenges in cloud gaming, Canonical collaborates with key silicon and cloud partners to build optimised hardware and cloud instance types. Cloud gaming places high demands on various hardware components, specifically GPUs, which provide the underlying foundation for every video streaming solution. Utilising the available hardware at the highest density for cost savings requires optimisation at every layer. Anbox Cloud specifically helps to get the maximum out of the available hardware capacity: it keeps track of the resources consumed by all launched containers and optimises the placement of new containers based on available capacity and the resource requirements of each container.
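As a purely hypothetical sketch (this is not Anbox Cloud's actual scheduler, whose internals are not described here), capacity-aware placement of the kind described above can be as simple as a greedy best-fit over tracked per-node free resources:

```python
# Hypothetical illustration of capacity-aware container placement:
# choose the node with the most free resources that can still fit
# the new container's requirements, and update its free capacity.

def place(nodes, request):
    """nodes: {name: {"cpu": free_cores, "mem": free_mb}}
    request: {"cpu": cores, "mem": mb} for the new container.
    Returns the chosen node name (mutating its free capacity) or None."""
    fits = [n for n, free in nodes.items()
            if free["cpu"] >= request["cpu"] and free["mem"] >= request["mem"]]
    if not fits:
        return None  # no capacity anywhere: scale out or queue the request
    best = max(fits, key=lambda n: (nodes[n]["cpu"], nodes[n]["mem"]))
    nodes[best]["cpu"] -= request["cpu"]
    nodes[best]["mem"] -= request["mem"]
    return best

# Example cluster state (node names are made up for illustration)
cluster = {"gpu-node-1": {"cpu": 2, "mem": 1024},
           "gpu-node-2": {"cpu": 4, "mem": 4096}}
chosen = place(cluster, {"cpu": 3, "mem": 2048})  # lands on gpu-node-2
```

A real scheduler would also account for GPU capacity, anti-affinity and SLA constraints, but the core idea of matching requests against tracked free capacity is the same.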

Besides finding the right software and hardware platform, cloud gaming requires positioning the actual workload as close to the user as possible to reduce latency and ensure a consistent experience. To scale across different geographical regions, Anbox Cloud provides operational tooling and software components that simplify deployment without manual overhead and ensure users are automatically routed to their nearest location. Plugging individual regions dynamically into a control plane allows new regions to be added on the go without any downtime or manual intervention.

Anbox Cloud builds a high-density and easy-to-manage containerisation platform on top of the LXD container hypervisor which helps to minimise the time to market and reduce overall costs. It reflects Canonical’s deep expertise in cloud-native applications and minimises operational overhead in multiple ways. With the use of existing technologies from Canonical like Juju or MAAS, it provides a solid and proven platform which is easy to deploy and maintain. Combined with the Ubuntu Advantage support program from Canonical, an enterprise can ensure it gets long-term help whenever needed.

As differentiation is key in building a successful cloud gaming platform, Anbox Cloud provides a solid foundation which is extensible and fits into many different use cases. For example, integrating a custom streaming protocol is possible by writing a plug-in and integrating it via provided customising hooks into the containers which power Anbox Cloud. To make this process easy, Canonical provides an SDK, rich documentation with example plugins and engineering services to help with any development around Anbox Cloud.

In summary, Anbox Cloud provides a feature-rich, generic and solid foundation for building a state-of-the-art cloud gaming service, delivering optimal utilisation of the underlying hardware for the best user experience while keeping operational costs low.

If you’re interested in learning more, please come and talk to us.

Android is a trademark of Google LLC. Anbox Cloud uses assets available through the Android Open Source Project.

The Fridge: Ubuntu Weekly Newsletter Issue 614

2 days 9 hours ago

Welcome to the Ubuntu Weekly Newsletter, Issue 614 for the week of January 12 – 18, 2020. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

Stuart Langridge: Number word sequences

3 days 10 hours ago

I was idly musing about number sequences, and the Lychrel algorithm. If you don’t know about this, there’s a good Numberphile video on it: basically, take any number, reverse it, add the two, and if you get a palindrome stop, and if you don’t, keep doing it. So start with, say, 57, reverse to get 75, add them to get 57+75=132, which isn’t a palindrome, so do it again; reverse 132 to get 231, add to get 132+231=363, and that’s a palindrome, so stop. There are a bunch of interesting questions that can be asked about this process (which James Grime goes into in the video), among which are: does this always terminate? What’s the longest chain before termination? And so on. 196 famously hasn’t terminated so far and it’s been tried for several billion iterations.

Anyway, I was thinking about another such iterative process. Take a number, express it in words, then add up the values of all the letters in the words, and do it again. So 1 becomes ONE, and ONE is 15, 14, 5 (O is the fifteenth letter of the alphabet, N the fourteenth, and so on), so we add 15+14+5 to get 34, which becomes THIRTY FOUR, and so on. (We skip spaces and dashes; just the letters.)

Take a complete example: let’s start with 4.

  • 4 -> FOUR -> 6+15+21+18 = 60
  • 60 -> SIXTY -> 19+9+24+20+25 = 97
  • 97 -> NINETY-SEVEN -> 14+9+14+5+20+25+19+5+22+5+14 = 152
  • 152 -> ONE HUNDRED AND FIFTY-TWO -> 15+14+5+8+21+14+4+18+5+4+1+14+4+6+9+6+20+25+20+23+15 = 251
  • 251 -> TWO HUNDRED AND FIFTY-ONE -> 20+23+15+8+21+14+4+18+5+4+1+14+4+6+9+6+20+25+15+14+5 = 251

and 251 is a fixed point: it becomes itself. So we stop there, because we’re now in an infinite loop.
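The letter-sum step can be written directly in Python (turning the number into words is left to a library such as num2words, which the post uses later):

```python
def letter_sum(words):
    # A=1 ... Z=26; spaces and dashes contribute nothing
    return sum(ord(c) - ord("A") + 1 for c in words.upper() if c.isalpha())

# The chain from the worked example above:
chain = [letter_sum(w) for w in
         ["FOUR", "SIXTY", "NINETY-SEVEN",
          "ONE HUNDRED AND FIFTY-TWO", "TWO HUNDRED AND FIFTY-ONE"]]
# chain == [60, 97, 152, 251, 251] -- 251 maps to itself
```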

Do all numbers eventually go into a loop? Do all numbers go into the same loop — that is, do they all end up at 251?

It’s hard to tell. (Well, it’s hard to tell for me. Some of you may see some easy way to prove this, in which case do let me know.) Me being me, I wrote a little Python programme to test this out (helped immeasurably by the Python 3 num2words library). As I discovered before, if you’re trying to pick out patterns in a big graph of numbers which all link to one another, it’s a lot easier to have graphviz draw you pretty pictures, so that’s what I did.

I’ve run numbers up to 5000 or so (after that I got a bit bored waiting for answers; it’s not recreational mathematics if I have to wait around, it’s a job for which I’m not getting paid). And it looks like numbers settle out into a tiny island which ends up at 251, a little island which ends up at 285, and a massive island which ends up at 259, all of which become themselves1. (You can see an image of the first 500 numbers and how they end up; extending that up to 5000 just makes the islands larger, it doesn’t create new islands… and the diagrams either get rather unwieldy or they get really big and they’re hard to display.2)

I have a theory that (a) yes all numbers end up in a fixed point and (b) there probably aren’t any more fixed points. Warning: dubious mathematical assertions lie ahead.

There can’t be that many numbers that encode to themselves. This is both because I’ve run it up to 5000 and there aren’t, and because it just seems kinda unlikely and coincidental. So, we assume that the fixed points we have are most or all of the fixed points available. Now, every number has to end up somewhere; the process can’t just keep going forever. So, if you keep generating numbers, you’re pretty likely at some point to hit a number you’ve already hit, which ends up at one of the fixed points. And finally, the numbers-to-words process doesn’t grow as fast as actual numbers do. Once you’ve got over a certain limit, you’ll pretty much always end up generating a number smaller than oneself in the next iteration. The reason I think this is that adding more to numbers doesn’t make their word lengths all that much longer. Take, for example, the longest number (in words) up to 100,000, which is (among others) 73,373, or seventy-three thousand, three hundred and seventy-three. This is 47 characters long. Even if they were all Z, which they aren’t, it’d generate 47×26=1222, which is way less than 73,373. And adding lots more doesn’t help much: if we add a million to that number, we put one million on the front of it, which is only another 10 characters, or a maximum added value of 260. There’s no actual ceiling — numbers in words still grow without limit as the number itself grows — but it doesn’t grow anywhere near as fast as the number itself does. So the numbers generally get smaller as they iterate, until they get down below four hundred or so… and all of those numbers terminate in one of the three fixed points already outlined. So I think that all numbers will terminate thus.

The obvious flaw with this argument is that it ought to apply to the reverse-and-add process above too and it doesn’t for 196 (and some others). So it’s possible that my approach will also make a Lychrel-ish number that may not terminate, but I don’t think it will; the argument above seems compelling.

You might be thinking: bloody English imperialist! What about les nombres, eh? Or die Zahlen? Did you check those? Mais oui, I checked (nice one num2words for supporting a zillion languages!) Same thing. There are different fixed points (French has one big island until 177, a very small island to 232, a 258, 436 pair, and 222 which encodes to itself and nothing else encodes to it, for example. (Not quite: see the update at the end. Nothing changes about the maths, though.) Images of French and German are available, and you can of course use the Python 3 script to make your own; run it as python3 numwords.py no for Norwegian, etc.) You may also be thinking “what about American English, eh? 101 is ONE HUNDRED ONE, not ONE HUNDRED AND ONE.” I have not tested this, partially because I think the above argument should still hold for it, partially because num2words doesn’t support it, and partially because that’s what you get for throwing a bunch of perfectly good tea into the ocean, but I don’t think it’d be hard to verify if someone wants to try it.

No earth-shattering revelations here, not that it matters anyway because I’m 43 and you can only win a Fields Medal if you’re under forty, but this was a fun little diversion.

Update: Minirop pointed out on Twitter that my code wasn’t correctly highlighting the “end” of a chain, which indeed it was not. I’ve poked the code, and the diagrams, to do this better; it’s apparent that both French and German have most numbers end up in a fairly large loop, rather than at one specific number. I don’t think this alters my argument for why this is likely to happen for all numbers (because a loop of numbers which all encode to one another is about as rare as a single number which encodes to itself, I’d guess), but maybe I haven’t thought about it enough!

  1. Well, 285 is part of a 285, 267, 313, 248, 284, 285 loop.
  2. This is also why the graphs use neato, which is much less pleasing a layout for this than the “tree”-style layout of dot, because the dot images end up being 32,767 pixels across and all is a disaster.

Podcast Ubuntu Portugal: Watch the recording of the next episode of Podcast Ubuntu Portugal live

3 days 15 hours ago

With our constant goal of innovating, today, the day we record episode 74 of our favourite podcast, we will allow everyone who reads this post in time, and is available, to watch the recording of the PUP.

In the future this will be a privilege of patronage (it’s $1, come on!), but for now everyone can take part.

With this initiative we want to achieve 3 goals:

  • Give more love to our patrons;
  • Increase the number of followers we have on yt;
  • Increase the number of patrons.

If, at this point, you still feel like watching, just open this link a few minutes before 22.00:

Stuart Langridge: The tiniest of Python templating engines

3 days 22 hours ago

In someone else’s project (which they’ll doubtless tell you about themselves when it’s done) I needed a tiny Python templating engine. That is: I wanted to be able to say, here is a template string, please substitute a bunch of variables into it. Now, Python already does this, in about thirty different ways, and str.format or string.Template do most of it as built-in.

str.format works like this:

"My name is {name} and I am {age} years old".format(name="Stuart", age=43)

and string.Template like this:

t = string.Template("My name is $name and I am $age years old").safe_substitute(name="Stuart", age=43)

Both of which are pretty OK.

However, what they’re missing is loops; having more than one of a thing in your template, and looping over a list, substituting it each time. Every even fractionally-more-featureful templating system has this, whether Mustache or Jinja or whatever, of course, but I didn’t want another dependency. All I needed was str.format but with loops. So, I thought, I’ll write one, in about four lines of code, so I can just drop the function in to my Python file and then I’m good.

import re

def LoopTemplate(s, ctx):
    def loophandler(m):
        md = m.groupdict()
        return "".join([LoopTemplate(md["content"], val)
                        for val in ctx[md["var"]]])
    return re.sub(r"\{loop (?P<var>[^}]+)\}(?P<content>.*?)\{endloop\}",
                  loophandler, s, flags=re.DOTALL).format(**ctx)

And lo, twas so. So I can now do

LoopTemplate(
    "I am {name} and my imps' names are: {loop imps}{name}{endloop}",
    {
        "name": "Stuart",
        "imps": [
            {"name": "Pyweazle"},
            {"name": "Grimthacket"},
            {"name": "Hardebon"},
        ],
    },
)

and it all works. Not revolutionary, of course, but I was mildly pleased with myself.

Much internal debate about whether loophandler() should have been a lambda, but I eventually decided it was more confusing that way, on the grounds that it was confusing me and I knew what it was meant to be doing.

A brief explanation: re.sub lets you pass a function as the thing to replace with, rather than just a string. So we find all examples of {loop something}...{endloop} in the passed string, look up something in the “context”, or the dict of substitution variables you passed to LoopTemplate, and then we call LoopTemplate again, once per item in something (which is expected to be a list), and pass it the ... as its string and the next item in something as its context. So it all works. Of course, there’s no error handling or anything — if something isn’t present in the context, or if it’s not a list, or if you stray in any other way from the path of righteousness, it’ll incomprehensibly blow up. So don’t do that.
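The function-replacement behaviour of re.sub that the explanation above relies on can be seen in a tiny standalone example:

```python
import re

# re.sub accepts a function as the replacement: it is called once
# per match with the match object, and returns the replacement text.
doubled = re.sub(r"\d+", lambda m: str(int(m.group()) * 2), "3 and 7")
# doubled == "6 and 14"
```

LoopTemplate uses exactly this hook, except its replacement function recurses into the loop body instead of doubling numbers.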

Ubuntu Blog: Design and Web team summary – 17 January 2020

5 days 16 hours ago

The second iteration of this year is the last one before our mid-cycle sprint next week.

Here’s a short summary of the work the squads in the Web & Design team completed in the last 2-week iteration.

Web, Ubuntu and Brand squad

Web is the squad that develops and maintains most of the brochure websites across Canonical, and underpins the toolsets and architecture of our projects. They maintain the CI and deployment of all the websites we run. The Brand squad is tasked with updating and managing the overall style of Canonical, Ubuntu and the many fantastic products we create, both on- and offline.

New canonical.com website

Yesterday we released the new canonical.com website, which has been a few months in the making. The site is more succinct, consolidating the content into a single page, with clear, standout statements:

The largest piece of work was the new careers section, which provides a more interactive experience for discovering careers at Canonical:

Redesign of /download/server/thank-you

We’ve updated the thank-you page for downloading Ubuntu Server with a new form for signing up to our newsletter and also getting access to the CLI pro-tips 2020 cheatsheet.

451 Research: Kubernetes report

We’ve highlighted a new report from 451 Research on our homepage and with a dedicated page of its own.


MAAS squad

The MAAS squad develops the UI for the MAAS project.

The maas-ui team focused on two main areas this iteration: fixing UI bugs for the upcoming 2.7 release, and completing the first part of the work to import the main machine listing data into the React machine listing component. In addition, we spent a significant amount of time preparing for the upcoming sprint in South Africa, ensuring we have all the specification documents we need to discuss with engineers, and preparing a presentation on the work we’ve done so far this cycle.


JAAS squad

The JAAS squad develops the UI for the Charm Store and Juju GUI projects.

Controller view

The team worked on a first iteration of the Controller view for the new JAAS dashboard. This view is tailored for admins in particular, listing all the controllers under a given group or user.

‘Group by’ functionality

The team implemented the functionality of grouping the model list table of the JAAS dashboard by status (default), owner and clouds and regions.

User testing

During our product sprint in South Africa we will be doing some user testing of the JAAS dashboard with internal users, before expanding the target group to customers and community users. The results will help us understand the prioritisation of the implementation and possible feature requests.

CharmHub POC

The Snapcraft team implemented the design of the detail page of the new CharmHub store, exploring different front-end solutions in order to optimise the maintenance of both the current Snap store on Snapcraft.io and the new CharmHub.io.

UX and design explorations

The team explored different solutions for the graphs in the controller view of the JAAS dashboard, the side navigation and the table React component, working with the MAAS team on the definition of the patterns.


Vanilla squad

The Vanilla squad designs and maintains the design system and the Vanilla framework library. They ensure a consistent style throughout our web assets.

Multistage builds for docs.vanillaframework.io

We’ve been working on optimising our production builds recently. One of these optimisations is to use Docker’s BuildKit and multistage builds to both reduce image size and speed up subsequent builds.

This iteration we applied these enhancements to the build for docs.vanillaframework.io to improve the site’s release process.

Styling of the range input

Our existing Slider component was simply styling applied to the HTML range input, so to keep consistency with the rest of the native form inputs we removed the need for the p-slider class name. Any range input will now get Vanilla styling automatically.

This change will be live with the next version of Vanilla framework.

Encapsulating components

To make sure all of our components can be included and built independently from each other we started the work on encapsulating component styles, building them individually and making sure we have example pages for each individual component stylesheet.

This will allow us to make sure we don’t introduce any unnecessary dependencies between patterns in the future.


Snapcraft squad

The Snapcraft squad works closely with the Snap Store team to develop and maintain the Snap Store website.

Integrating automated builds into snapcraft.io

We want to gradually import functionality from build.snapcraft.io into snapcraft.io. We have added authentication with GitHub and given publishers the ability to link a GitHub repository with a snap; this is done through a call to the Launchpad API.

Ubuntu Blog: 5 key steps to take your IoT device to market

5 days 16 hours ago

IoT businesses are notoriously difficult to get off the ground. No matter how good your product is or how good your team is, some of the biggest problems you will face are just in getting to market and maintaining your devices once they’re in the field. The webinar will take a look at how Canonical’s Brand Store product allows you to get to market while catering for long term problems and the need to keep your product up to date in the future.

More specifically, this webinar will look at the common problems we see organisations facing on their way to getting an IoT device to market, and cover five key steps to solve these problems. Along the way we will dig a little into several case studies Canonical has done with various customers and partners, to show you what has already been achieved with these solutions.

Watch the webinar

Kubuntu General News: Plasma 5.18 LTS Beta (5.17.90) Available for Testing

5 days 22 hours ago

Are you using Kubuntu 19.10 Eoan Ermine, our current Stable release? Or are you already running our development builds of the upcoming 20.04 LTS Focal Fossa?

We currently have Plasma 5.17.90 (Plasma 5.18 Beta) available in our Beta PPA for Kubuntu 19.10.

The 5.18 beta is also available in the main Ubuntu archive for the 20.04 development release, and can be found on our daily ISO images.

This is a Beta Plasma release, so testers should be aware that bugs and issues may exist.

If you are prepared to test, then…

For 19.10 add the PPA and then upgrade

sudo add-apt-repository ppa:kubuntu-ppa/beta && sudo apt update && sudo apt full-upgrade -y

Then reboot. If you cannot reboot from the application launcher,

systemctl reboot

from the terminal.

In case of issues, testers should be prepared to use ppa-purge to remove the PPA and revert/downgrade packages.

Kubuntu is part of the KDE community, so this testing will benefit both Kubuntu and upstream KDE Plasma software, which is used by many other distributions too.

  • If you believe you might have found a packaging bug, you can use launchpad.net to post testing feedback to the Kubuntu team as a bug, or give feedback on IRC [1], Telegram [2] or the mailing lists [3].
  • If you believe you have found a bug in the underlying software, then bugs.kde.org is the best place to file your bug report.

Please review the release announcement and changelog.

[Test Case]

* General tests:
– Does the Plasma desktop start as normal, with no apparent regressions over 5.16 or 5.17?
– General workflow – testers should carry out their normal tasks, using the Plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend, etc.

* Specific tests:
– Check the changelog:
– Identify items with front/user facing changes capable of specific testing. e.g. “clock combobox instead of tri-state checkbox for 12/24 hour display.”
– Test the ‘fixed’ functionality or ‘new’ feature.

Testing involves some technical set-up, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.

Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

We need your help to get this important beta release in shape for Kubuntu and the KDE community as a whole.


Please stop by the Kubuntu-devel IRC channel or Telegram group if you need clarification of any of the steps to follow.

[1] – irc://irc.freenode.net/kubuntu-devel
[2] – https://t.me/kubuntu_support
[3] – https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel

Podcast Ubuntu Portugal: Ep 73 – WSL by Nuno do Carmo (part 1)

6 days 9 hours ago

Episode 73 – WSL by Nuno do Carmo (part 1). 2 Ubuntus and 1 Windows walk into a bar and… This could be the start of yet another joke, but what really happened was more like: 2 Ubuntus and 1 Windows walk into a podcast and start talking non-stop about WSL, and more besides, to the point that the conversation was cut short and will have to be continued in the next episode. You know the drill: listen, comment and share!

  • https://meta.wikimedia.org/wiki/WikiCon_Portugal
  • https://www.humblebundle.com/books/python-machine-learning-packt-books?partner=PUP
  • https://www.humblebundle.com/books/holiday-by-makecation-family-projects-books?partner=PUP
  • https://stackoverflow.com/questions/56979849/dbeaver-ssh-tunnel-invalid-private-key
  • https://fosdem.org
  • https://github.com/PixelsCamp/talks
  • https://pixels.camp/

This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound capture, production, editing, mixing and mastering). Contact: thunderclawstudiosPT–at–gmail.com.

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts depending on whether you pay 1 or 8.
We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you want.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)”, by Alpha Hydrae, and is licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.
