Google has quietly removed Google Clips, an unsettling AI camera, from its Play Store, along with some other kit.…
US prosecutors say a South Korean man was behind the largest child-abuse image-swapping operation yet found on the internet.…
A compatibility issue between VMware's ESXi hypervisor and Windows Server 2019 will leave some customers unable to safely snapshot their virtual machines.…
Updated 123-Reg, which promotes its “award-winning 24/7 UK support” as a selling point for its service, is suffering a sustained and ongoing email service outage – with Reg readers claiming the company has shut off its online support channels.…
Huawei has continued to rake in the big bucks in spite of continued dark mutterings over what may or may not be lurking within its code.…
Two key programmes related to the UK border's preparedness for Brexit pose "significant risks", the National Audit Office has said.…
Alternative search engine DuckDuckGo has announced improvements to its search options and an enhanced dark theme, but its tiny market share shows that most people are content to stick with Google, despite privacy issues.…
Controversial plans for mandatory age verification controls for pornographic websites have been scrapped, UK culture secretary Nicky Morgan announced today.…
Sierra Nevada Corporation (SNC) has wheeled out the first production version of its Dream Chaser spacecraft ahead of a 2021 mission to the International Space Station.…
Private equity monster Apollo Global Management has reportedly made an approach to buy Tech Data, one of the world's largest tech distributors, with a bid believed to be almost $5bn.…
A former TalkTalk programme director is crowdfunding the legal costs of an equal pay claim against the budget ISP.…
Amazon has turned off its final Oracle database, completing a migration effort that has involved "more than 100 teams" in the consumer biz.…
Bosses worldwide will be rejoicing after a British academic declared that banning work email use out of hours could negatively affect underlings' mental health.…
Puppetize PDX 2019 Despite standing squarely in the path of the GitLab juggernaut, DevOps automation outfit Puppet is betting that a one-size-fits-all approach will end up fitting nobody particularly well.…
NASA has unveiled two new space suit designs for future astronauts on its Artemis program, a mission to send “the first woman and the next man" to the surface of the Moon by 2024.…
Promo As the world gravitates towards cloud, edge computing, the internet of things, DevOps, and AIOps, the ground is shifting for infrastructure and operations teams. Their organisations must stay agile to keep up with the changing digital landscape.…
GitLab, a San Francisco-based provider of hosted git software, recently changed its company handbook to declare it won't ban potential customers on "moral/value grounds," and that employees should not discuss politics at work.…
Netizens are scrambling to find or build alternatives to Meetup.com – after the event-organizing app maker indicated it would charge people $2 per RSVP.…
Docker says its services are back up and running after a Tuesday morning outage briefly left some developers unable to access its centralized Hub registry service.…
Video At a press event in New York City on Tuesday, Google announced its Pixel 4 phone, revised Pixel Bud earphones, its Pixelbook Go laptop, a revision and rebranding of its Wi-Fi mesh router as a Nest product, and a tweaked Nest Mini smart speaker.…
The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list or visit the Ubuntu Server discourse hub for more discussion.
Spotlight: Ubuntu 19.10 (Eoan Ermine) release imminent
The final testing and certification of Ubuntu 19.10 (Eoan Ermine) are nearly complete! Check out the release notes for a preview of what will be available shortly.
cloud-init
Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page. For a full list of recent merges with change logs please see the Ubuntu Server report.
Proposed Uploads to the Supported Releases
Ansible vs Terraform vs Juju vs Chef vs SaltStack vs Puppet vs CloudFormation – there are so many tools available out there. What are these tools? Do I need all of them? Are they fighting with each other or cooperating?
The answer is not really straightforward. It usually depends on your needs and the particular use case. While some of these tools (Ansible, Chef, SaltStack, Puppet) are pure configuration management solutions, the others (Juju, Terraform, CloudFormation) focus more on service orchestration. For the purpose of this blog, we’re going to focus on the Ansible vs Terraform vs Juju comparison – the three major players which have dominated the market.
Ansible
Ansible is a configuration management tool, currently maintained by Red Hat Inc. Although the core project is open-source, some commercial extensions, such as Ansible Tower, are available too. By supporting a variety of modules, Ansible can be used to manage both Unix-like and Windows hosts. Its architecture is serverless and agentless. Instead of using proprietary communication protocols, Ansible relies on SSH or remote PowerShell sessions to perform configuration tasks.
The tool implements an imperative DevOps paradigm. This means that Ansible users are responsible for defining all of the steps required to achieve their desired goal. This includes writing instructions on how to install applications, preparing templates of configuration files, etc. All these steps are usually implemented in a form of so-called playbooks, however, users can execute ad hoc commands too. Once written, the playbooks can be used to automate configuration tasks across multiple machines in various environments.
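To make the playbook idea concrete, here is a minimal, hypothetical example; the host group, package and template names are illustrative, not taken from any real project:

```yaml
# Hypothetical playbook: install and start nginx on hosts in the "web" group.
# All names (group, package, template path) are illustrative.
- hosts: web
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
        update_cache: true

    - name: Deploy site configuration from a template
      template:
        src: templates/site.conf.j2
        dest: /etc/nginx/conf.d/site.conf
      notify: Restart nginx

  handlers:
    - name: Restart nginx
      service:
        name: nginx
        state: restarted
```

You would run such a playbook against an inventory with `ansible-playbook -i inventory site.yml`; note how every step is spelled out imperatively.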
Although perfectly suited for traditional configuration management, Ansible cannot really orchestrate services. It was simply designed for different purposes, with automation at its core. Moreover, some of its modules are cloud-specific, which makes a potential migration from one platform to another difficult. Finally, due to its imperative nature, Ansible does not scale well in large environments consisting of various interconnected applications.
Terraform
In turn, Terraform is an open-source IaC (Infrastructure-as-Code) solution that was developed by HashiCorp. It allows users to provision and manage cloud, infrastructure, and service resources using a simple, human-readable configuration language called HCL (HashiCorp Configuration Language). The resources are delivered by so-called providers. At the moment Terraform supports over 200 providers, including public clouds, private clouds and various SaaS (Software-as-a-Service) offerings, such as DNS, MySQL or Vault.
Terraform uses a declarative DevOps paradigm, which means that instead of defining the exact steps to be executed, the ultimate state is defined. This is a big step forward compared to traditional configuration management tools. However, Terraform’s declarative approach is limited to providers only. The applications being deployed still have to be installed and configured using traditional scripts and tools. Of course, pre-built images can be used too when deploying applications in cloud environments; those can later be customised according to users’ requirements.
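As a sketch of this declarative style, a minimal, hypothetical HCL configuration might look like the following; the region, AMI ID and tag are placeholders, not values from this post:

```hcl
# Hypothetical Terraform configuration: declare the desired state
# (one EC2 instance); Terraform works out the steps to get there.
provider "aws" {
  region = "eu-west-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"  # placeholder image ID
  instance_type = "t3.micro"

  tags = {
    Name = "web"
  }
}
```

Running `terraform plan` shows the computed steps, and `terraform apply` converges the infrastructure to the declared state.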
In addition to the initial deployment, Terraform can also be used to orchestrate deployed workloads. This functionality is provided by its execution plans and resource graphs. Thanks to execution plans, users can define the exact steps to be performed and the order in which they will be executed. In turn, resource graphs allow users to visualise those plans. Again, this is much more than Ansible can do.
Juju
Contrary to both Ansible and Terraform, Juju is an application modelling tool, developed and maintained by Canonical. You can use it to model and automate deployments of even very complex environments consisting of various interconnected applications. Examples of such environments include OpenStack, Kubernetes or Ceph clusters. Apart from the initial deployment, you can also use Juju to orchestrate the deployed services. Thanks to Juju, you can back up, upgrade or scale out your applications as easily as executing a single command.
Like Terraform, Juju uses a declarative approach, but it takes it beyond the providers, up to the application layer. You can declare not only the number of machines to be deployed or the number of application units, but also configuration options for the deployed applications, the relations between them, etc. Juju takes care of the rest. This allows you to focus on shaping your applications instead of struggling with exact routines and recipes for deploying them. Forget the “How?” and focus on the “What?”.
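A hedged sketch of that model-driven workflow, using well-known charms as examples (exact commands and charm names may vary by Juju version):

```shell
# Declare what you want; Juju works out how.
$ juju bootstrap aws mycontroller       # spin up a controller on a provider
$ juju deploy mysql                     # deploy a charm with its operational logic
$ juju deploy wordpress
$ juju add-relation wordpress mysql     # declare the relation; the charms configure it
$ juju add-unit -n 2 wordpress          # scale out with a single command
$ juju status                           # watch the model converge
```

Nowhere above do you describe how MySQL is installed or how WordPress finds its database; that knowledge lives in the charms.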
The real power of Juju lies in charms – collections of scripts and metadata which contain a distilled knowledge of experts from Canonical and other companies. Charms contain all necessary logic required to install, configure, interconnect and operate applications. Canonical maintains a Charm Store with over 400 charms, but you can also write your own charms. This is because the whole framework and ecosystem is fully open-source.
While Juju’s role is to deploy and orchestrate applications, like Terraform it relies on a variety of providers to spin up the machines (bare metal, VMs or containers) hosting those applications. The supported providers include the leading public clouds (AWS, Google Cloud, Azure, etc.) and various on-premise providers: LXD, MAAS, VMware vSphere, OpenStack and Kubernetes. In the rare case that your cloud environment is not natively supported by Juju, you can use the manual provider to let Juju deploy applications on top of machines you have provisioned yourself.
Ansible vs Terraform vs Juju
Now, as we’ve arrived at the last section of this blog, can we somehow compare Ansible vs Terraform vs Juju? The answer is short – we cannot. This is because all of them were designed for different purposes and with a different focus in mind. It is fair to say that, in a way, they form an evolution path of lifecycle management frameworks. It is really hard to perform an Ansible vs Terraform vs Juju comparison, then, as each of them is quite different.
Thus, if we cannot compare them, let’s maybe get back to the original questions and try to answer them instead.
Do I need all of those tools?
Well, it really depends on your use case, so let’s try to sum up what these tools are for. Ansible is a configuration management tool and fits very well wherever traditional automation is required. On the other hand, Terraform focuses more on infrastructure provisioning, assuming that applications will be delivered in the form of pre-built images. Finally, Juju takes a completely different approach by using charms for application deployments and operations.
Are they fighting with each other or cooperating?
There are definitely areas in which they cooperate. For example, Juju charms can use Ansible playbooks to maintain configuration files. Or you can use Juju-deployed applications (e.g. OpenStack) as a provider for Terraform. As data centres become more and more complex, there is definitely space for all of them. This is because all of them are great at what they do and what they were designed for.
I want Juju, what next?
If you want to evolve your DevOps organisation and benefit from a model-driven, declarative approach to application deployment and operations, Juju is the answer. Simply visit the Juju website, watch the “Introduction To Juju: Automating Cloud Operations” webinar or contact us directly. Canonical’s DevOps experts are waiting to help you move forward with the transformation of your organisation.
We are so excited about what just happened that we felt we should tell everyone about it!
A group of 24 of us at Canonical from various teams including Sales, HR and Engineering, attended the Grace Hopper Celebration in Orlando, Florida. This year, it was an epic gathering of more than 26,000 people from all over the globe interested in tech. Despite computing's start as women's work, the tech industry has gained a reputation for being dominated by and mostly suited to men. If anything, this only made the Grace Hopper conference feel more impactful, especially knowing that at its very first edition in 1994, only 500 women were present. The Grace Hopper Conference was an awesome celebration of women: diverse, multi-talented, and deeply skilled!
Both women and men, mostly students, interested in everything from security to machine learning came by the Canonical booth to hear about Ubuntu. We brought along an Orange Box so we could demo MAAS, OpenStack, and other incredible technologies happening on Ubuntu at Canonical.
We rotated between attending informative and inspiring sessions; exploring an exhibition hall pulsating with energy, with booths as far as the eye can see; and discussing Canonical offerings and job opportunities at our booth.
The week had many highlights. We discussed various technologies with others in the industry, scoped out exceptional talent for Canonical job opportunities, and visited various booths to find out who uses Ubuntu and what for. We also gave out Ubuntu trinkets and collected bags of trinkets from others. Perhaps our favourite part was just hanging out with fellow Canonical’ers on the various teams and getting to know what they work on.
All of us had the opportunity to share what we do and what we love about working for Canonical, the company behind Ubuntu. It was interesting for us that most of the people we met did not know the name ‘Canonical’, but knew and worked regularly with Ubuntu. Someone even said: “Ubuntu is the reason I chose this career!” and was very excited to talk to the people behind it.
Meeting that many smart women in tech made us realise that we are not alone. Every one of us has the capacity to contribute and drive change. #WeWill make a difference. See you next year at GHC 2020!
This was a fairly busy two weeks for the Web & design team at Canonical. This cycle we had two sprints. The first was a web performance workshop run by the amazing Harry Roberts. It was a whirlwind two days where we learned a lot about networking, browsers, font loading and more. We also spent a day working on implementing a lot of the changes. Hopefully our sites will feel a bit faster. More updates will be coming over the next few months. The second sprint was for the Brand and Web team, where we looked at where the Canonical and Ubuntu brands need to evolve. Here are some of the highlights of our completed work.
Web squad
Web is the squad that develops and maintains most of the brochure websites across Canonical.
Takeovers and engage pages
This iteration we built two webinars with engage pages and two more case study engage pages.
Deep Tech webinar
We built a new homepage takeover along with an engage page to learn more about the webinar.
Intro to edge computing webinar series
We created a homepage takeover that leads to an engage page with a series of webinars people can watch about computing at the edge.
Yahoo! Japan case study
We posted a new case study about how Canonical works with Yahoo! Japan and their IaaS platform.
Domotz case study
We posted a new case study about how Canonical has helped Domotz with their IoT strategy.
Base
Base is the team that underpins the toolsets and architecture of our projects. They maintain the CI and deployment of all the websites we maintain.
HTTP/2 and TLS v1.3 for Kubernetes sites
Back in August, a number of vulnerabilities were discovered in HTTP/2 which opened up some denial-of-service (DoS) possibilities. In response, we disabled HTTP/2 for our sites until the vulnerabilities were fixed.
This iteration, the NGINX Ingress controller on our k8s cluster was updated, so our sites are now served with the latest version of openresty, which includes all relevant fixes for these earlier vulnerabilities. In response, we’ve re-enabled HTTP/2, which was also a strong performance recommendation from Harry during the workshop.
Our canonicalwebteam.image-template module provides a template function which outputs <img> element markup in a recommended format for performance.
The performance workshop highlighted a number of best practices which we used to improve the module and release v1.0.0:
Many of our sites (particularly snapcraft.io, jaas.ai, ubuntu.com/blog and certification.ubuntu.com) rely heavily on pulling their data from an API. For these sites, the responsiveness of those APIs is central.
This iteration, we have enhanced our Graylog installation to read these metrics from logs and output beautiful graphs of our APIs.
MAAS
The MAAS squad develops the UI for the MAAS project.
Our team continues the work of separating the UI from maas-core. We have very nearly completed moving the settings section to React and are also working on converting the user preferences tab.
We are also progressing with the work on network testing. The core functionality is all complete now and we’re ironing out some final details.
As part of the work on representing NUMA topology in MAAS, we completely redesigned the machine summary page, which was implemented this iteration.
We are also experimenting with introducing white background to MAAS as well as the rest of the suite of websites and applications we create. This work is ongoing.
The team continued working on the new JAAS dashboard, moving forward the design with explorations on responsiveness, interactions, navigation, and visuals.
The team also continued working on the Juju website, and the alignment between the CLI commands of Juju, Snap, Charm and Snapcraft. CharmHub-wise, the team explored the home page of the new website charmhub.io, to start defining the content and the user experience of the page and navigation.
Snapcraft
The Snapcraft team works closely with the Snap Store team to develop and maintain the Snap Store website.
The headline story from the last iteration is the improvement to overall page load times, but specifically the store page. With some code organisation, and the aforementioned image-template module, we’ve managed to drop the initial load time of the store page from an average of ~15s to ~5s (or quicker, as in the video above).
Faster Snap browsing for everyone!
In September, 212.75 work hours have been dispatched among 12 paid contributors. Their reports are available:
September was more like a regular month again, though two contributors were not able to dedicate any time to LTS work.
For October we are welcoming Utkarsh Gupta as a new paid contributor. Welcome to the team, Utkarsh!
This month, we’re glad to announce that Cloudways is joining us as a new silver-level sponsor! With the reduced involvement of another long-term sponsor, we are still at the same funding level (roughly 216 hours sponsored per month).
New sponsors are in bold.
Welcome to the Ubuntu Weekly Newsletter, Issue 600 for the week of October 6 – 12, 2019. The full version of this issue is available here.
In this issue we cover:
The Ubuntu Weekly Newsletter is brought to you by:
Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License
Ubucon Europe: What is the Ubuntu community doing in Sintra? Sharing technical knowledge and tightening connections
News about the new Ubuntu release, explorations of several platforms and many “how-tos” fill the four-day agenda, where open source and open technologies are in the air.
The Olga Cadaval Cultural Centre in Sintra is the main stage of a busy agenda filled with talks and more technical sessions, but at Ubucon Europe there is also room for networking and cultural visits – a curious fusion between spaces full of history, like the Pena Palace or the Quinta da Regaleira, and one of the youngest “players” in the world of software.
For 4 days, the international Ubuntu community gathers in Sintra for an event open to everyone, where open source principles and open technology dominate. The Ubucon Europe conference began on Thursday, October 10th, and extends until Sunday, October 13th, keeping an open-doors policy for everyone who wants to attend.
After all, what is the importance of Ubucon? The number of participants, which should be around 150, doesn’t tell the whole story of what you can learn during these days, as SAPO TEK had the opportunity to check this morning.
Organised by the Ubuntu Portugal community, with the National Association for Open Software, the Ubuntu Europe Federation and the Sintra Municipality, the conference brings to Portugal some of the biggest open source specialists and shows that Ubuntu is indeed alive, even if not yet known by most people, and still far from the “world domination” aspired to by some.
15 years of Ubuntu
This year marks Ubuntu’s 15th birthday: it was created in 2004 by Mark Shuttleworth, the South African entrepreneur who gathered a team of Debian developers and founded Canonical with the purpose of developing an easy-to-use Linux distribution. He called it Ubuntu, a word from the Zulu and Xhosa languages meaning “I am because we are”, which reflects its social dimension.
The millionaire Mark Shuttleworth declared at the time: “my motivation and goal is to find a way of creating a global operating system for desktops which is free in every way, but also sustainable and of a quality comparable to any other that money can buy”.
And in the last 15 years Ubuntu hasn’t stopped growing, following trends and moving from the desktop and servers to the cloud, the IoT and even phones. Canonical ended up withdrawing from the latter, leaving its development in UBports’ hands.
“Ubuntu has never been better”, states Tiago Carrondo, head of the Ubuntu Portugal community, explaining that cloud usage is growing every month and the same is happening on the desktop. “The community has proved to be alive and participative”, and Ubucon is an example of that capacity to deliver and to be involved in projects.
A new version of Ubuntu is going to be launched in two weeks (October 19th), and in April next year it’s time for Ubuntu 20.04, the new LTS version which is generating expectations and is the focus of several talks during Ubucon.
An operating system not just for ‘geeks’
But is this a subject just for “geeks” who don’t mind getting their hands dirty and messing with code to adapt the operating system to their needs? Gustavo Homem, CTO of Ângulo Sólido, assures that Ubuntu is increasingly being used by companies and that, on the Azure, AWS and DigitalOcean clouds, it is among the most used operating systems, highlighting its ease of use, flexibility and security.
Ângulo Sólido uses Ubuntu internally and with its clients, from desktops to routers and cloud solutions, and during Ubucon it presented the most and least expected uses for Ubuntu, including some hacks involving mixing desks.
It’s in the cloud that Ubuntu has grown the most, thanks to the freedom the operating system offers; on desktop and laptop computers it depends on manufacturers’ willingness to sell devices with a pre-installed operating system, or without any, leaving room for installing Ubuntu.
However, even if it’s easy to use, increasingly ready to connect to all kinds of peripherals and supports most of the software on the market, Ubuntu is far from being recognised by the majority of computer users, so its use remains confined to a restricted group of people with more technical training and knowledge.
On mobile phones, where in 2014 there was a movement to create an operating system that could be an alternative to Android and iOS, the abandonment of the project by Canonical didn’t help create a mass movement involving manufacturers. The UBports community continues developing the concept and the code, and during Ubucon it showed some news and developments with the Fairphone and Pine64, but it’s still far from becoming a solid operating system you can fully trust, as Jan Sprinz admitted.
In the audience of the talk SAPO TEK attended there were many users of Ubuntu Touch, the mobile operating system, but with doubts and concerns, such as the availability of the most-used apps. Nevertheless the operating system is cherished; someone even compared it to a pet which may destroy the living room and chew the shoes, but which the owner never stops loving.
How do you do an Ubucon?
“We wanted to make a memorable Ubucon”, explains Tiago Carrondo, the face of the organisation, who during the last few months dedicated much of his time to preparing all the logistics as part of a very small but very committed team, as he told SAPO TEK.
The European event is now in its 4th edition. It arose spontaneously, inside the community, and after Germany (Essen), France (Paris) and Spain (Xixón), Portugal is the 4th country hosting it, with the purpose of “having an Ubucon without rain”. From here the community moves on, in 2020, to a new location, which should be revealed this week but is still a well-kept secret for now.
Characterising Ubuntu Portugal as a community of people, Tiago Carrondo explains that companies are “friends”, appearing as associates and sponsors of the event, which also has connections with educational institutions.
People are at the centre of the organisation and the purpose of Ubucon, so there’s a very big social component, allowing volunteers who work on Ubuntu projects during the entire year to meet face to face and share experiences and knowledge. For that reason, the schedule was designed to start a little later than usual, around 10 am, and to finish early, with a long pause for lunch.
The conference ends tomorrow, but those who want to attend the last presentations at the Olga Cadaval Cultural Centre in Sintra can still do so, by registering or simply showing up at the venue, because the organisation’s policy is open doors and respect for privacy.
Those who didn’t have the chance to attend will be able to watch everything on video over the next few weeks. Tiago Carrondo explains that they didn’t want to stream it, but everything is being recorded, will be edited and will be available soon.
Adoption of edge computing is taking hold as organisations realise the need for highly distributed applications, services and data at the extremes of a network. Whereas data historically travelled back to a centralised location, data processing can now occur locally, allowing for real-time analytics, improved connectivity and reduced latency, and ushering in the ability to harness newer technologies that thrive in the micro data centre environment.
In an earlier post, we discussed the importance of choosing the right primitives for edge computing services. When looking at use-cases calling for ultra-low latency compute, Kubernetes and containers running on bare metal are ideal for edge deployments because they offer direct access to the kernel, workload portability, easy upgrades and a wide selection of possible CNI choices.
While offering clear advantages, setting up Kubernetes for edge workload development can be a difficult task – time and effort better spent on actual development. The steps below walk you through an end-to-end deployment of a sample edge application. The application runs on top of Kubernetes with advanced latency budget optimisation. The deployed architecture includes Ubuntu 18.04 as the host operating system, Kubernetes v1.15.3 (MicroK8s) on bare metal, the MetalLB load balancer and CoreDNS to serve external requests.
Let’s roll
Summary of steps:
Let’s start with the development workstation Kubernetes deployment using MicroK8s by pulling the latest stable edition of Kubernetes.
$ sudo snap install microk8s --classic
microk8s v1.15.3 from Canonical✓ installed
$ snap list microk8s
Name Version Rev Tracking Publisher Notes
microk8s v1.15.3 826 stable canonical✓ classic
As I’m deploying Kubernetes on a bare metal node, I chose to utilise MetalLB, as I won’t be able to rely on a cloud to provide an LBaaS service. MetalLB is a fascinating project supporting both L2 and BGP modes of operation and, depending on your use case, it might just be the thing for your bare metal development needs.
$ microk8s.kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml
Once installed, you need to make sure to update the iptables configuration to allow IP forwarding, and configure MetalLB with a networking mode and the address pool you want to use for load balancing. The config files need to be created manually; please see Listing 1 below for reference.
$ sudo iptables -P FORWARD ACCEPT
Listing 1: MetalLB configuration (metallb-config.yaml)
- name: default
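Listing 1 survives here only as a fragment. A plausible L2-mode configuration for MetalLB v0.7.x, using the address pool from this walkthrough, might look like the following; this is a reconstruction under those assumptions, not the author's exact file:

```yaml
# Reconstructed sketch of metallb-config.yaml (L2 mode assumed).
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.0.2.32/28
```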
Now that you have your config file ready, you can continue with the CoreDNS sample workload configuration. Especially for edge use cases, you usually want fine-grained control over how your application is exposed to the rest of the world. This includes ports as well as the actual IP address you would like to request from your load balancer. For the purpose of this exercise, I use the .35 IP address from the 10.0.2.32/28 subnet and create a Kubernetes service using this IP.
Listing 2: CoreDNS external service definition (coredns-service.yaml)
- name: coredns
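Listing 2 is likewise fragmentary. A LoadBalancer service consistent with the description (port 53, requesting the .35 address from MetalLB) could be sketched as follows; everything beyond the surviving fragment is an assumption:

```yaml
# Reconstructed sketch of coredns-service.yaml.
apiVersion: v1
kind: Service
metadata:
  name: coredns
spec:
  type: LoadBalancer
  loadBalancerIP: 10.0.2.35   # the .35 address requested from MetalLB
  selector:
    app: coredns
  ports:
  - name: dns-udp
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
```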
For the workload configuration itself, I use a simple DNS cache configuration with logging and forwarding to Google’s open resolver service.
Listing 3: CoreDNS ConfigMap (coredns-configmap.yaml)
forward . 8.8.8.8
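Listing 3 also survives only in fragments. A minimal Corefile ConfigMap consistent with the description (caching, logging, forwarding to Google's open resolver) could look like this reconstruction:

```yaml
# Reconstructed sketch of coredns-configmap.yaml.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
data:
  Corefile: |
    .:53 {
        log
        errors
        cache
        forward . 8.8.8.8
    }
```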
Finally, the description of our Kubernetes deployment, calling for 3 workload replicas, the latest CoreDNS image and the configuration defined in our ConfigMap.
Listing 4: CoreDNS Deployment definition (coredns-deployment.yaml)
- name: coredns
args: [ "-conf", "/etc/coredns/Corefile" ]
- name: config-volume
- containerPort: 53
- containerPort: 53
- name: config-volume
- key: Corefile
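Listing 4 again reaches us only in fragments. A deployment consistent with them (3 replicas, the latest CoreDNS image, the Corefile mounted from the ConfigMap) might be sketched as follows; field values beyond the surviving fragments are assumptions:

```yaml
# Reconstructed sketch of coredns-deployment.yaml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
spec:
  replicas: 3
  selector:
    matchLabels:
      app: coredns
  template:
    metadata:
      labels:
        app: coredns
    spec:
      containers:
      - name: coredns
        image: coredns/coredns:latest
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          protocol: UDP
        - containerPort: 53
          protocol: TCP
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
```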
With all the service components defined, prepared and configured, you’re ready to start the actual deployment and verify the status of Kubernetes pods and services.
$ microk8s.kubectl apply -f metallb-config.yaml
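Assuming the file names used in the listing titles above, the remaining manifests would be applied and verified along these lines (the exact order shown is an assumption):

```shell
$ microk8s.kubectl apply -f coredns-configmap.yaml
$ microk8s.kubectl apply -f coredns-deployment.yaml
$ microk8s.kubectl apply -f coredns-service.yaml
$ microk8s.kubectl get pods -o wide
$ microk8s.kubectl get services
```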
Once all the containers are fully operational, you can evaluate how your new end-to-end service is performing. As you can see, the very first request takes around 50ms to get answered (which aligns with the usual latency between my ISP access network and Google’s DNS infrastructure); however, subsequent requests show a significant latency reduction, as expected from a local DNS caching instance.
$ host -a www.ubuntu.com 10.0.2.35
Using domain server:
Received 288 bytes from 10.0.2.35#53 in 50 ms
$ host -a www.ubuntu.com 10.0.2.35
Using domain server:
Received 288 bytes from 10.0.2.35#53 in 0 ms
$ host -a www.ubuntu.com 10.0.2.35
Using domain server:
Received 288 bytes from 10.0.2.35#53 in 1 ms
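The pattern in the transcript above (one slow upstream lookup, then near-instant cache hits) can be illustrated with a toy memoised resolver in Python; the delay, domain and placeholder answer are illustrative, not real DNS behaviour:

```python
import time
from functools import lru_cache

UPSTREAM_DELAY = 0.05  # assumed upstream round trip, ~50 ms

@lru_cache(maxsize=None)
def resolve(name: str) -> str:
    """Toy resolver: the first call pays the upstream delay, repeats are cached."""
    time.sleep(UPSTREAM_DELAY)      # simulate the forward to the upstream resolver
    return "203.0.113.10"           # placeholder answer (TEST-NET-3 address)

start = time.monotonic()
resolve("www.ubuntu.com")
first = time.monotonic() - start    # pays the upstream delay

start = time.monotonic()
resolve("www.ubuntu.com")
cached = time.monotonic() - start   # served from the in-process cache

print(f"first: {first * 1000:.0f} ms, cached: {cached * 1000:.2f} ms")
```

The same economics apply to the CoreDNS cache: only the first query per name travels the full network distance.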
CoreDNS is an example of a simple use case for distributed edge computing, showing how network distance and latency can be optimised for a better user experience by changing service proximity. The same rules apply to exciting services such as AR/VR, GPGPU-based AI inference and content distribution networks.
The choice of the proper technological primitives, the flexibility to manage your infrastructure to meet service requirements, and a process to manage distributed edge resources at scale will become critical factors for edge cloud adoption. This is where MicroK8s comes in: reducing the complexity and cost of development and deployment without sacrificing quality.
So you’ve just onboarded an edge application – now what? Take MicroK8s for a spin with your use case(s), or just try to break stuff. If you’d like to contribute or request features or enhancements, please shout out on our GitHub, the #MicroK8s Slack channel or the Kubernetes forum.
We are happy to let everyone know that the Community DevRoom will be held this year at the FOSDEM Conference. FOSDEM is the premier free and open source software event in Europe, taking place in Brussels from 1-2 February 2020 at the Université libre de Bruxelles. You can learn more about the conference at https://fosdem.org.
== tl;dr ==
The Community DevRoom will take place on Sunday 2nd February 2020.
Our goals in running this DevRoom are to:
* Connect folks interested in nurturing their communities with one another so they can share knowledge during and long after FOSDEM
* Educate those who are primarily software developers on community-oriented topics that are vital in the process of software development, e.g. effective collaboration
* Provide concrete advice on dealing with squishy human problems
* Unpack preconceived ideas of what community is and the role it plays in human society, free software, and a corporate-dominated world in 2020
We seek proposals on all aspects of creating and nurturing communities for free software projects.
== TALK TOPICS ==
Here are some topics we are interested in hearing more about this year:
1) Is there any real role for community in corporate software projects?
Can you create a healthy and active community while still meeting the needs of your employer? How can you maintain an open dialog with your users and/or contributors when you have the need to keep company business confidential? Is it even possible to build an authentic community around a company-based open source project? Have we completely lost sight of the ideals of community and simply transformed that word to mean “interested sales prospects?”
2) Creating Sustainable Communities
With the increased focus on the impact of short-term and self-interested thinking on both our planet and our free software projects, we would like to explore ways to create authentic, valuable, and lasting community in a way that best respects our world and each other. We would like to hear from folks about how to support community building in person in sustainable ways, how to build community effectively online in the YouTube/Instagram era, and how to encourage corporations to participate in community processes in a way that does not simply extract value from contributors. If you have recommendations or case studies on how to make this happen, we very much want to hear from you.
We are particularly interested to hear about academic research into FOSS Sustainability and/or commercial endeavors set up to address this topic.
3) Bringing free software to the GitHub generation
Those of us who have been in the free and open source software world for a long time remember when the coolest thing you could do was move from CVS to SVN, Slack ended in “ware”, IRC was where you talked to your friends instead of IRL (except now no one talks in IRL anyway, just texts), and Twitter was something that birds did. Here we are in 2020, and clearly things have changed.
How can we bring more younger participants into free software communities? How do we teach the importance of free software values in an era where freely-available code is ubiquitous? Will the ethical underpinnings of free software attract millennials and Gen Z to participate in our communities when our free software tends to require lots of free time?
We promise we are not cranky old fuddy duddies. Seriously. It’s important to us that the valuable experiences we had in our younger days working in the free software community are available to everyone. And we want to know how to get there.
4) Applying the principles of building free software communities to other endeavors
What can the lessons about decentralization, open access, open licensing, and community engagement teach us as we address the great issues of our day? We have left this topic loosely defined because we would like people to bring whatever truth they have to the question. Great talks in this category could be anything from “why to never start a business in Silicon Valley” to “working from home is great and keeps CO2 out of the air.” Let your imagination take you far – we are excited to hear from you.
5) How can free software protect the vulnerable
At a time when some of the best accessibility features are built as proprietary products, at a time when surveillance and predictive policing lead to persecution of dissidents and imprisonment of those who were guilty before proven innocent, how can we use free software to protect the vulnerable? What sort of lobbying efforts would be required to make certain free software – and therefore fully auditable – code becomes a civic requirement? How do we, as individuals and as actors within our employers, campaign for the protection of vulnerable people – and other living things – as part of our mission of software freedom?
6) Conflict resolution
How do we continue working well together when there are conflicts? Is there a difference in how types of conflicts best get resolved, e.g. ”this code is terrible” vs. “we should have a contributor agreement”? We are especially interested in how-tos and success stories from projects that have weathered conflict.
Here we are, almost in 2020, and this issue still comes up semi-daily. Let’s share our collective wisdom on how to make conflict less painful and more productive.
Again, these are just suggestions. We welcome proposals on any aspect of community building!
== PREPARING YOUR SUBMISSION & DEADLINES ==
=== LENGTH OF PRESENTATION ===
We are looking for talk submissions between 30 and 45 minutes in length, including time for Q&A. In general, we are hoping to accept as many talks as possible, so we would really appreciate it if you could make all of your remarks in 30 minutes – our DevRoom is only a single day – but if you need longer, just let us know.

=== ANYTHING EXTRA YOU WOULD LIKE US TO KNOW ===
Beyond giving us your speaker bio and talk abstract, make sure to let us know anything else you’d like to share as part of your submission. Some folks like to share their Twitter handles, others like to make sure we can take a look at their GitHub activity history – whatever works for you. We especially welcome videos of you speaking elsewhere, or even just a list of talks you have done previously. First-time speakers are, of course, welcome!

=== SUBMISSION INSTRUCTIONS ===
Community DevRoom Mailing List: firstname.lastname@example.org
This is part 2 of our blog post series on our current and future work around ZFS on root support in Ubuntu. If you haven’t yet read the introductory post, I strongly recommend you do so first!
Here we are going to discuss what landed by default in Ubuntu 19.10.

Upstream ZFS On Linux
We are shipping ZFS On Linux version 0.8.1, with features like native encryption, trimming support, checkpoints, raw encrypted zfs send/receive streams, project accounting and quota, and a lot of performance enhancements. You can read more about the 0.8 and 0.8.1 releases on the ZOL project release page directly. 0.8.2 didn’t make it in time for proper integration and testing in Eoan, so we backported some post-release upstream fixes as appropriate, like newer kernel compatibility, to provide the best user experience and reliability. Our team also contributed some small fixes and feedback to the upstream ZFS On Linux project.
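For instance, native encryption and TRIM support can be exercised directly from the shell. These are illustrative commands against an assumed pool named rpool; the dataset name is hypothetical:

```shell
# Create a natively encrypted dataset (ZFS On Linux 0.8+);
# you will be prompted for a passphrase
sudo zfs create -o encryption=aes-256-gcm -o keyformat=passphrase rpool/secure

# Enable continuous TRIM on the pool...
sudo zpool set autotrim=on rpool

# ...or run a one-off manual TRIM instead
sudo zpool trim rpool
```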
Any existing ZFS on root user will automatically get those benefits as soon as they update to Ubuntu 19.10.

Installer integration
The ubiquity installer now provides an experimental option for setting up ZFS on root on your system. While ZFS has been a mature product for a long time, the installer’s ZFS support option is in alpha, and people opting in should be conscious of that. It’s not advised to run this on a production system or any system where you have critical data (unless you have regular and verified backups, which we all do, correct?). To be fully clear, there may be breaking changes in the design as long as the feature is experimental, and we may, or may not, provide a transition path to the next layout.
With that being said, what does ZFS on root mean? It means that most of your system runs on ZFS: even your “/” directory is installed on ZFS.
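You can verify this after installation: on a ZFS-on-root system, the root mount is backed by a ZFS dataset rather than a conventional block-device partition. This is an illustrative session; the exact dataset name depends on the generated layout:

```shell
# The filesystem backing "/" is a dataset in the root pool
# (shown here with a hypothetical name such as rpool/ROOT/ubuntu_1)
df -h /

# List all datasets in the root pool
zfs list -r rpool
```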
Ready to jump in, despite all those disclaimers? If so, download an Ubuntu 19.10 ISO and you will see that the disk partitioning screen in Ubiquity has an additional option (please read the warning!):
Yes, the current experimental support is limited right now to a whole disk installation. If you have multiple disks, the next screen will ask you to pick which one:
You will then get the “please confirm we’ll reformat your whole disk” screen.
… and finally the installation will proceed as usual:
In case you didn’t notice yet, this is experimental (what? ;)) and we have some known quirks, like the confirmation screen showing that it’s going to format and create an ext4 partition. This is difficult to fix for Ubuntu 19.10 (for the technical users interested in the details: what we are actually doing is creating multiple partitions in order to let partman handle the ESP, and then overwriting the ext4 partition with ZFS, so it’s technically not lying ;)). It’s something we will fix before getting out of the experimental phase, hopefully next cycle.

Partitions layout
We’ll create the following partitions:

rpool
One ZFS partition for the “rpool” (as in root pool), which will contain your main installation and user data. This is basically your main course, and the one whose dataset layout we’ll detail in the next article, as we have a lot to say about it.

bpool
Another ZFS partition for your boot pool, named “bpool”, which contains kernels and initramfs images (basically your /boot, without the EFI and bootloader bits). We have to separate this from your main pool because grub can’t support all the ZFS features that we want to enable on the root pool; otherwise, your pool would be unreadable by your boot loader, which would sadly result in an unbootable system! Consequently, this pool uses a different ZFS pool version (right now, version 28, but we are looking at upgrading to version 5000, with some features disabled, next cycle). Note that due to this, even if zpool status suggests that you upgrade your bpool, you should never do that, or you won’t be able to reboot. We will work on a patch to prevent this from happening.

ESP partition
There is the ESP partition (mounted as /boot/efi). Right now, it’s only created if you have a UEFI system, but we might have Ubiquity create it systematically in the future, so that people who disabled secure boot and enable it later on can have a smooth transition.

grub partition
A grub partition (mounted as /boot/grub), which is formatted as ext4. This partition isn’t a ZFS one because it’s global to your machine, so its content and state should be shared between multiple installations on the same system. In addition, we don’t want to reference a grub menu which can be snapshotted and rolled back, as that would mean the grub menu couldn’t give access to the “future system state” after a particular revert. If we succeed in having an ESP partition systematically created in the future, we can move grub itself onto it unconditionally next cycle.

Continuing work on pure ZFS systems
We are planning to continue reporting feedback upstream (probably post 19.10 release, once we have more room for giving detailed information and various use-case scenarios), as our default dataset layout is quite advanced (more on that later) and the current upstream mount-ordering generator doesn’t really cope with it. This is the reason why we took the decision to disable our GRUB revert feature for pure ZFS installations (but not Zsys!) in 19.10, as some use cases could lead to unbootable systems. This is a very alpha experiment, but we didn’t want to knowingly put users’ data at risk.
But this is far from the end of our road to enhanced ZFS support in Ubuntu! Actually, the most interesting and exciting part (from a user’s perspective) will come with Zsys.

Zsys, ZFS System handler
Zsys is our work-in-progress, enhanced support for ZFS systems. It allows running multiple ZFS installations in parallel on the same machine and managing complex ZFS dataset layouts, separating user data from system and persistent data. It will soon provide automated snapshots, backups and system management.
However, as we first wanted feedback on pure ZFS systems in Ubuntu 19.10, we didn’t seed it by default. It’s available through an apt install zsys for the adventurous audience, and some Ubuntu flavours have already jumped on the bandwagon and will install it by default! Even if you won’t immediately see differences, this will unleash some of our grub, adduser and initramfs integrations that are baked right into 19.10.
The excellent Ars Technica review by Jim Salter wondered about the quite complex dataset layout we are setting up. We’ll shed some light on this in the next blog post, which will explain what Zsys really is, what it brings to the table and what our future plans are.
The future of ZFS on root on Ubuntu is bright; I’m personally really excited about what this is going to bring to both server and desktop users! (And yes, we can cook up some very nice features for our desktop users with ZFS!)
If you want to join the discussion, feel free to hop into our dedicated topic on the Ubuntu discourse.
This week we’ve been playing LEGO Worlds and tinkering with Thinkpads. We round up the news and goings on from the Ubuntu community, introduce a new segment, share some events and discuss our news picks from the tech world.
In this week’s show:
That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to email@example.com or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.
We have recently announced that we are transitioning the Chromium deb package to the snap in Ubuntu 19.10. Such a transition is not trivial, and there have been many constructive discussions around it, so here we are summarising why we are doing this, how, and the timeline.

Why
Chromium is a very popular web browser, the fully open source counterpart to Google Chrome. On Ubuntu, Chromium is not the default browser, and the package resides in the ‘universe’ section of the archive. Universe contains community-maintained software packages. Despite that, the Ubuntu Desktop Team is committed to packaging and maintaining Chromium because a significant number of users rely on it.
Maintaining a single release of Chromium is a significant time investment for the Ubuntu Desktop Team working with the Ubuntu Security team to deliver updates to each stable release. As the teams support numerous stable releases of Ubuntu, the amount of work is compounded.
Comparing this workload to other Linux distributions which have a single supported rolling release misses the nuance of supporting multiple Long Term Support (LTS) and non-LTS releases.
Google releases a new major version of Chromium every six weeks, with typically several minor versions in between to address security vulnerabilities. Every new stable version has to be built for each supported Ubuntu release (16.04, 18.04, 19.04 and the upcoming 19.10) and for all supported architectures (amd64, i386, armhf, arm64).
Additionally, ensuring Chromium even builds (let alone runs) on older releases such as 16.04 can be challenging, as the upstream project often uses new compiler features that are not available on older releases.
In contrast, a snap needs to be built only once per architecture, and will run on all systems that support snapd. This covers all supported Ubuntu releases including 14.04 with Extended Security Maintenance (ESM), as well as other distributions like Debian, Fedora, Mint, and Manjaro.
While this change in packaging for Chromium can allow us to focus developer resources elsewhere, there are additional benefits that packaging as a snap can deliver. Channels in the Snap Store enable publishing multiple versions of Chromium easily under one name. Users can switch between channels to test different versions of the browser. The Snap Store delivers snaps automatically in the background, so users can be confident they’re running up to date software without having to manually manage their updates. We can also publish specific fixes quickly via branches in the Snap Store enabling a fast user & developer turnaround of bug reports. Finally the Chromium snap is strictly confined, which provides additional security assurances for users.
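Channel switching is done with the standard snap CLI. These are illustrative commands; which channels carry which Chromium versions at any given time will vary:

```shell
# Show which channels the chromium snap publishes
snap info chromium

# Track the beta channel to try an upcoming version early
sudo snap refresh chromium --channel=beta

# Switch back to the stable channel at any time
sudo snap refresh chromium --channel=stable
```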
In summary, there are several factors that make Chromium a good candidate to be transitioned to a snap.
The first release of the Chromium snap happened two years ago, and we’ve come a long way since then. The snap currently has more than 200k users across Ubuntu and more than 30 other Linux distributions. The current version has a few minor issues that we’re working hard to address, but we feel it’s solid and mature enough for a transition. We feel confident that it is time to start transitioning users of the development release (19.10) of Ubuntu to it. We are eager to collect feedback on what works and what doesn’t ahead of the next Long Term Support release of Ubuntu, 20.04.
In 19.10, the chromium-browser deb package (and related packages) has been turned into a transitional package that contains only wrapper scripts and a desktop file for backwards compatibility. When you upgrade or install the deb package on 19.10, the snap is downloaded from the Snap Store and installed.
Special care has been taken not to break existing workflows and to make the transition as seamless as possible.
If you’re experimenting with Ubuntu 19.10, you can try Chromium as a snap and test the transition from the deb package right now. In fact, you don’t need to wait until the release on the 17th of October to start using the snap and sharing your feedback. Simply run the following commands to be up and running:
snap install chromium
snap run chromium
Once 19.10 is released, we will carefully consider extending the transition to other stable releases, starting with 19.04. This won’t happen until all the important known issues are addressed, of course.
Now is the perfect time to put the snap to the test and report issues and regressions you encounter.
We appreciate all the feedback and commentary we’ve been sent over the last few months as we announced this project. We honestly believe delivering applications as snaps provides significant advantages both to developers and users. We know there may be some rough edges as we work towards the future and will continue to listen to our users as we chart this new journey.