ESA has published its report into the loss of the Vega VV17 mission and said the screwup was due to an "inversion of electrical connections" during integration.…
Huawei has launched the first developer preview of its in-house smartphone operating system, HarmonyOS 2.0.…
Some 5G networks are at risk of attack thanks to "long-standing vulnerabilities" in core protocols, according to infosec researchers at Positive Technologies.…
Something for the Weekend, Sir? "It will never catch on." The next thing you know, you’re staring at a badly drawn zob scrawled over Shakespeare's shimoneta*.…
On Call Welcome to the last On Call before Christmas, and a reminder that furry friends and technology do not always make good bed, or even floor, fellows.…
Amidst the paroxysms of coronavirus and Brexit, the United Kingdom on Wednesday found time to ratify the Convention that formally establishes the SKA Observatory (SKAO), paving the way for the giant radio telescope to be built.…
Your web search history plus records of the browser and device you use to make those searches could enable financial institutions to calculate a more accurate credit rating for you than traditional methods, according to the International Monetary Fund (IMF). And the global finance organisation says the ability to use those records might be a good thing rather than a privacy nightmare.…
Alibaba Group today admitted its cloud business developed what it described as “a facial recognition technology … that included ethnicity as an algorithm attribute for tagging video imagery,” then vowed it will never again see the light of day.…
America's nuclear weapons agency was hacked by the suspected Russian spies who backdoored SolarWinds' IT monitoring software and compromised several US government bodies, and Microsoft was caught up in the same cyber-storm, too, it was reported Thursday.…
Google has rejected Australia’s plan to force it to pay local news publishers for the right to index their output and present it in search results.…
On Thursday Google was hit for the third time in as many months in the United States with an antitrust lawsuit, once again focused on the internet giant's alleged monopolization of the search advertising market.…
Oracle on Thursday said it has uncovered the largest fraud campaign yet targeting businesses booking advertising in video streams shown on so-called "smart" televisions.…
Renewable electricity and gas supplier People’s Energy has told its 250,000-plus customers that a “gap” in the security of its IT system was exploited by digital burglars.…
Google and Qualcomm have linked arms to extend the lifecycle of new Android devices, meaning future phones could receive as many as three major operating system updates provided they're running the latest Snapdragon silicon.…
Those within NASA hoping for some festive treats were in for disappointment this week as the US Government Accountability Office (GAO) administered a kicking over the agency's beleaguered Artemis programme.…
GitHub on Thursday said it has removed all cookie banners from its website, a decision the company is making in the interest of privacy, despite the claimed popularity of its disclosure interface.…
Enterprise data warehouse stalwart Teradata has capped a difficult year with an update to Teradata QueryGrid, which promises to connect customers to a vast array of new data sources – a decidedly underwhelming move, according to some.…
AWS has claimed its upcoming Amazon Location Service for developers building mapping and geographic features into applications is "priced at a fraction of common alternatives," presumably aiming squarely at a company whose name rhymes with schmoogle.…
Google is discontinuing its Android Things IoT platform for non-commercial users. The Chocolate Factory will not allow the creation of new projects after 5 January and the entire platform will be nuked the following year.…
Microsoft is updating its certification system to one that requires an annual renewal as it eyes the rapidly changing tech landscape.…
In November we held the last LTS team meeting for 2020 on IRC, with the next one coming up at the end of January.
We announced a new formalized initiative for Funding Debian projects with money from Freexian’s LTS service.
Finally, we would like to remark once again that we are constantly looking for new contributors. Please contact Holger if you are interested!
We’re also glad to welcome two new sponsors, Moxa, a device manufacturer, and a French research lab (Institut des Sciences Cognitives Marc Jeannerod).
Sponsors that joined recently are in bold.
The Open Source MANO (OSM) community recently added two more bricks to the wall of NFV orchestration events: OSM Release NINE and the OSM#10 Hackfest. The community has come a long way in maturing OSM into its 9th version. A toast to all the system integrators, network operators, researchers, and VNF vendors who hit a home run there!
Canonical’s contribution towards OSM Release NINE highlights the model-driven capabilities for network services onboarding. This model-driven approach paves the way towards the goal of multi-cloud orchestration using centralized OSM components. Another major integration is the alignment of the OSM network model with ETSI (European Telecommunications Standards Institute) SOL006 standards.
Canonical, a key member of the OSM community, also participated in the Hackfest and, in collaboration with other stakeholders, covered the majority of the Hackfest sessions. The event was once again held behind screens, from Nov 30 to Dec 4, 2020, and focused on enhancing community interactivity and encouraging developers to participate in the project.

Canonical in OSM#10 Hackfest – The highlights
Canonical’s engagement in the OSM#10 Hackfest aimed to drive better lifecycle management of network services using a model-driven approach. The team was present both as instructors in Hackfest sessions and as leaders in community technical sessions.
The Hackfest sessions were categorized into Operations, VNF onboarding, and OSM Ecosystem progression. The first three days were focused on deployment and operations of network functions over bare-metal machines, virtual machines (VMs), and Kubernetes-managed containers. The fourth day was focused on creating network descriptor packages from scratch and onboarding of virtual network functions (VNFs), physical network functions (PNFs), and Kubernetes network functions (KNFs) using operators (Juju Charms). Finally, the fifth day focused on good practices for OSM development. You can follow the event agenda on the OSM wiki for videos and presentations.
Here are some of the highlights presented by Canonical during the event:
Model-driven Universal Operator Lifecycle Manager: Mark Shuttleworth (CEO at Canonical and OSM technical steering committee (TSC) member) demonstrated how model-driven operators address the challenges of network service lifecycle management. The VNF configuration and abstraction (VCA) component in OSM provides universal operators for the deployment of network services on underlying infrastructure of bare-metal machines, VMs, and K8s clusters. VCA can now dynamically integrate operators in production environments, which means service providers can add and integrate operators with the existing model. Multiple sessions on this topic were delivered by David Garcia (module development lead (MDL) for N2VC (network to VNF communication)). Finally, after much work on environment setup and Hackfest preparation, a session on the guidelines for OSM development was delivered by Mark Bariel (MDL for DevOps).
TATA Charmed 5G Deployment: In other exciting news on Charmed OSM, TATA ELXSI announced that it is leveraging its management capabilities for 5G telco workloads. The combination of Juju’s controller and Charmed OSM made possible the deployment of 5G Kubernetes network functions (KNFs) on MicroK8s infrastructure. All the components, i.e. the Radio Access Network (RAN) and the 5G Core, are based on a model-driven operator framework that can be reused, upgraded, and integrated according to current demands. We congratulate TATA ELXSI on this step forward!
OSM Release NINE is the light at the end of the tunnel, offering compelling features to telcos, system integrators, VNF vendors, and all the curious minds out there in the NFV world.
The technical plenary meetings for OSM Release NINE were held in parallel with the Hackfest sessions. The main offerings from Canonical are summarised below:
Multi-Cloud scenario manager: Progressing towards “distilling your operations into code”, OSM VCA (Juju) is targeting multi-cloud support for future releases, which will allow service providers to deploy and integrate network services across various virtual infrastructure managers (VIMs). This will also make it possible to decompose even very complex network services, consisting of bare-metal machines, VMs, and containers, and to integrate them from a centralized unit with a consistent user experience.
Upgrades in VCA: OSM supports both proxy and native charms (operators). With native charms, the operator and the workload are co-located on the same machine; with proxy charms, they reside on different machines. The addition of distributed proxy charms brings the operator closer to the workload on registered clouds of the same VIM. In this case, the operator can be deployed anywhere away from OSM (at the edge), which reduces the possibility of a single point of failure and unlocks geo-redundancy in the future roadmap of OSM.
Some other major features in the OSM Release NINE include:
Get started with Charmed OSM to benefit from the latest features in open source NFV management and orchestration.
To learn more about Canonical’s solutions for telcos, visit our website.
We turn dull stories into fantastic adventures and grey events into true fairy tales, or we just talk about Ubuntu and other stuff… Here is another episode of your favourite podcast.
You know the drill: listen, subscribe and share!
You can support the podcast by using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you like.
If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, the Senhor Podcast.
The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).
This episode and the image used are licensed under Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0), the full text of which can be read here. We are open to licensing other types of use; contact us for validation and authorisation.
This week we’ve been playing Cyberpunk 2077 and applying for Ubuntu Membership. We round up the goings on in the Ubuntu community and also bring you our favourite news picks from the wider tech world.
In this week’s show:
That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to email@example.com or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.
Considering migrating to Ubuntu from other Linux platforms, such as CentOS?
Think Ubuntu: the most popular Linux distribution on public clouds, in the data centre and at the edge. Since its inception, Ubuntu has consistently gained market share, today reaching almost 50%.
Wondering why Ubuntu is so popular?
Here is our take:
According to the 2020 HackerEarth Developer Survey, 66% of experienced developers and 69% of students prefer Ubuntu over other Linux distributions. This is because Ubuntu provides them with the broadest selection of up-to-date open source software to work with.
For example, Ubuntu 20.04 LTS comes with over 30,000 open source packages such as Python, Ruby, Go, Java, Apache, Nginx, PostgreSQL, MySQL, Node.js, PHP and more. This is why Ubuntu is by far the most popular Linux distribution, followed by a distant number two, CentOS, chosen by 11% of working professionals.
A long term support (LTS) version of Ubuntu is released every two years, and all LTS releases benefit from five years of free security maintenance (which can be extended to ten years). To keep Ubuntu users secure, the Ubuntu Security Team applies thousands of security patches. For instance, Ubuntu 16.04 LTS has had over 5,000 common vulnerabilities and exposures (CVEs) patched since April 2016, absolutely free of charge!
Moreover, the team acts fast to leave bad actors no time to exploit vulnerabilities: critical CVEs are patched in less than 24 hours on average. With the latest release – Ubuntu 20.04 LTS – all users get security updates and straightforward access to thousands of curated open source applications, freely available until 2025.

Fact 3. Ubuntu has no mandatory subscriptions
Ubuntu is freely available to download and use. Each Ubuntu instance comes with the same bits, whether an Ubuntu Advantage (UA) subscription is attached or not. UA is an optional, per-machine subscription for enhanced compliance, extended security and 24/7 enterprise-grade support.
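For reference, attaching and inspecting an optional UA subscription is a couple of commands with the ubuntu-advantage client (a sketch; YOUR_TOKEN is a hypothetical placeholder for the token from your Ubuntu Advantage account):

```shell
# Sketch: attach an optional UA subscription to one machine.
# YOUR_TOKEN below is a placeholder, not a real token.
sudo ua attach YOUR_TOKEN
# List the state of services such as esm-infra and livepatch.
ua status
```

Detaching again with `sudo ua detach` returns the machine to the plain, free Ubuntu experience, with the same bits as before.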
As a result, users benefit from a consistent experience regardless of whether their Ubuntu machine is used for development or is running workloads in production.

Fact 4. Ubuntu LTS offers enterprise-grade support with transparent, per-machine pricing
Ubuntu is the most cost-effective open source platform, with millions of users worldwide. It is backed by an enterprise support team of experts who can assist with a migration to Ubuntu, and it provides access to compliance-specific modules, including FIPS 140-2 certified cryptography, DISA-STIG and CIS hardening, and Kernel Livepatch for improved uptime and security.

Fact 5. Ubuntu delivers a multi-cloud experience
You will find that Ubuntu works exactly the same wherever you need it: on workstations, in the data centre, at the edge, and in clouds. On public clouds specifically, it delivers the same great Ubuntu experience with a layer of seamless integration and many kernel-level, cloud-specific optimisations.
Ubuntu is at the heart of the infrastructure stack. It is the platform of choice when building large-scale infrastructure, such as OpenStack private clouds, Kubernetes, High Performance Computing (HPC) and Big Data. The widespread adoption of Ubuntu in these kinds of projects comes from its stability, interoperability, security and straightforward user experience.
Ubuntu is also used by scientists all over the world, powering various platforms for data analytics. Remember the first picture of a black hole? Guess what it was created on? [Image: astronomers creating the first-ever image of a black hole using Ubuntu]
Join thousands of developers and enterprises that chose Ubuntu as their platform for development, innovation and production workloads.
Let’s take off together!
2021 is around the corner, and we have had such a tremendous journey this year. Like many others, at Canonical, the publisher of Ubuntu, we lived through strange times, and maybe more than ever we saw how important it is to stay connected. Canonical therefore continued to innovate in the telco world and brought Ubuntu closer to it, offering open source systems and supporting the deployment of various applications. From 5G to network function virtualisation (NFV), from virtual events to webinars for our users, we spread energy around and gathered enterprise feedback.

Telco carrier announcements
Canonical has been working with more and more enterprises from the telco world over the years, but in 2020 the company went beyond its private cloud expertise and kicked off the deployment of 5G, the big step the industry has been talking about.

MTS and Ubuntu deploying cloud infrastructure
In November, MTS, Russia’s largest mobile operator and a leading provider of media and digital services, announced the selection of Canonical’s Charmed OpenStack to power the company’s next-generation cloud infrastructure. The company noted that this is the foundation of the 5G rollout to come in the following months, enhancing its network’s edge compute capabilities.

Malawi’s TNM and Ubuntu leading the virtualisation charge
November was a month of good news, as Malawi’s TNM also announced that it has selected Canonical, the publisher of Ubuntu, and its Charmed OpenStack distribution (an open-source private cloud solution that allows businesses to control large pools of compute, storage and networking in a data centre) to modernise and virtualise its entire telecommunications infrastructure. TNM is Malawi’s leading telecoms provider and aims to achieve faster time to market across its product range through the move.

Telco events
From physical meet-ups we moved to virtual ones, but that did not stop us. We invited everyone to join us at the Open Infrastructure Summit, where the theme was “The Next Decade of Open Infrastructure”. The content is available on demand for a few months, so check it out before it’s gone.
In the era of virtual events, Africa remained on our radar: in October 2020, Canonical took Ubuntu to AfricaCom, the largest and most influential tech and telecoms event on the continent, where topics included opportunities to manage capabilities from core to edge.
After sponsoring a few other events, in September Canonical and Ubuntu organised the first virtual event dedicated to the telecom industry, “Transforming Telco Infrastructure”, where Arno Van Huyssteen (Director, Field Engineering Telco EMEA-APAC) and Tytus Kurek (Telco Product Manager) talked about the challenges and trends in the telco landscape.
When people talk about telco, they also think of NFV Management and Orchestration (MANO). While Charmed OSM has been around for quite some time, Canonical’s presence at OSM Hackfests increased this year. In September, we shared the presenter floor with the rest of the Open Source MANO (OSM) community, where we introduced end-to-end scenarios, shared insights about running OSM on Kubernetes and helped push forward the latest OSM release.
2020 was a busy year, with events, big announcements and new pieces of content. Yet Canonical and Ubuntu never stopped working on the engineering side, racking up new technical achievements.
Canonical announced the availability of Center for Internet Security (CIS) automation tooling to its Ubuntu Advantage for Infrastructure customers. The compliance tooling has two objectives:
Less than a month after this good news, Canonical also had new things to share with its customers. In May 2020, we announced that OpenStack Ussuri can be deployed on Ubuntu 18.04 LTS and Ubuntu 20.04 LTS. The most notable enhancements were related to the Open Virtual Networking (OVN) driver and the Masakari project which allow organisations to run highly available workloads on top of an open-source Software-Defined Networking (SDN) platform.
OpenStack Charms 20.10 became available in October 2020, introducing a range of improvements and features that enhanced Charmed OpenStack. As a consequence, OpenStack Victoria can be deployed on Ubuntu 20.04 LTS and Ubuntu 20.10 with full support from Canonical until April 2030. At the same time, OVS to OVN migration for Charmed OpenStack has become smoother and provides a fully functional open source SDN platform.

Ubuntu and Canonical share knowledge
Part of our philosophy is to share; therefore, Ubuntu empowers not only through open-source projects but also by sharing knowledge and content that can be revisited when needed.
In the telecom world, Ubuntu powers the entire infrastructure of leading Global Service Providers, including tier-1 carriers. As part of this journey, we presented an entire NFV stack based on established Open Source technologies.
Tytus Kurek gave a talk on NFV, cloud-native and OSM, covering technical aspects such as current implementation trends and challenges in CNF orchestration, followed by a live demo in which we showed how to deploy and operate network services for mobile core network management using Magma.
The telco ecosystem has been important for Canonical and Ubuntu in 2020, and we look forward to more good news in the year ahead. As technology keeps evolving, we will keep growing and innovating in this landscape, where everything moves fast and where staying connected is more than ever part of our normal lives.
See you in 2021!
In the previous post I went over the reasons for switching to my own hardware and what hardware I ended up selecting for the job.
Now it’s time to look at how I intend to achieve the high availability goals of this setup, effectively limiting the number of single points of failure as much as possible.

Hardware redundancy
On the hardware front, every server has:
The switch is the only real single point of failure on the hardware side of things. But it also has two power supplies and hot-swappable fans. If this ever becomes a problem, I can also source a second unit and use data and power stacking along with MLAG to get rid of this single point of failure.
I mentioned that each server has four 10Gbit ports yet my switch is Gigabit. This is fine as I’ll be using a mesh type configuration for the high-throughput part of the setup, effectively connecting each server to the other two with a dual 10Gbit bond each. Each server will then get a dual Gigabit bond to the switch for external connectivity.

Software redundancy
The software side is where things get really interesting, there are three main aspects that need to be addressed:
For storage, the plan is to rely on Ceph. Each server will run a total of 4 OSDs, one per physical drive; the SATA SSD also acts as the boot drive, so its OSD is a large partition rather than the full disk.
Each server will also act as a MON, MGR and MDS, providing a fully redundant Ceph cluster on 3 machines capable of providing both block and filesystem storage through RBD and CephFS.
Two maps will be set up, one for HDD storage and one for SSD storage.
Storage affinity will also be configured such that the NVMe drives are used for the primary replica in the SSD map, with the SATA drives holding the secondary/tertiary replicas instead.
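A split like this can be sketched with CRUSH rules keyed on device class, plus primary affinity to prefer the faster drives. This is only an illustration of the technique, not my actual commands; the rule, pool and OSD names below are hypothetical:

```shell
# Sketch: one replicated CRUSH rule per device class, then pools pinned to each.
# Rule/pool names are made up for illustration.
ceph osd crush rule create-replicated rule-hdd default host hdd
ceph osd crush rule create-replicated rule-ssd default host ssd
ceph osd pool create pool-hdd 64 64 replicated rule-hdd
ceph osd pool create pool-ssd 64 64 replicated rule-ssd

# Bias primaries towards the NVMe OSDs; osd IDs here are placeholders.
ceph osd primary-affinity osd.0 1.0   # NVMe drive
ceph osd primary-affinity osd.1 0.5   # SATA SSD
```

With that in place, reads in the SSD pool are mostly served by the NVMe primaries while the SATA SSDs hold the extra replicas.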
This makes the storage layer quite reliable. A full server can go down with only minimal impact. Should a server go offline because of a hardware failure, the on-site staff can easily relocate the drives from the failed server to the other two, allowing Ceph to recover the majority of its OSDs until the defective server is repaired.

Networking
Networking is where things get quite complex when you want something really highly available. I’ll be getting a Gigabit internet drop from the co-location facility on top of which a /27 IPv4 and a /48 IPv6 subnet will be routed.
Internally, I’ll be running many small networks grouping services together. None of those networks will have much in the way of allowed ingress/egress traffic and the majority of them will be IPv6 only.
The majority of egress will be done through a proxy server and IPv4 access will be handled through a DNS64/NAT64 setup.
Ingress when needed will be done by directly routing an additional IPv4 or IPv6 address to the instance running the external service.
At the core of all this will be OVN which will run on all 3 machines with its database clustered. Similar to Ceph for storage, this allows machines to go down with no impact on the virtual networks.
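Since the OVN databases are clustered, each member's health can be inspected with ovn-appctl; a sketch (the control socket path is an assumption and varies between distributions and packaging):

```shell
# Sketch: query the cluster status of the OVN northbound database.
# The socket path below is a common default, not guaranteed on every distro.
sudo ovn-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
```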
Where things get tricky is on providing a highly available uplink network for OVN. OVN draws addresses from that uplink network for its virtual routers and routes egress traffic through the default gateway on that network.
One option would be a static setup: have the switch act as the gateway on the uplink network, feed that to OVN over a VLAN, and then add manual static routes for every public subnet or public address which needs routing to a virtual network. That’s easy to set up, but I don’t like having to constantly update static routing information in my switch.
Another option is to use LXD’s l2proxy mode for OVN; this effectively makes OVN respond to ARP/NDP for any address it’s responsible for, but it then requires the entire IPv4 and IPv6 subnet to be directly routed to the one uplink subnet. This can get very noisy and just doesn’t scale well with large subnets.
The more complicated but more flexible option is to use dynamic routing.
Dynamic routing involves routers talking to each other, advertising and receiving routes. That’s the core of how the internet works but can also be used for internal networking.
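As a rough illustration of what such internal peering can look like, here is a minimal FRR bgpd fragment; every ASN, router ID and prefix below is a made-up placeholder, not my actual configuration:

```
! Sketch of an FRR bgpd stanza; all numbers are hypothetical.
router bgp 65001
 bgp router-id 192.0.2.11
 neighbor 192.0.2.1 remote-as 65000
 address-family ipv4 unicast
  ! Advertise a public subnet so the peer learns where to route it.
  network 198.51.100.0/27
 exit-address-family
```

When the server advertising a prefix goes away, the BGP session drops and the route is withdrawn, which is exactly the failover behaviour described below.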
My setup effectively looks like this:
This may feel overly complex, and it quite possibly is, but it gives me three routers, one on each server, only one of which needs to be running at any one time. It also gives me the ability to balance routing traffic, both ingress and egress, by tweaking the BGP or VRRP priorities.
The nice side effect of this setup is that I’m also able to use anycast for critical services both internally and externally. Effectively running three identical copies of the service, one per server, all with the exact same address. The routers will be aware of all three and will pick one at the destination. If that instance or server goes down, the route disappears and the traffic goes to one of the other two!

Compute
On the compute side, I’m obviously going to be using LXD with the majority of services running in containers and with a few more running in virtual machines.
Stateless services that I want to always be running no matter what happens will be using anycast as shown above. This also applies to critical internal services as is the case above with my internal DNS resolvers (unbound).
Other services may still run two or more instances and be placed behind a load balancing proxy (HAProxy) to spread the load as needed and handle failures.
Lastly, even services that run as a single instance will still benefit from the highly available environment. All their data will be stored on Ceph, meaning that in the event of server maintenance or failure, it’s a simple matter of running lxc move to relocate them to any of the other servers and bring them back online. When planned ahead of time, this means service downtime of less than 5s or so.

Up next
In the next post, I’ll be going into more details on the host setup, setting up Ubuntu 20.04 LTS, Ceph, OVN and LXD for such a cluster.
It’s re:Invent season already, and we had exciting news to announce with Amazon this year. With all these remote sessions, what’s better than a quick lab to play with the new stuff? It’s starting to feel like Christmas already!
We’re going to kill two birds with one stone (just an idiom, keep reading) and experiment with two of our latest announcements. First on the list is “Install Amazon EKS Distro anywhere” with the EKS snap, a frictionless way to try the whole EKS-D experience in a snap. Second is the LTS Docker Image Portfolio of secure container images from Canonical, available on Amazon ECR Public.
This blog is a good starting point for trying these new AWS services with open-source technology.

Why opt for LTS Docker Images?
“Who needs to run one container for ten – even five – years?” you may ask. And that would be a fair question.
LTS stands for “Long Term Support.” The Ubuntu distro made the acronym famous a few years ago by shipping, every two years, a release with five years of free security updates. Since then, Canonical has also offered Extended Security Maintenance (ESM), an additional five years of support. With the LTS Docker Image Portfolio, Canonical extends this 10-year commitment to some applications on top of Ubuntu container images.
Why opt for LTS Docker Images when agility runs the world? The reality is that enterprises, especially those with intricate software stacks, cannot always keep up with the development pace. In particular locations, such as the edge of the network, or in some critical use cases, production workloads won’t receive new versions with potentially breaking changes and are limited to receiving security updates only. Recent publications have shown that vulnerabilities in containers are a reality, and keeping up with the pace of upstream applications isn’t always possible (this article from DarkReading takes image analysis on medical devices as an example). Canonical’s LTS images ensure your pipelines won’t break every two days, giving you time to develop at your pace and focus on your core features.

Getting started
Here I will show you how to create an Amazon EKS cluster on your computer or server, on which we will deploy a sample LTS NGINX Docker image. You will need a machine that can run snaps (Ubuntu already ships with snap support). Also, make sure you remove MicroK8s if you have it installed, because it would conflict with the EKS snap.
I use Multipass to get a clean Ubuntu VM; I recommend it for this lab.
Amazon EKS Distro (EKS-D) comes in a snap called “EKS” – its documentation is on snapcraft.io/eks. Let’s snap install it! At the time of writing, the EKS snap is available in the edge (development) channel and without strict confinement (classic).

    sudo snap install eks --classic --edge
Once the EKS snap is installed, we will add our user to the “eks” group (to run commands without sudo), give them permissions on the kubelet config folder, reload the session (to make the changes effective) and create an alias (to make our lives easier).

    sudo usermod -a -G eks $USER
    sudo chown -f -R $USER ~/.kube
    sudo su - $USER
    sudo snap alias eks.kubectl kubectl
    sudo eks status --wait-ready
You can already communicate with your cluster. Run kubectl get node and you will see information about your node running EKS-D.
Hurray, you’ve successfully created a Kubernetes cluster using Amazon EKS Distro. 🎉

Deploy an LTS NGINX using EKS-D
We will now deploy an NGINX server from Canonical’s maintained repository on Amazon ECR Public. Let’s use the public.ecr.aws/lts/nginx:1.18-20.04_beta image. It guarantees a secure, fully maintained (at no higher risk than beta) NGINX 1.18 server on top of the Ubuntu 20.04 LTS image.
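If you just want to poke at this image outside of EKS, plain Docker works too. A quick sketch, assuming Docker is installed; the host port is an arbitrary choice:

```shell
# Sketch: pull and run Canonical's LTS NGINX image locally with Docker.
docker pull public.ecr.aws/lts/nginx:1.18-20.04_beta
docker run -d --name lts-nginx -p 8080:80 public.ecr.aws/lts/nginx:1.18-20.04_beta
# The default NGINX page should now answer on http://localhost:8080
```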
First, use the following command to create an index.html that we will later fetch from a browser.

```shell
mkdir -p project
cat <<EOF >> ./project/index.html
<html>
  <head>
    <title>HW from EKS</title>
  </head>
  <body>
    <p>Hello world, this is NGINX on my EKS cluster!</p>
    <img src="https://http.cat/200" />
  </body>
</html>
EOF
```
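One caveat worth flagging in that command: >> appends, so running it twice leaves two copies of the page in index.html; > would truncate instead. A self-contained illustration of the truncating variant (using a hypothetical /tmp path):

```shell
mkdir -p /tmp/eksdemo
# '>' truncates on each run, so repeating the command never duplicates content
cat <<EOF > /tmp/eksdemo/index.html
<html><body><p>Hello world</p></body></html>
EOF
cat <<EOF > /tmp/eksdemo/index.html
<html><body><p>Hello world</p></body></html>
EOF
grep -c "Hello world" /tmp/eksdemo/index.html
```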
Then, create your deployment configuration, nginx-deployment.yml, with the following content.

```yaml
# nginx-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: public.ecr.aws/lts/nginx:1.18-20.04_beta
        volumeMounts:
        - name: nginx-config-volume
          mountPath: /var/www/html/index.html
          subPath: index.html
      volumes:
      - name: nginx-config-volume
        configMap:
          name: nginx
          items:
          - key: nginx-site
            path: index.html
```
We’re telling EKS to create a deployment made of one pod with one container running NGINX, mapping our local index.html file through a configMap. Let’s first create said configMap, then apply our deployment:

```shell
kubectl create configmap nginx --from-file=nginx-site=./project/index.html
kubectl apply -f nginx-deployment.yml
watch kubectl get pods
```
That’s it, you could let it run for ten years!
Jokes aside, using an LTS image as part of your CI/CD pipeline means freeing your app from upstream changes, without compromising security, all thanks to containers.

Expose and access your website
Let’s edit our deployment to access our website while implementing a few best practices.

Limit pod resources
Malicious attacks are often the result of a combination of cluster misconfiguration and container vulnerabilities. This cocktail is never good. To prevent attackers from destroying your whole cluster by attacking only your NGINX pod, we’re going to set resource limits.
Edit your nginx-deployment.yml file to add the following resources section:

```yaml
# nginx-deployment.yml - [...] some parts skipped to save space [...]
      containers:
      - name: nginx
        [...]
        resources:
          requests:
            memory: "30Mi"
            cpu: "100m"
          limits:
            memory: "100Mi"
            cpu: "500m"
[...]
```
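If the unit suffixes are unfamiliar: cpu is measured in cores, with the m suffix meaning millicores, and Mi means mebibytes. A small sketch decoding the limit values above (variable names are my own, not a Kubernetes API):

```shell
# Decode the Kubernetes quantities used in the limits section (sketch only)
cpu_limit="500m"
mem_limit="100Mi"
millicores="${cpu_limit%m}"                        # strip the 'm' suffix: 500
cores=$(awk "BEGIN { print $millicores / 1000 }")  # 500m = 0.5 of a core
mem_bytes=$(( ${mem_limit%Mi} * 1024 * 1024 ))     # 100Mi = 104857600 bytes
echo "$cores cores, $mem_bytes bytes"
```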
Run one more kubectl apply -f nginx-deployment.yml to update your configuration.

Create a service to expose Nginx
Keep reading, we’re so close to the goal! Let’s make this web page reachable from outside the cluster.
Create a nginx-service.yml file with the following content:

```yaml
# nginx-service.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 31080
    name: nginx
```
One more apply, and voilà!

```shell
$ kubectl apply -f nginx-service.yml
service/nginx-service created
$ kubectl get services
NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
nginx-service   NodePort   10.152.183.242   <none>        80:31080/TCP   4s
```

Well done!
Note: 192.168.64.15 is the IP of my Multipass VM – my EKS node.

What’s next?
We installed Amazon EKS Distro, using the EKS snap. We then deployed an LTS NGINX server with EKS-D. All this on any machine where you can use snap… in other words, any Linux. The Amazon EKS anywhere experience has never been simpler!
Next, you could use Juju to manage your applications on both public clouds and edge devices running MicroK8s or EKS-D. Or you could simply start by adding a few more pods to your cluster, using Canonical’s LTS Docker Image Portfolio from Amazon ECR Public.
But this is going to be a bit of a journey, and it is about my personal infrastructure, so this feels like a better home for it!

What is this all about?
For years now, I’ve been using dedicated servers from the likes of Hetzner or OVH to host my main online services, things ranging from DNS servers, to this blog, to websites for friends and family, to more critical things like the linuxcontainers.org website, forum and main image publishing logic.
All in all, that’s around 30 LXD instances with a mix of containers and virtual machines that need to run properly 24/7 and have good internet access.
I’m a sysadmin at heart; I know how to design and run complex infrastructures, how to automate things, how to monitor them and how to fix them when things go bad. But having everything rely on a single beefy machine rented month to month from an online provider definitely has its limitations, and this series is all about fixing that!

The current state of things
As mentioned, I have about 30 LXD instances that need to be online 24/7.
This is currently done using a single server at OVH in Montreal with:
I consider this pretty good value for the cost: it comes with BMC access for remote maintenance, some amount of monitoring, and on-site staff to deal with hardware failures.
But everything goes offline if:
LXD now has a very solid clustering feature, which requires a minimum of 3 servers and provides a highly available database and API layer. This can be combined with distributed storage through Ceph and distributed networking through OVN.
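For the curious, bootstrapping such a cluster is largely a matter of feeding lxd init a preseed file on each node. A rough sketch for the first node follows; the server name and address are made up, and the exact preseed schema may vary between LXD versions:

```yaml
# lxd init --preseed < cluster.yaml  (first node; illustrative only)
config:
  core.https_address: 10.0.0.1:8443
cluster:
  server_name: server1
  enabled: true
```

Subsequent nodes join by pointing their preseed at the first node’s address with a join token.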
But to benefit from this, you need 3 servers and fast networking between them. Looking around for options in the sub-500CAD price range didn’t turn up anything particularly suitable, so I started considering alternatives.

Going with my own hardware
If you can’t rent anything at a reasonable price, the alternative is to own it.
Buying 3 brand new servers and the associated storage and network equipment was out of the question. Bought new, this would quickly balloon into the tens of thousands of dollars, and that just isn’t worth it given the amount of power I actually need out of this cluster.
So, like many in that kind of situation, I went on eBay.
My criteria list ended up being:
In the end, I shopped directly with UnixSurplus through their eBay store. I’ve used them before, and they pretty much beat everyone else on pricing for used SuperMicro kit.
What I settled on is three:
The motherboard supports Xeon E5v4 chips, so the CPUs can be swapped for more recent and much more powerful chips should the need arise and good candidates show up on eBay. Same story with the memory, this is just 4 sticks of 16GB leaving 20 free slots for expansion.
For each of them, I’ve then added some storage and networking:
For those, I went with new parts off Amazon/Newegg and picked what felt like the best deal at the time. I went with high quality consumer/NAS parts rather than DC-grade, choosing parts I’ve run 24/7 elsewhere before and that in my experience provide adequate performance.
For the network side of things, I wanted a 24-port gigabit switch with dual power supplies, hot-swappable fans and support for 10Gbit uplinks. NorthSec has a whole bunch of C3750X switches which have worked well for us and are at the end of their supported life, making them very cheap on eBay, so I got a C3750X with a 10Gb module for around 450CAD.
Add everything up and the total hardware cost ends up at a bit over 6000CAD, make it 6500CAD with extra cables and random fees.
My goal is to keep that hardware running for around 5 years so a monthly cost of just over 100CAD.
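The arithmetic behind that monthly figure, as a quick sanity check:

```shell
# Amortize the ~6500 CAD hardware spend over a 5 year target lifespan
total_cad=6500
months=$((5 * 12))
awk "BEGIN { printf \"%.2f CAD/month\n\", $total_cad / $months }"
```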
Before I actually went ahead and ordered all that stuff though, I had to figure out a place for it and sort out a deal for power and internet.
Getting good co-location deals for less than 5U of space is pretty tricky. Most datacenters won’t even talk to you if you want less than half a rack or a full rack.
Luckily for me, I found Hive Datacenter, which is less than a 30-minute drive from here and has nice public per-U pricing. After some chit-chat, I got a contract for 4U of space with enough power and bandwidth for my needs. They also have a separate network for your OOB/IPMI/BMC equipment, which you can connect to over VPN!
This sorts out where to put it all, so I placed my eBay order and waited for the hardware to arrive!

Up next
So that post pretty much covers the needs, the current situation, and the hardware and datacenter I’ll be using for the new setup.
What’s not described is how I actually intend to use all this hardware to get me the highly available setup that I’m looking for!
The next post goes over all the redundancy in this setup, looking at storage, network and control plane and how it can handle various failure cases.
We’re almost there, 2021 is just around the corner. Like many others, we at Canonical have a deep appreciation for all things Raspberry Pi. We see the good they do and the joy they bring and can’t help but be impressed. This year marks the beginning of a stronger collaboration between the folks at Raspberry Pi and us at Canonical. We are by no means done and still have a long way to go. But we have made strides in the right direction. This is a roundup of all things Ubuntu on Raspberry Pi in 2020.

Raspberry Pi Ubuntu 20.10 Desktop
This was the big one. On the 22nd of October, we announced the Ubuntu 20.10 release with FULL support for the Raspberry Pi 4 (4GB+). People have been trying to use Raspberry Pis as day-to-day desktop PCs for years. And while Raspberry Pi OS is great at what it does, it’s fair to say there are more mature distributions. When the Raspberry Pi 4 arrived with 4GB, enough RAM to run a full desktop, we got to work. Numerous teams at Canonical, including Desktop, Kernel, Foundations and the Certification team, engaged with the folks over at Raspberry Pi to collaborate on making Ubuntu a first-class experience.
Now you can get yourself a Raspberry Pi 4 (or a Pi 400), stick Ubuntu Desktop 20.10 on it, and it just works. No faff. We are committed to doing the same for every Ubuntu Desktop release in the future. This means you can do almost anything you do on an Ubuntu Desktop on a Raspberry Pi. You have access to the vast Ubuntu community, can browse the web, watch the latest films and TV shows, and develop with the latest open source technologies (that have ARM support… we’re working on it).
This release was our biggest for the Raspberry Pi to date and it generated the most discussion in the community. We printed limited edition Groovy Gorilla stickers for people to win in a little competition, or to get with their Pi 4s from our friends at Pimoroni. Plenty of folks sent in pictures of themselves making use of Raspberry Pi on Ubuntu. We had lots of good conversations on Reddit, Twitter and in the Ubuntu Discourse. We were even humbled to join Eben Upton, CEO of Raspberry Pi Trading, for a few joint interviews to spread the word.
On October 19th and November 2nd, the Raspberry Pi Foundation launched the Compute Module 4 and the Raspberry Pi 400. The CM4 has been a long-anticipated addition to their ‘industrial use case’ product line. We get a lot of interest from people developing embedded products using the compute modules who want to run Ubuntu. And the Pi 400 is a brand new form factor, imitating the likes of the Commodore 64 computers of the past.
Thanks to the work we do together, Ubuntu was supported on both platforms on the very same days they launched. Anyone wanting to develop IoT or industrial products using the CM4 can count on Ubuntu to just work, and turning your Pi 400 into an Ubuntu Desktop is just as easy. I know, it’s what I’m writing this on.
Both launches went wonderfully well and we published a complementary blog alongside the Pi 400 launch on the 2nd. Whatever comes next, it is our pleasure to provide Ubuntu support from day 0 and users can count on the experience only getting better. We still have some work to do to truly make the most of the Raspberry Pi, but that’s where we are going.
Canonical has had a form of Raspberry Pi support for years – for Ubuntu Server and Ubuntu Core, that is. This year marks a much deeper level of support. Engineering teams across Canonical have bought in. More and more of the products and software we develop are made to work on the Raspberry Pi, spanning from the Ubuntu Desktop to MicroK8s, a lightweight, opinionated version of Kubernetes, and LXD, a next-generation system container manager.
For those who don’t know, Kubernetes and container management are buzz words for technologies used in Data Centres and clouds by the biggest companies in the world. A problem Canonical and many others have been trying to solve is making these technologies more accessible. To make it so any developer with an interest can take advantage of it. We believe bringing support for these technologies to Raspberry Pi is a leap in the right direction. It means you can test locally or bring the capabilities of a cloud down to a cluster of Raspberry Pis.
But talk is cheap. What can you actually do? This year we released three tutorials to take advantage of these production-ready technologies: a MicroK8s clustering tutorial to get started, where we show you how to get a cluster set up and ready for whatever tickles your fancy; an LXD appliance homelab tutorial for setting up testing or developer environments across numerous Raspberry Pis; and a highly available MicroK8s PiHole tutorial that enables you to run the famous PiHole ad blocker distributed across a robust, highly available cluster.

The Raspberry Pi Imager (snap)
The Raspberry Pi Imager turned out to be an even more useful tool than we had expected. A little application that makes flashing an SD card for a Raspberry Pi as simple as a few clicks. While there are still numerous ways to flash an SD card with an image, this little app makes it that much easier, for us and for you. Our ‘how-to’ instructions are that much easier. And once it was released, we were right there.
Not only can you use the imager application to flash an SDcard with Ubuntu Desktop, Server and Core, but you can do it all with a snap. Days after the release the wonderful developer advocate Alan Pope jumped on it and packaged it in a snap. So while it is easier than ever to flash an SDcard, it is also easier than ever to get the app. Just:
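Assuming the snap is published under the name rpi-imager (its name in the Snap Store – an assumption on my part, as the original command was lost in extraction):

```shell
# assuming the snap name is rpi-imager
sudo snap install rpi-imager
```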
Or you can go to the website, of course; we’re there too.

Ubuntu Appliances on Raspberry Pi
Not long after the imager was released came the start of the Ubuntu Appliance portfolio: a catalogue of images that, with a few clicks and a few commands, enable you to turn a Raspberry Pi into a single-purpose IoT device that does one thing, perfectly. The Raspberry Pi was the obvious, easy and favourite option as the platform of choice. And as we add support for future Raspberry Pis, each Ubuntu Appliance will be available there too.
The portfolio is meant to encourage application developers to bring their software to the edge, to more accessible hardware. While the portfolio grows and we iron out the lumps and bumps, we were honoured to release the first 7 appliances this year. You can head over to our website to have a look, or to RaspberryPi.org, where they were kind enough to have me as a guest blogger.

Ubuntu 20.04 certified to ‘just work’
For the folks paying attention, I think this was our big signal of intent. This year we made a much bigger deal of announcing Ubuntu 20.04 certification on the Raspberry Pi. Ubuntu 20.04 is an LTS release, it receives continued support for 5 years, and just like we do for the ‘main’ Ubuntu images for workstations and PCs from Dell or Lenovo or HP, we guarantee that Ubuntu will ‘just work’ on a Raspberry Pi too, for five years.
Other Linux distributions don’t have a strict cadence or don’t provide significantly extended support. This means users and developers don’t know when their devices are going to get an upgrade or update. They have to take the risk of their deployments going out of date, unsupported, or unpatched. Bringing the Ubuntu images for Raspberry Pi officially into this support cycle means Pi users never have to worry again. And as far as I (and our hardware certification team) are concerned, they are single-board PCs.
One of the things we’re playing catch up on is making sure all of the Raspberry Pi ‘extras’ work just as well on Ubuntu as they do on Raspberry Pi OS. While we are committed to making this happen there is work still to do. But when the HQ camera was released this year, we had to do something. So we got it to work. You can go into your Raspberry Pi running Ubuntu and follow a couple of instructions and you’re away. In the new year, we’re going to be putting out much more information and tutorials around this one so stay tuned. But the HQ camera and native Bluetooth support were served up this year and built-in. The work left there is to make it as intuitive to use as possible.
With all of the above it’s no surprise, but still humbling, that there was a lot of talk online. We engaged in some fun and interesting discussions, some incredibly useful feedback and some less than helpful rants (but it wouldn’t be the internet without that).
Starting this month and continuing into the new year, we are also going to turn our attention to supporting the community. Earlier this month one of our Field Engineers, one Taiten Peng, gave a presentation to a group of Raspberry Pi enthusiasts in Taipei about Ubuntu on Raspberry Pi. Let’s call it a test drive – a way to get the word out and to help people get cracking. We want to give some talks, do some presentations, maybe some workshops for Pi communities who might be interested. Virtually, for a while, of course.
2020 was many things, but we at Canonical are grateful to call it the year that Raspberry Pi came to tea. From a few emails to some engineering collaboration to a full Ubuntu Desktop, committed and supported on the fully certified hardware, that is, the Raspberry Pi (4). To conclude I’d like to express thanks to the Raspberry Pi Foundation and to Raspberry Pi Trading for the great work they do, the great product they produce and for the hours various Pi people have spent on calls with us.
Boy, I hope I haven’t forgotten anything. See you in 2021.
After an unexpectedly short discussion on debian-project, we’re moving forward with this new initiative. The Debian security team submitted a project proposal requesting some improvements to tracker.debian.org, and since nobody of the security team wants to be paid to implement the project, we have opened a request for bids to find someone to implement this on a contractor basis.
If you can code in Python following test-driven development and know the Django framework, feel free to submit a bid! Ideally you have some experience with the security tracker too, but that’s not a strong requirement.

About the project
If you haven’t read the discussion on debian-project, Freexian is putting aside part of the money collected for Debian LTS to use it to fund generic Debian development projects. The goal is two-fold:
We have tried to formalize a process to follow from project submission up to its implementation in this salsa project:
We highly encourage the above-mentioned Debian teams to make proposals. A member of those teams can implement the project and be paid for it. Or they can decide to let someone else implement it (we expect some of the paid LTS contributors to be willing to implement such projects), and just play the reviewer role, steering the person doing the work in the right direction. Contrary to Google Summer of Code and other similar programmes, we put the focus on results (and not on recruiting new volunteers), so we expect to work with experienced people to implement the project. But if the reviewer is happy to be a mentor and spend more time, that’s OK with us too. The reviewer is (usually) not a paid position.
If you’re not among those teams, but you have a project that can have a positive impact on Debian LTS (even if only indirectly in the distant future), feel free to try your luck and submit a proposal.
Welcome to the Ubuntu Weekly Newsletter, Issue 661 for the week of December 6 – 12, 2020. The full version of this issue is available here.
In this issue we cover:
The Ubuntu Weekly Newsletter is brought to you by:
Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License
As you know from our previous post, back in 2019 the Kubuntu team set to work collaborating with MindShare Management Ltd to bring a Kubuntu dedicated laptop to the market. Recently, Chris Titus from the ‘Chris Titus Tech’ YouTube channel acquired a Kubuntu Focus M2 for the purpose of reviewing it, and he was so impressed he has decided to keep it as his daily driver. That’s right; Chris has chosen the Kubuntu Focus M2 instead of the Apple MacBook Pro M1 that he had intended to get. That is one Awesome recommendation!
Chris stated that the Kubuntu Focus was “The most unique laptop, and I am not talking about the Apple M1, and neither I am talking about AMD Ryzen.”
In the review on his channel, not only did he put our Kubuntu-based machine through its software paces, he also took the hardware to pieces and demonstrated the high quality build. Chris made light work of opening the laptop up and installing additional hardware, and he went on to say, “The whole build out is using branded, high quality parts, like the Samsung EVO Plus, and Crucial memory; not some cheap knock-off.”
The Kubuntu Focus team have put a lot of effort into matching the software selection and operating system to the hardware. This ensures that users get the best possible performance from the Kubuntu Focus package. As Chris says in his review video “The tools, scripts and work this team has put together has Impressed the hell out of me!”
Kubuntu’s power optimizations, along with a GPU switcher that makes it super simple to change between the discrete Nvidia GPU and the integrated Intel GPU, impressed Chris a lot: “I was able to squeeze 7 to 8 hours out of it on battery, absolutely amazing!” he said.
The Kubuntu Focus is an enterprise-ready machine, and arguably ‘The Ultimate Linux Laptop’. In his video, Chris goes on to demonstrate that the Kubuntu Focus includes Insync integration support for DropBox, OneDrive and GoogleDrive file sharing.
The Kubuntu Focus is designed from the get-go to be a transition device, providing Apple MacBook and Microsoft Windows users with a Cloud Native device in a laptop format which delivers desktop computing performance.
Chris ran our machine through a variety of benchmark testing tools, and the results are super impressive “Deep Learning capabilities are unparalleled, but more impressive is that it is configured for deep learning out of the box, and took just 10 minutes to be up and running. This is the best mobile solution you could possibly get.” Chris states.
To bring this article to a close, it would be remiss of me not to mention Chris Titus’s experience with the support provided by the Kubuntu Focus team. Chris was able to speak directly to the engineering team and get fast, accurate answers to all his questions. Chris says, “Huge shout out to the support team, I am beyond impressed.”
Congratulations to the support team at MindShare Management Ltd; delivering great customer support is very challenging, and their experience and expertise are clearly coming across to their customers.
Wow! This is a monumental YouTube review of Kubuntu, and the whole Kubuntu community should congratulate themselves for creating ‘The Ultimate Linux Desktop’, which is being used to build ‘The Ultimate Linux Laptop’. Below is the YouTube review from the ‘Chris Titus Tech’ YouTube channel. Check it out, and see for yourself how impressed he is with this machine. Do remember to share this article.
About the Author:
Rick Timmis is a Kubuntu Councillor and advocate. Rick has been a user of and open contributor to Kubuntu for over 10 years, and a KDE user and contributor for 20.
BlackArch 2020.12.1 Out
Nitrux 1.3.5 Out
KDE Plasma 5.20.4 Out
OpenZFS 2.0 Out
Linux Kernel 5.10 rc6 Out
System76 Pangolin Announced