Planet Ubuntu

Ubuntu Blog: MicroK8s Gets Powerful Add-ons

11 hours 36 minutes ago

We are excited to announce new Cilium and Helm add-ons coming to MicroK8s! These add-ons bring even more power to your Kubernetes environment built on MicroK8s. The Cilium CNI plugin brings enhanced networking features, including Kubernetes NetworkPolicy support, to MicroK8s. You’ll also get direct CLI access to Cilium within MicroK8s using the microk8s.cilium wrapper.

If you do not already have a version of cilium installed, you can alias microk8s.cilium to cilium using the following command:
snap alias microk8s.cilium cilium

Helm, the package manager for Kubernetes, will allow even easier management of your MicroK8s environment.

How hard is it going to be to use these add-ons in MicroK8s? For those of you familiar with MicroK8s, you guessed it: a simple one-liner!

Cilium: microk8s.enable cilium

Helm:  microk8s.enable helm

Kudos to the Cilium team for making it happen – shout out to Joe Stringer for submitting the PR for these add-ons! 


About Cilium

Cilium is open source software that provides and secures the network and API connectivity between application services deployed using Linux container management platforms. Running Cilium in MicroK8s will enable powerful pod-to-pod connectivity management and service load balancing between pods.

You will be able to reach specific pods in your K8s cluster as well as define network security policies for connectivity. Using the Kubernetes Services abstraction, you can load-balance network traffic between pods. For more details, check out the Cilium documentation.

About Helm

Using Helm within Microk8s allows you to manage, update, share and rollback Kubernetes applications. Helm is maintained by the CNCF – in collaboration with Microsoft, Google, Bitnami and the Helm contributor community. For more details, check out Helm.

What’s next?

We are incredibly proud of our community and the contributions you make! Canonical looks forward to continuing to work together and building the best experiences to empower innovation and the open-source ecosystem.

 
We will continue shaping the future of MicroK8s with exciting additions. If you’re working on an exciting project and have feature requests or suggestions, please contribute or reach out to us on GitHub, the Kubernetes forum, or Slack (#microk8s channel), or tag us @canonical or @ubuntu on Twitter (#MicroK8s).

Daniel Holbach: What’s been happening in Ignite

11 hours 49 minutes ago

First of all: thanks to Dennis Marttinen and Lucas Käldström for helping write this up.

It’s been only a bit over a month since Weave Ignite was announced to the world (others talked about it as well). Time to catch up on what has happened in the meantime; the team around it has been busy.

If you’re new to Weave Ignite, it’s an open source VM manager with a container UX and built-in GitOps management (check out the docs). It’s built on top of Firecracker, which has proven able to run 4000 micro-VMs on the same host. Time to give it a go, right?

Since the initial announcement, 43 people have contributed to the project on GitHub, and 20 have had commits merged in the repo. Thanks to every one of you:

@BenTheElder, @DieterReuter, @PatrickLang, @Strum355, @akshaychhajed, @alex-leonhardt, @alexeldeib, @alexellis, @andrelop, @andrewrynhard, @aojea, @arun-gupta, @asaintsever, @chanwit, @curx, @danielcb, @dholbach, @hbokh, @jiangpengcheng, @junaid18183, @kim3z, @liwei, @luxas, @najeal, @neith00, @paavan98pm, @patrobinson, @pditommaso, @praseodym, @prologic, @robertojrojas, @rugwirobaker, @saiyam1814, @seeekr, @sftim, @silenceshell, @srinathgs, @stealthybox, @taqtiqa-mark, @twelho, @tyhal, @vielmetti, @webwurst

Since then the team got four releases out the door. Let’s go through the big changes one by one and why they matter:

  • Lots of bug fixes, enhanced stability, more tests and more and better docs (find them here)
  • Support for Persistent Storage, ARM64, manifest directories, the improved v1alpha2 API, both declarative and imperative VM management
  • More pre-built VM images (currently there are images based on Ubuntu, CentOS, Amazon Linux, Alpine and OpenSUSE + a kubeadm image)
  • ignited was introduced to move Ignite towards a client-server model and improve VM lifecycle management
  • The Docker-like UX has been further improved, now also featuring ‘ignite exec’
  • Read-write GitOps support: status updates/changes (e.g. IP addresses) are now pushed back to the repository

It’s impressive how such a young project got all of this together in such a short amount of time (only around a month).


We have also been busy growing our community. As mentioned above, documentation was an important part of this: API docs, a very solid CLI reference and short tutorials to get you started were the key.

We also started a mailing list and regular community Ignite developer meetings. These happen Mondays at 15:00 UTC (what’s UTC?) and are meant to get people together who are generally interested in Ignite and want to learn more and potentially help out as well. Project authors Lucas Käldström and Dennis Marttinen are always very approachable, but here especially they made a point of introducing everyone to the goals behind Ignite, its roadmap and the currently ongoing work.

We’ve recorded all of the meetings. Meeting Notes are available too (Please join weaveworks-ignite on Google Groups to get write access).

Here’s what we covered so far:

  • 1st meeting:
    • Team introductions
    • Demo of Ignite
    • Roadmap overview
    • Current work-in-progress
  • 2nd meeting:
    • What’s coming in v0.5.0?
    • Roadmap for v0.6.0
    • Integration with Kubernetes through Virtual Kubelet
    • How to contribute to Ignite
  • 3rd meeting
    • v0.5.0 and v0.5.1 released
    • GitOps Toolkit is being split out – what is it for?
    • Footloose integration – what is it about?
    • Coming up: containerd support
    • Discussion of application logging

And this is where you come in… our next meeting is on Monday, 26th August 2019 at 15:00 UTC, and we have an action-packed agenda:

  • containerd integration
  • CNI integration
  • The GitOps Toolkit
  • Code walk-through / project architecture
  • Discussion: what would you like to see in Ignite? What do/could you use it for?
  • Releasing v0.6.0
  • <you can still add your own agenda item here>

We are very excited to see the direction Ignite is taking, particularly because it contributes a lot to the ecosystem. How?

We realised that all the GitOps functionality of Ignite would be useful to the rest of the world, so we split it out into the GitOps Toolkit.

The team is also working on containerd integration, so that you don’t need Docker installed to run Ignite VMs. Why does Ignite require a container runtime to be present? Because Ignite integrates with the container world, so you can seamlessly run both VMs and containers next to each other. containerd is super lightweight, as is Firecracker, so pairing them with Ignite makes a lot of sense!

If the above sounds exciting to you and your project, please share the news and meet up with us on Monday. We look forward to seeing you there!

But that’s not all. This is just where we felt Ignite could make a difference. If you have your own ideas, own use-cases, issues or challenges, please let us know and become part of the team – even if it’s just by giving us feedback! If you’d like to get inspiration about what others are doing with Ignite, or add your own project, check out the awesome-ignite page.

If you are interested in helping out, that’s fantastic! The meeting should be interesting for you too. If you can’t wait, check out our contributors guide and our open issues. If you are interested in writing docs, adding comments, testing, filing issues or getting your feet wet in the project, we’re all there and happy to help.

We’ll have more news on Ignite soon. But for today’s update we are signing off with a bittersweet announcement: from September, Lucas and Dennis will step down as project maintainers in order to embark on a new adventure: Aalto University in Helsinki! They have started something very remarkable and we could not be happier for them. Watch this space for more news.

If you’d like to join the journey, you can do so here:

David Tomaschik: CVE-2019-10071: Timing Attack in HMAC Verification in Apache Tapestry

22 hours 39 minutes ago
Description

Apache Tapestry uses HMACs to verify the integrity of objects stored on the client side. This was added to address the Java deserialization vulnerability disclosed in CVE-2014-1972. In the fix for that vulnerability, the HMACs were compared with a plain string comparison, which is known to be vulnerable to timing attacks.
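
The flaw class is worth spelling out: a naive string comparison returns as soon as a byte differs, so the response time leaks how many leading bytes of a forged HMAC were correct, letting an attacker recover a valid MAC byte by byte. The standard fix is a constant-time comparison. As an illustrative sketch (in Python rather than Tapestry's Java, using the standard library's hmac module):

import hmac

def verify_mac(expected: bytes, received: bytes) -> bool:
    # hmac.compare_digest takes time independent of where the
    # inputs differ, unlike == on bytes or str
    return hmac.compare_digest(expected, received)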

Affected Versions
  • Apache Tapestry 5.3.6 through current releases.
Mitigation

No new release of Tapestry has occurred since the issue was reported. Affected organizations may want to consider locally applying commit d3928ad44714b949d247af2652c84dae3c27e1b1.

Timeline
  • 2019-03-12: Issue discovered.
  • 2019-03-13: Issue reported to security@apache.org.
  • 2019-03-29: Pinged thread to ask for update.
  • 2019-04-19: Fix committed.
  • 2019-04-23: Asked about release timeline; response: “in the upcoming months”.
  • 2019-05-28: Pinged again about release.
  • 2019-06-24: Asked again and requested the assigned CVE number. No update on timeline.
  • 2019-08-22: Disclosure posted.
Credit

This vulnerability was discovered by David Tomaschik of the Google Security Team.

Ubuntu Podcast from the UK LoCo: S12E20 – Outrun

1 day 15 hours ago

This week we’ve been experimenting with lean podcasting and playing Roguelikes. We discuss what goes on at a Canonical Roadmap Sprint, bring you some command line love and go over all your feedback.

It’s Season 12 Episode 20 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Stuart Langridge are connected and speaking to your brain.

In this week’s show:

  • We discuss what we’ve been up to recently:
    • Alan has been creating a lean podcast – TeleCast with popey.
    • Mark has been playing Roguelikes.
  • We discuss what goes on at a Canonical Product Roadmap Sprint.

  • We share a Command Line Lurve:

    • Ctrl+X – Expand a character
  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

  • Image taken from Outrun arcade machine manufactured in 1986 by Sega.

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

Ubuntu Blog: Useful security software from the Snap Store

1 day 20 hours ago

Overall, most Linux distributions offer sane, reasonable defaults that balance security and functionality quite well. However, most of the security mechanisms are transparent, running in the background, and you still might require some additional, practical software to bolster your security array. Back in July, we talked about handy productivity applications available in the Snap Store, and today we’d like to take a glimpse at the security category, and review several cool, interesting snaps.

KeePassXC

Once upon a time, password management was a simple thing. There were few services around, the Internet was a fairly benign place, and we often used the same combo of username and password for many of them. But as the Internet grew and the threat landscape evolved, the habits changed.

In the modern Web landscape, there are thousands of online services, and many sites also require logins to allow you to use their full functionality. With data breaches a common phenomenon nowadays, tech-savvy users have adopted a healthier practice of avoiding credentials re-use. However, this also creates a massive administrative burden, as people now need to memorize hundreds of usernames and their associated passwords.

The solution to this fairly insurmountable challenge is the use of secure, encrypted digital password wallets, which allow you to keep track of your endless list of sites, services and their relevant credentials.

KeePassXC does exactly that. The program comes with a simple, fairly intuitive interface. On first run, you will be able to select your encryption settings, including the ability to use KeePassXC in conjunction with a YubiKey. Once the application is configured, you can then start adding entries, including usernames, passwords, any notes, links to websites, and even attachments. The contents are stored in a database file, which you can easily port or copy, so you also gain an element of extra flexibility – as well as the option to back up your important data.

BitWarden

Side by side with KeePassXC, BitWarden is a free, open-source password manager. After the application is installed, you need to create an account. Then, you can populate your database (vault) with entries, including login names, passwords and other details, like card numbers and secure notes. BitWarden uses strong security, and the encrypted vault is synced across devices. This gives you additional portability, as well as an element of necessary redundancy, which is highly important for something like a password database.

BitWarden also includes a Premium version, which offers 1 GB encrypted storage and support for YubiKey and other 2FA hardware devices. The application also allows you to use PIN locking, and arrange your items into folders.

Secrethub-cli

Given that we’ve discussed password management, the next logical step is to talk about collaborative development, configuration files and passwords (secrets) that sometimes need to be used or shared in projects. If you use public repositories (or even private ones), there is always some risk in keeping credentials out in the open.

Secrethub-cli is designed to provide a workaround to this issue by allowing developers to store necessary credentials (like database usernames and passwords) inside encrypted vaults, and then inject them into configuration files only when necessary.

You start by signing up for an account, after which you can use the command-line interface to populate your vault. The next step is to create template files (.tpl) with specifically defined secret placeholders, and then pass the files to secrethub-cli, which will inject the right credentials based on the provided placeholders (username and password) and print the result to standard output or, if you prefer, into a service configuration file for your application.

cat example.config.tpl | secrethub inject

This way, the command will run correctly if the right secrethub-cli account is used, but it won’t work for anyone else, allowing reliable sharing of project work. The application is available for free for personal projects.

Wormhole

This software might very well be familiar to you, as we discussed Wormhole in greater detail several months ago. It is an application designed to allow two end systems to exchange files in a safe, secure manner. Rather than using email or file sharing services, you can send content to your friends and colleagues directly, using Wormhole codes, which allow the two sides to identify one another and exchange data. Wormhole is a command-line program, but it is relatively simple to use. It also offers unlimited data transfers, and can work with directories too (and not just individual files).

Livepatch

System restarts can be a nuisance, and might lead to a (temporary) loss of productivity. Sometimes though, they are necessary, especially if your machine has just received a slew of security updates. Livepatch is a Canonical tool, offering rebootless kernel patching. It runs as a service on a host and occasionally applies patches to the kernel, which will be used until a full kernel update and the subsequent restart. It is a convenient and practical solution, especially in the mission-critical server environment.

However, home users can benefit from this product too. Livepatch is available for free to Ubuntu users on LTS releases (like 16.04 or 18.04). The only additional requirement is that you register for an Ubuntu SSO account, which will provide you with a token that you can then use to enable the Livepatch service on up to three systems (for free).

snap install canonical-livepatch
canonical-livepatch enable "token"

Once Livepatch is installed and enabled, it will run in the background, doing its job. Livepatch fixes cannot be created for every single kernel vulnerability, but a large number of them can be mitigated, dispensing with the need for frequent reboots. You can always check the status of the service on the command line to see that it is working:

canonical-livepatch status

Summary

We hope you enjoyed this piece. Software security often has a somber angle, but we’d like to believe that today’s blog post dispels that notion. Exercising practicality, preserving data integrity and protecting your important information does not have to be an arduous and difficult task. In fact, you might even enjoy yourself.

We would also suggest you visit the Snap Store and explore; who knows, you might find some rather useful applications that you hadn’t thought of or known about before. If you have any comments, please join our forum for a discussion.

Photo by Jason Blackeye on Unsplash.

Ubuntu Blog: Jupyter looks to distro-agnostic packaging for the democratisation of installation

2 days 19 hours ago

When users of your application range from high school students to expert data scientists, it’s often wise to avoid any assumptions about their system configurations. The Jupyter Notebook is popular with a diverse user base, enabling the creation and sharing of documents containing live code, visualisations, and narrative text. The app uses processes (kernels) to run interactive code in different programming languages and send output back to the user. Filipe Fernandes has a key responsibility in the Jupyter community for its packaging and ease of installation. At the 2019 Snapcraft Summit in Montreal, he gave us his impressions of snaps as a tool to improve the experience for all concerned.

“I’m a packager and a hacker, and I’m also a Jupyter user. I find Jupyter to be great as a teaching tool. Others use it for data cleaning and analysis, numerical simulation and modelling, or machine learning, for example. One of the strengths of Jupyter is that it is effectively language agnostic. I wanted Jupyter packaging to be similar, distro-agnostic, if you like.”

Filipe had heard about snaps a while back, but only really discovered their potential after he received an invitation to the Snapcraft Summit and noticed that Microsoft Visual Studio Code had recently become available as a snap. The ease of use of snaps was a big factor for him. “I like things that just work. I often get hauled in to sort out installation problems for other users – including members of my own family! It’s great to be able to tell them just to use the snap version of an application. It’s like, I snap my fingers and the install problems disappear!”

At the Summit, getting Snapcraft questions answered was easy too. “Every time I hit a snag, I raised my hand, and someone helped me.” Filipe was able to experiment with packaging trade-offs for Jupyter snaps. “I made a design choice to make the overall Jupyter package smaller by not including the Qt console. Most people just want the browser interface anyway. Similarly, I excluded the dependency for converting Jupyter Notebooks to other formats via pandoc. The size of the Jupyter snap then decreased from about 230 MB to just 68 MB”. 

What would he like to see in the Snapcraft of tomorrow? “There are some technical tasks to be done for each Jupyter snap, like declaring features of plug-ins and setting different permissions. It would be nice to find a way for automating these tasks, so that they do not have to be done manually each time a snap is built. Also, it’s not always easy to see which parts of the Snapcraft documentation are official and which are from enthusiastic but unsanctioned users.” Filipe suggests that creating a ‘verified publisher’ status or certification could be an answer, helping other users to decide how they want to consider different contributions to the documentation.  

A stable Jupyter snap is now available from the Snap Store, providing Jupyter users another installation option beyond the official sources. Filipe and the Jupyter community have been working on promoting it via banners and blogs. “Some people get overwhelmed by the amount of information out there, especially when they start Googling options. I think snaps is a way to shortcut that,” adds Filipe. He recommends that other developers who want to get to this level should also come to the Summit. “The interactions here are so quick, to the point that I felt very productive within a really small amount of time, like I’d accomplished weeks of work. It’s awesome to be here and I’m looking forward to the next one.”

Install the community managed Jupyter snap here

Ubuntu Blog: How to add a linter to ROS 2

2 days 20 hours ago

A well configured linter can catch common errors before code is even run or compiled. ROS 2 makes it easy to add linters of your choice and make them part of your package’s testing pipeline.

We’ll step through the process, from start to finish, of adding a linter to ament so it can be used to automatically test your projects. We’ll try to keep it generic, but where we need to lean on an example we’ll be referring to the linter we recently added for mypy, a static type analyzer for Python. You can view the finished source code for ament_mypy and ament_cmake_mypy.

Design

We’ll need to make sure our linter integrates into ament’s testing pipeline. Namely, this means writing CMake scripts to integrate with ament_cmake_test and ament_lint_auto.

We need to be able to generate a JUnit XML report for the Jenkins build farm to parse, as well as handle automatically excluding directories with AMENT_IGNORE files, so we’ll need to write a wrapper script for our linter as well.

Overall, we’ll need to write the following packages:

  • ament_[linter]
    • CLI wrapper for linter
      • Collect files, ignore those in AMENT_IGNORE directories
      • Configure and call linter
      • Generate XML report
  • ament_cmake_[linter]
    • Set of CMake scripts
      • ament_[linter].cmake
        • Function to invoke linter wrapper
      • ament_cmake_[linter]-extras.cmake
        • Script to hook into ament_lint_auto
        • Registered at build as the CONFIG_EXTRA argument to ament_package
      • ament_cmake_[linter]_lint_hook.cmake
        • Hook script for ament_lint
Getting Started – Python

We’ll start with making the ament_[linter] package.

We’ll be using Python to write this package, so we’ll add a setup.py file, and fill out some required fields. It’s easiest to just take one from an existing linter and customize it. What it ends up containing will be specific to the linter you’re adding, but for mypy it looks like this:

from setuptools import find_packages
from setuptools import setup

setup(
    name='ament_mypy',
    version='0.7.3',
    packages=find_packages(exclude=['test']),
    package_data={'': [
        'configuration/ament_mypy.ini',
    ]},
    install_requires=['setuptools'],
    zip_safe=False,
    author='Ted Kern',
    author_email='<email>',
    maintainer='Ted Kern',
    maintainer_email='<email>',
    url='https://github.com/ament/ament_lint',
    download_url='https://github.com/ament/ament_lint/releases',
    keywords=['ROS'],
    classifiers=[
        'Intended Audience :: Developers',
        'License :: OSI Approved :: Apache Software License',
        'Programming Language :: Python',
        'Topic :: Software Development',
    ],
    description='Check Python static typing using mypy.',
    long_description="""\
The ability to check code for user specified static typing with mypy.""",
    license='Apache License, Version 2.0',
    tests_require=['pytest', 'pytest-mock'],
    entry_points={
        'console_scripts': [
            'ament_mypy = ament_mypy.main:main',
        ],
    },
)

We’ll of course need a package.xml file. We’ll need to make sure it has an <exec_depend> on the linter’s package name in ROSDistro. If it’s not there, you’ll need to go through the process of adding it. This is required in order to actually install the linter itself as a dependency of our new ament linter package; without it, any CI tests that use the linter would fail. Here’s what it looks like for mypy:

<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
  <name>ament_mypy</name>
  <version>0.7.3</version>
  <description>Support for mypy static type checking in ament.</description>
  <maintainer email="me@example.com">Ted Kern</maintainer>
  <license>Apache License 2.0</license>
  <author email="me@example.com">Ted Kern</author>

  <exec_depend>python3-mypy</exec_depend>

  <export>
    <build_type>ament_python</build_type>
  </export>
</package>

The Code

Create a python file called ament_[linter]/main.py, which will house all the logic for this linter. Below is a sample skeleton of a linter, again attempting to be generic where possible but nonetheless based on ament_mypy:

#!/usr/bin/env python3

import argparse
import os
import re
import sys
import textwrap
import time
from typing import List, Match, Optional, Tuple
from xml.sax.saxutils import escape
from xml.sax.saxutils import quoteattr

# Import your linter here
import mypy.api  # type: ignore


def main(argv: Optional[List[str]] = None) -> int:
    if not argv:
        argv = []
    parser = argparse.ArgumentParser(
        description='Check Python static typing using mypy.')
    parser.add_argument(
        'paths',
        nargs='*',
        default=[os.curdir],
        help='The files or directories to check. For directories files ending '
             "in '.py' will be considered."
    )
    parser.add_argument(
        '--exclude',
        metavar='filename',
        nargs='*',
        dest='excludes',
        help='The filenames to exclude.'
    )
    parser.add_argument(
        '--xunit-file',
        help='Generate a xunit compliant XML file'
    )
    # Example of a config file specification option
    parser.add_argument(
        '--config',
        metavar='path',
        dest='config_file',
        default=os.path.join(os.path.dirname(__file__),
                             'configuration', 'ament_mypy.ini'),
        help='The config file'
    )
    # Example linter specific option
    parser.add_argument(
        '--cache-dir',
        metavar='cache',
        default=os.devnull,
        dest='cache_dir',
        help='The location mypy will place its cache in. Defaults to system '
             'null device'
    )
    args = parser.parse_args(argv)

    if args.xunit_file:
        start_time = time.time()

    if args.config_file and not os.path.exists(args.config_file):
        print("Could not find config file '{}'".format(args.config_file),
              file=sys.stderr)
        return 1

    filenames = _get_files(args.paths)
    if args.excludes:
        filenames = [f for f in filenames
                     if os.path.basename(f) not in args.excludes]
    if not filenames:
        print('No files found', file=sys.stderr)
        return 1

    normal_report, error_messages, exit_code = _generate_linter_report(
        filenames,
        args.config_file,
        args.cache_dir
    )

    if error_messages:
        print('mypy error encountered', file=sys.stderr)
        print(error_messages, file=sys.stderr)
        print('\nRegular report continues:')
        print(normal_report, file=sys.stderr)
        return exit_code

    errors_parsed = _get_errors(normal_report)

    print('\n{} files checked'.format(len(filenames)))
    if not normal_report:
        print('No errors found')
    else:
        print('{} errors'.format(len(errors_parsed)))

    print(normal_report)

    print('\nChecked files:')
    print(''.join(['\n* {}'.format(f) for f in filenames]))

    # generate xunit file
    if args.xunit_file:
        folder_name = os.path.basename(os.path.dirname(args.xunit_file))
        file_name = os.path.basename(args.xunit_file)
        suffix = '.xml'
        if file_name.endswith(suffix):
            file_name = file_name[:-len(suffix)]
        suffix = '.xunit'
        if file_name.endswith(suffix):
            file_name = file_name[:-len(suffix)]
        testname = '{}.{}'.format(folder_name, file_name)

        xml = _get_xunit_content(errors_parsed, testname, filenames,
                                 time.time() - start_time)
        path = os.path.dirname(os.path.abspath(args.xunit_file))
        if not os.path.exists(path):
            os.makedirs(path)
        with open(args.xunit_file, 'w') as f:
            f.write(xml)

    return exit_code


def _generate_linter_report(paths: List[str],
                            config_file: Optional[str] = None,
                            cache_dir: str = os.devnull) -> Tuple[str, str, int]:
    """Replace this section with code specific to your linter."""
    pass


def _get_xunit_content(errors: List[Match],
                       testname: str,
                       filenames: List[str],
                       elapsed: float) -> str:
    xml = textwrap.dedent("""\
        <?xml version="1.0" encoding="UTF-8"?>
        <testsuite
          name="{test_name:s}"
          tests="{test_count:d}"
          failures="{error_count:d}"
          time="{time:s}"
        >
        """).format(
        test_name=testname,
        test_count=max(len(errors), 1),
        error_count=len(errors),
        time='{:.3f}'.format(round(elapsed, 3))
    )

    if errors:
        # report each linter error/warning as a failing testcase
        for error in errors:
            pos = ''
            if error.group('lineno'):
                pos += ':' + str(error.group('lineno'))
                if error.group('colno'):
                    pos += ':' + str(error.group('colno'))
            xml += _dedent_to("""\
                <testcase
                  name={quoted_name}
                  classname="{test_name}"
                >
                  <failure message={quoted_message}/>
                </testcase>
                """, '  ').format(
                quoted_name=quoteattr(
                    '{0[type]} ({0[filename]}'.format(error) + pos + ')'),
                test_name=testname,
                quoted_message=quoteattr('{0[msg]}'.format(error) + pos)
            )
    else:
        # if there are no mypy problems report a single successful test
        xml += _dedent_to("""\
            <testcase
              name="mypy"
              classname="{}"
              status="No problems found"/>
            """, '  ').format(testname)

    # output list of checked files
    xml += '  <system-out>Checked files:{escaped_files}\n  </system-out>\n'.format(
        escaped_files=escape(''.join(['\n* %s' % f for f in filenames]))
    )
    xml += '</testsuite>\n'
    return xml


def _get_files(paths: List[str]) -> List[str]:
    files = []
    for path in paths:
        if os.path.isdir(path):
            for dirpath, dirnames, filenames in os.walk(path):
                if 'AMENT_IGNORE' in filenames:
                    dirnames[:] = []
                    continue
                # ignore folder starting with . or _
                dirnames[:] = [d for d in dirnames if d[0] not in ['.', '_']]
                dirnames.sort()
                # select files by extension
                for filename in sorted(filenames):
                    if filename.endswith('.py'):
                        files.append(os.path.join(dirpath, filename))
        elif os.path.isfile(path):
            files.append(path)
    return [os.path.normpath(f) for f in files]


def _get_errors(report_string: str) -> List[Match]:
    return list(re.finditer(
        r'^(?P<filename>([a-zA-Z]:)?([^:])+):((?P<lineno>\d+):)?((?P<colno>\d+):)?'
        r'\ (?P<type>error|warning|note):\ (?P<msg>.*)$',
        report_string, re.MULTILINE))


def _dedent_to(text: str, prefix: str) -> str:
    return textwrap.indent(textwrap.dedent(text), prefix)


if __name__ == '__main__':
    sys.exit(main(sys.argv[1:]))

We’ll break this down into chunks.

Main Logic

We write the file as an executable and use the argparse library to parse the invocation, so we begin the file with the shebang:

#!/usr/bin/env python3

and end it with the main logic:

if __name__ == '__main__':
    sys.exit(main(sys.argv[1:]))

to forward failure codes out of the script.

The main() function will host the bulk of the program’s logic. Define it, and make sure the entry_points argument in setup.py points to it.

def main(argv: Optional[List[str]] = None) -> int:
    if not argv:
        argv = []

Notice the use of type hints; mypy will perform static type checking where possible and where these hints are designated.
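
As a quick illustration (a hypothetical snippet, not taken from ament_mypy), the call below would only fail at runtime, but mypy flags it before the code is ever executed:

def count_lines(report: str) -> int:
    return len(report.splitlines())

count_lines(None)  # mypy: error: Argument 1 to "count_lines" has incompatible type "None"; expected "str"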

Parsing the Arguments

We add the arguments to argparse that ament expects:

parser.add_argument(
    'paths',
    nargs='*',
    default=[os.curdir],
    help='The files or directories to check. For directories files ending '
         "in '.py' will be considered."
)
parser.add_argument(
    '--exclude',
    metavar='filename',
    nargs='*',
    dest='excludes',
    help='The filenames to exclude.'
)
parser.add_argument(
    '--xunit-file',
    help='Generate a xunit compliant XML file'
)

We also include any custom arguments, or args specific to the linter. For example, for mypy we also allow the user to pass in a custom config file to the linter, with a pre-configured default already set up:

# Example of a config file specification option
parser.add_argument(
    '--config',
    metavar='path',
    dest='config_file',
    default=os.path.join(os.path.dirname(__file__),
                         'configuration', 'ament_mypy.ini'),
    help='The config file'
)

# Example linter specific option
parser.add_argument(
    '--cache-dir',
    metavar='cache',
    default=os.devnull,
    dest='cache_dir',
    help='The location mypy will place its cache in. Defaults to system '
         'null device'
)

Note: remember to include any packaged non-code files (like default configs) using a manifest or package_data= in setup.py.

Finally, parse and validate the args:

args = parser.parse_args(argv)

if args.xunit_file:
    start_time = time.time()

if args.config_file and not os.path.exists(args.config_file):
    print("Could not find config file '{}'".format(args.config_file),
          file=sys.stderr)
    return 1

filenames = _get_files(args.paths)
if args.excludes:
    filenames = [f for f in filenames
                 if os.path.basename(f) not in args.excludes]
if not filenames:
    print('No files found', file=sys.stderr)
    return 1

Aside: _get_files

You’ll notice the call to the helper function _get_files, shown below. We use a snippet from the other linters to build up an explicit list of files to lint, in order to apply our exclusions and the AMENT_IGNORE behavior.

def _get_files(paths: List[str]) -> List[str]:
    files = []
    for path in paths:
        if os.path.isdir(path):
            for dirpath, dirnames, filenames in os.walk(path):
                if 'AMENT_IGNORE' in filenames:
                    dirnames[:] = []
                    continue
                # ignore folder starting with . or _
                dirnames[:] = [d for d in dirnames if d[0] not in ['.', '_']]
                dirnames.sort()
                # select files by extension
                for filename in sorted(filenames):
                    if filename.endswith('.py'):
                        files.append(os.path.join(dirpath, filename))
        elif os.path.isfile(path):
            files.append(path)
    return [os.path.normpath(f) for f in files]

Note that in the near future this and _get_xunit_content will hopefully be de-duplicated into the ament_lint package.

This function, when given a list of paths, expands out all files recursively and returns those .py files that don’t belong in directories containing an AMENT_IGNORE file.
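
For example, with a hypothetical layout where pkg/module.py sits next to a pkg/generated directory containing an AMENT_IGNORE marker file:

_get_files(['pkg'])
# -> ['pkg/module.py']
# pkg/generated is pruned entirely because of its AMENT_IGNORE file,
# and directories starting with '.' or '_' are skipped as well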

We exclude those files that are in the exclude argument list, and we return a failure from main if no files are left afterwards.

filenames = _get_files(args.paths)
if args.excludes:
    filenames = [f for f in filenames
                 if os.path.basename(f) not in args.excludes]
if not filenames:
    print('No files found', file=sys.stderr)
    return 1

Otherwise we pass those files, as well as relevant configuration arguments, to the linter.

Invoking the Linter

We call the linter using whatever API it exposes:

normal_report, error_messages, exit_code = _generate_linter_report(
    filenames,
    args.config_file,
    args.cache_dir
)
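
In ament_mypy the body of this function ends up being a thin wrapper around mypy’s Python API. A minimal sketch (assuming mypy.api.run, which returns a tuple of (normal report, error messages, exit status)):

def _generate_linter_report(paths: List[str],
                            config_file: Optional[str] = None,
                            cache_dir: str = os.devnull) -> Tuple[str, str, int]:
    args = ['--cache-dir', cache_dir]
    if config_file:
        args += ['--config-file', config_file]
    # mypy.api.run returns (normal_report, error_messages, exit_status)
    return mypy.api.run(args + paths)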

In the generic skeleton, it is abstracted behind the following method signature:

def _generate_linter_report(paths: List[str],
                            config_file: Optional[str] = None,
                            cache_dir: str = os.devnull) -> Tuple[str, str, int]:

Recording the Output

Any failures the linter reports are printed to stdout, while any internal linter errors go to stderr; in that case we return the (non-zero) exit code:

if error_messages:
    print('linter error encountered', file=sys.stderr)
    print(error_messages, file=sys.stderr)
    print('\nRegular report continues:')
    print(normal_report, file=sys.stderr)
    return exit_code

We collect each warning/error/note message emitted individually:

errors_parsed = _get_errors(normal_report)

We then report the errors to the user with something like:

print('\n{} files checked'.format(len(filenames)))
if not normal_report:
    print('No errors found')
else:
    print('{} errors'.format(len(errors_parsed)))

print(normal_report)

print('\nChecked files:')
print(''.join(['\n* {}'.format(f) for f in filenames]))

Generating JUnit XML Output

Here we generate an XML report and write the file to disk in the requested location.

if args.xunit_file:
    folder_name = os.path.basename(os.path.dirname(args.xunit_file))
    file_name = os.path.basename(args.xunit_file)
    suffix = '.xml'
    if file_name.endswith(suffix):
        file_name = file_name[:-len(suffix)]
    suffix = '.xunit'
    if file_name.endswith(suffix):
        file_name = file_name[:-len(suffix)]
    testname = '{}.{}'.format(folder_name, file_name)

    xml = _get_xunit_content(errors_parsed, testname, filenames,
                             time.time() - start_time)
    path = os.path.dirname(os.path.abspath(args.xunit_file))
    if not os.path.exists(path):
        os.makedirs(path)
    with open(args.xunit_file, 'w') as f:
        f.write(xml)

An example of valid output XML conforming to the schema is shown below:

<?xml version="1.0" encoding="UTF-8"?>
<testsuite
  name="tst"
  tests="4"
  failures="4"
  time="0.010"
>
  <testcase
    name="error (/tmp/pytest-of-ubuntu/pytest-164/use_me7/lc.py:0:0)"
    classname="tst"
  >
    <failure message="error message:0:0"/>
  </testcase>
  <testcase
    name="error (/tmp/pytest-of-ubuntu/pytest-164/use_me7/l.py:0)"
    classname="tst"
  >
    <failure message="error message:0"/>
  </testcase>
  <testcase
    name="error (/tmp/pytest-of-ubuntu/pytest-164/use_me7/no_pos.py)"
    classname="tst"
  >
    <failure message="error message"/>
  </testcase>
  <testcase
    name="warning (/tmp/pytest-of-ubuntu/pytest-164/use_me7/warn.py)"
    classname="tst"
  >
    <failure message="warning message"/>
  </testcase>
  <system-out>Checked files:
* /tmp/pytest-of-ubuntu/pytest-164/use_me7/lc.py
* /tmp/pytest-of-ubuntu/pytest-164/use_me7/l.py
* /tmp/pytest-of-ubuntu/pytest-164/use_me7/no_pos.py
* /tmp/pytest-of-ubuntu/pytest-164/use_me7/warn.py
  </system-out>
</testsuite>

Aside: _get_xunit_content

We write a helper function, _get_xunit_content, to format the XML output to the schema. This one is a bit specific to mypy, but hopefully it gives you a good idea of what’s needed:

def _get_xunit_content(errors: List[Match],
                       testname: str,
                       filenames: List[str],
                       elapsed: float) -> str:
    xml = textwrap.dedent("""\
        <?xml version="1.0" encoding="UTF-8"?>
        <testsuite
          name="{test_name:s}"
          tests="{test_count:d}"
          failures="{error_count:d}"
          time="{time:s}"
        >
        """).format(
        test_name=testname,
        test_count=max(len(errors), 1),
        error_count=len(errors),
        time='{:.3f}'.format(round(elapsed, 3))
    )

    if errors:
        # report each mypy error/warning as a failing testcase
        for error in errors:
            pos = ''
            if error.group('lineno'):
                pos += ':' + str(error.group('lineno'))
                if error.group('colno'):
                    pos += ':' + str(error.group('colno'))
            xml += _dedent_to("""\
                <testcase
                  name={quoted_name}
                  classname="{test_name}"
                >
                  <failure message={quoted_message}/>
                </testcase>
                """, '  ').format(
                quoted_name=quoteattr(
                    '{0[type]} ({0[filename]}'.format(error) + pos + ')'),
                test_name=testname,
                quoted_message=quoteattr('{0[msg]}'.format(error) + pos)
            )
    else:
        # if there are no mypy problems report a single successful test
        xml += _dedent_to("""\
            <testcase
              name="mypy"
              classname="{}"
              status="No problems found"/>
            """, '  ').format(testname)

    # output list of checked files
    xml += '  <system-out>Checked files:{escaped_files}\n  </system-out>\n'.format(
        escaped_files=escape(''.join(['\n* %s' % f for f in filenames]))
    )
    xml += '</testsuite>\n'
    return xml

Return from main

Finally, we return the exit code.

return exit_code

The CMake Plugin

Now that our linting tool is ready, we need to write an interface for it to attach to ament.

Getting Started

We create a new ROS 2 package named ament_cmake_[linter] in the ament_lint folder and fill out package.xml. As an example, the one for mypy looks like this:

<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
  <name>ament_cmake_mypy</name>
  <version>0.7.3</version>
  <description>
    The CMake API for ament_mypy to perform static type analysis on python
    code with mypy.
  </description>
  <maintainer email="<email>">Ted Kern</maintainer>
  <license>Apache License 2.0</license>
  <author email="<email>">Ted Kern</author>

  <buildtool_depend>ament_cmake_core</buildtool_depend>
  <buildtool_depend>ament_cmake_test</buildtool_depend>

  <buildtool_export_depend>ament_cmake_test</buildtool_export_depend>
  <buildtool_export_depend>ament_mypy</buildtool_export_depend>

  <test_depend>ament_cmake_copyright</test_depend>
  <test_depend>ament_cmake_lint_cmake</test_depend>

  <export>
    <build_type>ament_cmake</build_type>
  </export>
</package>

CMake Configuration

We write the installation and testing instructions in CMakeLists.txt, and pass our extras file to ament_package. This is the one for mypy; yours should look pretty similar:

cmake_minimum_required(VERSION 3.5)

project(ament_cmake_mypy NONE)

find_package(ament_cmake_core REQUIRED)
find_package(ament_cmake_test REQUIRED)

ament_package(
  CONFIG_EXTRAS "ament_cmake_mypy-extras.cmake"
)

install(
  DIRECTORY cmake
  DESTINATION share/${PROJECT_NAME}
)

if(BUILD_TESTING)
  find_package(ament_cmake_copyright REQUIRED)
  ament_copyright()

  find_package(ament_cmake_lint_cmake REQUIRED)
  ament_lint_cmake()
endif()

Then we register our extension with ament in ament_cmake_[linter]-extras.cmake. Again, this one is for mypy, but you should be able to easily repurpose it.

find_package(ament_cmake_test QUIET REQUIRED)

include("${ament_cmake_mypy_DIR}/ament_mypy.cmake")

ament_register_extension("ament_lint_auto" "ament_cmake_mypy"
  "ament_cmake_mypy_lint_hook.cmake")

We then create a CMake function in cmake/ament_[linter].cmake to invoke our test when needed. This will be specific to your linter and the wrapper you wrote above, but here’s how it looks for mypy:

#
# Add a test to statically check Python types using mypy.
#
# :param CONFIG_FILE: the name of the config file to use, if any
# :type CONFIG_FILE: string
# :param TESTNAME: the name of the test, default: "mypy"
# :type TESTNAME: string
# :param ARGN: the files or directories to check
# :type ARGN: list of strings
#
# @public
#
function(ament_mypy)
  cmake_parse_arguments(ARG "" "CONFIG_FILE;TESTNAME" "" ${ARGN})
  if(NOT ARG_TESTNAME)
    set(ARG_TESTNAME "mypy")
  endif()

  find_program(ament_mypy_BIN NAMES "ament_mypy")
  if(NOT ament_mypy_BIN)
    message(FATAL_ERROR "ament_mypy() could not find program 'ament_mypy'")
  endif()

  set(result_file "${AMENT_TEST_RESULTS_DIR}/${PROJECT_NAME}/${ARG_TESTNAME}.xunit.xml")
  set(cmd "${ament_mypy_BIN}" "--xunit-file" "${result_file}")
  if(ARG_CONFIG_FILE)
    # pass the config file through the --config option defined by the wrapper
    list(APPEND cmd "--config" "${ARG_CONFIG_FILE}")
  endif()
  list(APPEND cmd ${ARG_UNPARSED_ARGUMENTS})

  file(MAKE_DIRECTORY "${CMAKE_BINARY_DIR}/ament_mypy")
  ament_add_test(
    "${ARG_TESTNAME}"
    COMMAND ${cmd}
    OUTPUT_FILE "${CMAKE_BINARY_DIR}/ament_mypy/${ARG_TESTNAME}.txt"
    RESULT_FILE "${result_file}"
    WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}"
  )
  set_tests_properties(
    "${ARG_TESTNAME}"
    PROPERTIES LABELS "mypy;linter"
  )
endfunction()

This function checks for the existence of your linting CLI, prepares the argument list to pass in, creates an output directory for the report, and labels the test type.

Finally, in ament_cmake_[linter]_lint_hook.cmake, we write the hook into the function we just defined. This one is for mypy but yours should look almost identical:

file(GLOB_RECURSE _python_files FOLLOW_SYMLINKS "*.py")
if(_python_files)
  message(STATUS "Added test 'mypy' to statically type check Python code.")
  ament_mypy()
endif()

Final Steps

With both packages ready, we build our new packages using colcon:

~/ros2/src $ colcon build --packages-select ament_mypy ament_cmake_mypy --event-handlers console_direct+ --symlink-install

If all goes well, we can now use this linter just like any other to test our Python packages!

It’s highly recommended you write a test suite to go along with your code. ament_mypy lints itself with flake8 and mypy, and has an extensive pytest-based suite of functions to validate its behavior. You can see this suite here.
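
A few focused unit tests go a long way. For example, a small pytest case (hypothetical, not from the actual ament_mypy suite) that pins down the behavior of the report-parsing regex in _get_errors:

from ament_mypy.main import _get_errors

def test_get_errors_parses_line_and_column():
    report = 'foo.py:10:5: error: Unsupported operand types'
    errors = _get_errors(report)
    assert len(errors) == 1
    assert errors[0].group('lineno') == '10'
    assert errors[0].group('colno') == '5'
    assert errors[0].group('type') == 'error'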

Check out our other article on how to use the mypy linter if you’d like to learn more about how to invoke linters from your testing suite for other packages.

Ubuntu Blog: How to integrate Ubuntu with Active Directory

3 days 15 hours ago

The ubiquitous use of Microsoft tools, coupled with the increasing popularity of open-source Linux software in the enterprise, presents new challenges for non-Microsoft operating systems that require seamless integration with Active Directory for authentication and identity management. This is because Active Directory was never designed as a cross-platform directory service.

Integrating Ubuntu Desktop 18.04 LTS into an existing Active Directory architecture can be an automated and effortless process when using the Powerbroker Identity Service Open tool (PBIS Open), a derivative of BeyondTrust’s Open-Source Active Directory Bridging.


This whitepaper provides detailed insights and step-by-step instructions for using PBIS Open to integrate Ubuntu Desktop into Active Directory and suggests alternative solutions in cases where it is not a suitable option.

What can I learn from this whitepaper?
  • Overview, benefits and drawbacks of using PBIS Open to integrate third party operating systems into an existing Microsoft Active Directory architecture.
  • Detailed steps for PBIS Open set up and integrating Ubuntu into Active Directory.
  • Alternative tools to integrate Ubuntu into Active Directory.

To download the whitepaper, complete the form below:

Raphaël Hertzog: Promoting Debian LTS with stickers, flyers and a video

3 days 18 hours ago

With the agreement of the Debian LTS contributors funded by Freexian, earlier this year I decided to spend some Freexian money on marketing: we sponsored DebConf 19 as a bronze sponsor and we prepared some stickers and flyers to give out during the event.

The stickers only promote the Debian LTS project with the semi-official logo we have been using and a link to the wiki page. You can see them on the back of a laptop in the picture below. As you can see, we have made two variants with different background colors:

The flyers and the video are meant to introduce the Debian LTS project and to convince companies to sponsor it through the Freexian offer. Those are short documents and they can’t explain the precise relationship between Debian LTS and Freexian. We try to show that Freexian is just an intermediary between contributors and companies, but some people will still have the feeling that a commercial entity is organizing Debian LTS.

Check out the video on YouTube:

The inside of the flyer looks like this:

Click on the picture to see it full size

Note that due to some delivery issues, we have left-over flyers and stickers. If you want some to give out during a free software event, feel free to reach out to me.


Raphaël Hertzog: Freexian’s report about Debian Long Term Support, July 2019

3 days 20 hours ago

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In July, 199 work hours were dispatched among 13 paid contributors. Their reports are available:

  • Adrian Bunk got 8h assigned but did nothing (plus 10 extra hours from June), thus he is carrying over 18h to August.
  • Ben Hutchings did 18.5 hours (out of 18.5 hours allocated).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 18 hours (out of 18 hours allocated).
  • Emilio Pozuelo Monfort did 21 hours (out of 18.5h allocated + 17h remaining, thus keeping 14.5 extra hours for August).
  • Hugo Lefeuvre did 9.75 hours (out of 18.5 hours allocated, thus carrying over 8.75h to August).
  • Jonas Meurer did 19 hours (out of 17 hours allocated plus 2 extra hours from June).
  • Markus Koschany did 18.5 hours (out of 18.5 hours allocated).
  • Mike Gabriel did 15.75 hours (out of 18.5 hours allocated plus 7.25 extra hours from June, thus carrying over 10h to August).
  • Ola Lundqvist did 0.5 hours (out of 8 hours allocated plus 8 extra hours from June, then he gave 7.5h back to the pool, thus he is carrying over 8 extra hours to August).
  • Roberto C. Sanchez did 8 hours (out of 8 hours allocated).
  • Sylvain Beucler did 18.5 hours (out of 18.5 hours allocated).
  • Thorsten Alteholz did 18.5 hours (out of 18.5 hours allocated).
Evolution of the situation

July was different from other months. First, some people have been on actual vacations, while 4 of the above contributors met in Curitiba, Brazil, for DebConf19. There, a talk about LTS (slides, video) was given, followed by a Q&A session. Also, a new promotional video about Debian LTS, aimed at potential sponsors, was shown there for the first time.

DebConf19 was also a success with respect to on-boarding new contributors: we’ve found three potential new contributors, one of whom is already in training.

The security tracker (now for oldoldstable, as Buster has been released and Jessie thus became oldoldstable) currently lists 51 packages with a known CVE, and the dla-needed.txt file has 35 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


The Fridge: Ubuntu Weekly Newsletter Issue 592

4 days 6 hours ago

Welcome to the Ubuntu Weekly Newsletter, Issue 592 for the week of August 11 – 17, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • EoflaOE
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

Jono Bacon: Announcing my new book: ‘People Powered: How communities can supercharge your business, brand, and teams’

4 days 14 hours ago

I am absolutely thrilled to announce my brand new book, ‘People Powered: How communities can supercharge your business, brand, and teams’ published by HarperCollins Leadership.

It will be available in hard cover, audiobook, and e-book formats, available from Amazon, Audible, Walmart, Target, Google Play, Apple iBooks, Barnes and Noble, and other great retailers.

The book is designed for leaders, founders, marketing and customer success staff, community managers/evangelists, and others who want to build a more productive, more meaningful relationship with their users, customers, and broader audience.

‘People Powered’ covers three key areas:

  1. The value and potential of building a community inside and outside a business, how it can create a closer relationship with your users and customers, and deliver tangible value such as improved support, technology development, advocacy, and more.
  2. I present the strategic method that I have used with the hundreds of clients and companies I consult with and advise. This guides you through creating a comprehensive, productive, and realistic community strategy: how to scale it up, build cross-departmental skin in the game, create incentives, run events, measure community success, and deliver results.
  3. Finally, I walk you through how to integrate this strategy into a business, covering hiring staff, building internal skills and capabilities, measuring this work with a series of concrete maturity models, and much more.

The book covers a comprehensive range of topics within these areas:

The book features a foreword from New York Times bestseller Peter Diamandis, founder of XPRIZE and Singularity University.

It also features contributions from Joseph Gordon-Levitt (Emmy-award winning actor), Jim Whitehurst (CEO, Red Hat), Mike Shinoda (Co-Founder, Linkin Park), Ali Velshi (Anchor, MSNBC), Jim Zemlin (Executive Director, The Linux Foundation), Noah Everett (Founder, TwitPic), Alexander van Engelen (Contributor, Fractal Audio Systems), and others.

The book has also received a comprehensive range of endorsements, including Nat Friedman (CEO, GitHub), Jim Whitehurst (CEO, Red Hat), Whitney Bouck (COO, HelloSign), Jeff Atwood (Founder, StackOverflow/Discourse), Juan Olaizola (COO, Santander Espana), Jamie Hyneman (Co-Creator and Presenter, Mythbusters), and many others:

Here are a few sample endorsements:

“If you want to tap into the power that communities can bring to businesses and teams, there is no greater expert than Jono Bacon.”

Nat Friedman, CEO of GitHub

“If you want to unlock the power of collaboration in communities, companies, and teams, Jono should be your tour guide and ‘People Powered’ should be your map.”

Jamie Smith, Former Advisor to President Barack Obama

“If you don’t like herding cats but need to build a community, you need to read ‘People Powered’.”

Jamie Hyneman, Co-Creator/Host of Mythbusters

“In my profession, building networks is all about nurturing relationships for the long term. Jono Bacon has authored the recipe how to do this, and you should follow it.”

Gia Scinto, Head of Talent at YCombinator

“When people who are not under your command or payment eagerly work together towards a greater purpose, you can move mountains. Jono Bacon is one of the most accomplished experts on this, and in this book he tells you how it’s done.”

Mårten Mickos, CEO of HackerOne

“Community is fundamental to DigitalOcean’s success, and helped us build a much deeper connection with our audience and customers. ‘People Powered’ presents the simple, pragmatic recipe for doing this well.”

Ben Uretsky, Co-Founder of DigitalOcean

“Technology tears down the barriers of collaboration and connects our communities – globally and locally. We need to give all organizations and developers the tools to build and foster this effort. Jono Bacon’s book provides timely insight into what makes us tick as humans, and how to build richer, stronger technology communities together.”

Kevin Scott, CTO of Microsoft

People Powered Preorder Package

‘People Powered’ is released on 12th November 2019 but I would love you wonderful people to preorder the book.

Preordering will give you access to a wide range of perks. This includes early access to half the book, free audio book chapters, an exclusive six-part, 4-hour+ ‘People Powered Plus’ video course, access to a knowledge base with 100+ articles, 2 books, and countless videos, exclusive webinars and Q&As, and sweepstakes for free 1-on-1 consulting workshops.

All of these perks are available just for the price of buying the book, there are no additional costs.

To unlock this preorder package, you simply buy the book, fill in a form with your order number and these perks will be unlocked. Good times!

To find out more about the book and unlock the preorder package, click here

The post Announcing my new book: ‘People Powered: How communities can supercharge your business, brand, and teams’ appeared first on Jono Bacon.

Ubuntu Blog: Design and Web team summary – 16 August 2019

4 days 20 hours ago

This iteration was the Web & design team’s first iteration of the second half of our roadmap cycle, after returning from the mid-cycle roadmap sprint in Toronto 2 weeks ago.

Priorities have moved around a bit since before the cycle, and we made a good start on the new priorities for the next 3 months. 

Web squad

Web is the squad that develops and maintains most of the brochure websites across Canonical.

We launched three takeovers: “A guide to developing Android apps on Ubuntu”, “Build the data centre of the future” and “Creating accurate AI models with data”.

Ubuntu.com Vanilla conversion 

We’ve made good progress on converting ubuntu.com to version 2.3.0 of our Vanilla CSS framework.

EKS redesign

We’ve been working on a new design for our EKS images page.

Canonical.com design evolution

New designs and prototypes are coming along well for redesigned partners and careers sections on canonical.com.

Vanilla squad

The Vanilla squad works on constantly improving the code and design patterns in our Vanilla CSS framework, which we use across all our websites.

Ubuntu SSO refresh

The squad continues to make good progress on adding Vanilla styling to all pages on login.ubuntu.com.

Colour theming best practices

We investigated some best practices for the use of colours in themes.

Improvements to Vanilla documentation

We made a number of improvements to the documentation of Vanilla framework.

Base

The Base squad supports the other squads with shared modules, development tooling and hosting infrastructure across the board.

certification.ubuntu.com

We continued to progress with the back-end rebuild and re-hosting of certification.ubuntu.com, which should be released next iteration.

Blog improvements

We investigated ways to improve the performance of our blog implementations (most importantly ubuntu.com/blog). We will be releasing new versions of the blog module over the next few weeks which should bring significant improvements.

MAAS

The MAAS squad works on the browser-based UI for MAAS, as well as the maas.io website.

“Real world MAAS”

We’ve been working on a new section for the maas.io homepage about “Real world MAAS”, which will be released in the coming days. As MAAS is used at enterprises of varying scale, we’re providing grouped, curated content for three of the main audiences.

UI settings updates

We’ve made a number of user experience updates to the settings page in the MAAS UI, including significant speed improvements to the Users page, as part of moving the settings section of the application from Django to React. We have completed the move of the General, Network, and Storage tabs, and have redesigned the experience for DHCP snippets and Scripts.
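
As a rough illustration of this migration pattern (the component, endpoint, and field names below are hypothetical, not the actual MAAS UI code), a server-rendered settings page becomes a React component that fetches its data from a JSON API instead of having it baked into a Django template:

import React, { useEffect, useState } from "react";

// Hypothetical shape of a user record; the real MAAS API differs.
interface User {
  id: number;
  username: string;
  isAdmin: boolean;
}

// A settings tab rendered client-side: data arrives from a JSON
// endpoint, so switching tabs no longer needs a full page load.
export function UsersTab(): JSX.Element {
  const [users, setUsers] = useState<User[]>([]);
  const [loading, setLoading] = useState(true);

  useEffect(() => {
    fetch("/api/users/") // hypothetical endpoint
      .then((res) => res.json())
      .then((data: User[]) => {
        setUsers(data);
        setLoading(false);
      });
  }, []);

  if (loading) return <p>Loading…</p>;
  return (
    <ul>
      {users.map((u) => (
        <li key={u.id}>
          {u.username} {u.isAdmin ? "(admin)" : ""}
        </li>
      ))}
    </ul>
  );
}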

Redesigned DHCP snippets tab

JAAS

The JAAS squad works on jaas.ai, the Juju GUI, and upcoming projects to support Juju.

This iteration we set up a bare-bones scaffold of our new JAAS Dashboard app using React and Redux.
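
A bare-bones scaffold along these lines (a sketch with hypothetical names, not the actual JAAS Dashboard code) can be as small as a store, a reducer, and a root component:

import React from "react";
import ReactDOM from "react-dom";
import { createStore } from "redux";
import { Provider, useSelector } from "react-redux";

// Hypothetical application state; the real dashboard's shape will differ.
interface AppState {
  modelCount: number;
}

const initialState: AppState = { modelCount: 0 };

// A do-nothing reducer is enough to wire up the store for a scaffold.
function reducer(state: AppState = initialState): AppState {
  return state;
}

const store = createStore(reducer);

// The root component reads from the store via react-redux hooks.
function Dashboard(): JSX.Element {
  const count = useSelector((state: AppState) => state.modelCount);
  return <h1>JAAS Dashboard: {count} models</h1>;
}

ReactDOM.render(
  <Provider store={store}>
    <Dashboard />
  </Provider>,
  document.getElementById("root")
);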

Snaps

The Snap squad works on improvements to snapcraft.io.

Updating snapcraft.io to Vanilla 2.3.0

We continued work updating snapcraft.io to the latest version of Vanilla.

The post Design and Web team summary – 16 August 2019 appeared first on Ubuntu Blog.

Ubuntu Blog: Issue #2019.08.19 – Kubeflow at CERN

4 days 21 hours ago
  • Replicating Particle Collisions at CERN with Kubeflow – this post is interesting for a number of reasons. First, it shows how Kubeflow delivers on the promise of portability and why that matters to CERN. Second, it reiterates that using Kubeflow adds negligible performance overhead compared to other methods for training. Finally, the post shows another example of how images and deep learning can replace more computationally expensive methods for modelling real-world behaviour. This is the future, today.
  • AI vs. Machine Learning: The Devil Is in the Details – Need a refresher on the difference between artificial intelligence, machine learning and deep learning? Canonical has done a webinar on this very topic, but sometimes a different set of words is useful, so read this article for a refresher. You’ll also learn about a different set of use cases for how AI is changing the world – from Netflix to Amazon to video surveillance and traffic analysis and predictions.
  • Making Deep Learning User-Friendly, Possible? – The world has changed a lot in the 18 months since this article was published. One of the key takeaways from this article is a list of features to compare several standalone deep learning tools. The exciting news? The output of these tools can be used with Kubeflow to accelerate model training. There are several broader questions as well – How can companies leverage the advancements being made within the AI community? Are better tools the right answer? Or is finding a partner the better route?
  • Interview spotlight: One of the fathers of AI is worried about its future – Yoshua Bengio is famous for championing deep learning, one of the most powerful technologies in AI. Read this transcript to understand some of his concerns with the direction of AI, as well as exciting developments in research that extends deep learning into areas like reasoning, learning causality, and exploring the world in order to learn and acquire information.

The post Issue #2019.08.19 – Kubeflow at CERN appeared first on Ubuntu Blog.

David Tomaschik: Hacker Summer Camp 2019: CTFs for Fun & Profit

4 days 22 hours ago

Okay, I’m back from Summer Camp and have caught up (slightly) on life. I had the privilege of giving a talk at BSidesLV entitled “CTFs for Fun and Profit: Playing Games to Build Your Skills.” I wanted to post a quick link to my slides and talk about the IoT CTF I had the chance to play.

I played in the IoT Village CTF at DEF CON, which was interesting because it uses real-world devices with real-world vulnerabilities instead of the typical made-up challenges in a CTF. On the other hand, I’m a little disappointed that it seems pretty similar (maybe even the same) year-to-year, not providing much variety or new learning experiences if you’ve played before.

Command Line