Hack a Day

Fresh hacks every day

Big Trak Gets a New Brain

1 hour 9 min ago

If you were a kid in the 1980s you might have been lucky enough to score a Big Trak — a robotic toy you could program with up to 16 steps of movement using its membrane keyboard. [Howard] has one, but not wanting to live with a 16-step program, he gave it a brain transplant with an Arduino and showed it off on [RetroManCave’s] video blog; you can see that below.

If you want to duplicate the feat and your mom already cleaned your room to make it a craft shop, you can score one on eBay or there’s even a new replica version available, although it isn’t inexpensive. The code you need is on GitHub.

The CPU isn’t the only upgrade, as the updated Big Trak has an OLED display. [Howard] plans to add either WiFi or Bluetooth and wire the keyboard up to the onboard Arduino. [Howard] shows the inside and there is a lot of room by today’s standards. Of course, we wanted to see the original PCB, but it was nowhere to be found. Luckily, we found an image of the single-sided PCB on Wikipedia, so if you are like us, you can see it below, under the video.

There’s no wiring diagram that we could see, but from the Arduino code you can back out what the connections are to the sonar, the OLED display, and the new motor drivers for the original motors.

Oddly enough, this isn’t the first Big Trak that has made it to the pages of Hackaday. Of course, we have no shortage of hacked toy robots.

Bigtrak PCB Image – [Sergio Calleja] CC BY-SA 2.0


Over The Air Updates For Your Arduino

2 hours 40 min ago

An Arduino and a data radio can make a great remote sensor node. Often in such situations, the hardware ends up installed somewhere hard to get to – be it in a light fitting, behind a wall, or secreted somewhere outdoors. Not places that you’d want to squeeze a cable repeatedly into while debugging.

[2BitOrNot2Bit] decided this simply wouldn’t do, and decided to program the Arduinos over the air instead.

The NRF24L01 is a popular choice for adding wireless communications to a small Arduino project. By installing one of these radios on both the remote hardware and a local Arduino connected to the programming computer, it’s possible, using Optiboot, to remotely flash the Arduino without any physical contact whatsoever.

The writeup is comprehensive, covering the required hardware setup for both ends of the operation as well as how to install the relevant bootloaders. If you’re already using the NRF24L01 in your projects, this could be the ideal solution to your programming woes. Perhaps you’re using a different platform though – like an Arduino on WiFi? Don’t worry – you can do OTA updates that way, too.

Know Your Video Waveform

4 hours 8 min ago

When you acquired your first oscilloscope, what were the first waveforms you had a look at with it? The calibration output, and maybe your signal generator. Then if you are like me, you probably went hunting round your bench to find a more interesting waveform or two. In my case that led me to a TV tuner and IF strip, and my first glimpse of a video signal.

An analogue video signal may be something that is a little less ubiquitous in these days of LCD screens and HDMI connectors, but it remains a fascinating subject and one whose intricacies are still worthwhile knowing. Perhaps your desktop computer no longer drives a composite monitor, but a video signal is still a handy way to add a display to many low-powered microcontroller boards. When you see Arduinos and ESP8266s producing colour composite video on hardware never intended for the purpose you may begin to understand why an in-depth knowledge of a video waveform can be useful to have.

The purpose of a video signal is to convey both the picture information in the form of luminance and chrominance (light & dark, and colour), and all the information required to keep the display in complete synchronisation with the source. It must do this with accurate and consistent timing, and because it is a technology with roots in the early 20th century, all the information it contains must be retrievable with the consumer electronic components of that time.

We’ll now take a look at the waveform and in particular its timing in detail, and try to convey some of its ways. You will be aware that there are different TV systems such as PAL and NTSC, each with their own tightly-defined timings; however, for most of this article we will be treating all systems as more-or-less identical because they work in a sufficiently similar manner.

Get That Syncing Feeling

A close-up on a single line of composite video from a Raspberry Pi.

Looking at the synchronisation element of a composite video signal, there are two different components that are essential to keeping the display on the same timing as the source. There is a short line sync pulse at the start of each individual picture line, and a longer frame sync pulse at the start of each frame. Each line sync pulse is a short period of zero volts that fills the gap between picture lines.

A frame sync period, incorporating multiple line sync pulses.

In the close-up of a single picture line above there are two line sync pulses; you can see them as the two rectangular pulses that protrude the lowest. Meanwhile in the close-up of a frame sync period to the right you can see the frame sync pulse as a period of several lines during which the entire signal is pulled low. Unexpectedly though, it also contains inverted line sync pulses. This is because on an older CRT the line oscillator would still have to be able to detect them to stay in sync. This frame sync pulse is surrounded by a few empty lines during which a CRT display would turn off its electron gun while the beam traversed the screen from bottom right to top left. This is referred to as the frame blanking period, and is the place in which data services such as teletext and closed-captioning can be concealed. In the spirit of electronic television’s origins in the early 20th century, both types of sync pulses are designed to be extracted using simple RC filters.

Know Your Porches

An annotated capture of a composite video line sync pulse.

The area around the line sync pulse is particularly interesting, because it contains the most obvious hint on an oscilloscope screen that a composite video signal is carrying colour information. It also has a terminology all of its own, which is both mildly amusing and useful to know when conversing on the subject.

Immediately before and after the sync pulse itself are the short time periods referred to as the front porch and back porch respectively. These are the periods during which the picture information has stopped but the line sync pulse is not in progress, and they exist to demarcate the sync pulse from its surroundings and aid its detection.

Directly after the back porch is a short period of a pure sine wave (called the colour burst) that is at the frequency of the colour subcarrier. This so-called colour burst exists to allow the reference oscillator in the colour decoder circuit to be phase-locked to the one used to encode the colour information at the source. Each and every line that is not part of the frame blanking period will carry a colour burst, ensuring that the reference oscillator never has the time to drift out of phase.

After the colour burst there follows the luminance information for that line of the picture, with higher voltages denoting more brightness. Across the whole period from front porch to the start of the luminance information, that old CRT TV would have generated a line blanking pulse to turn off the electron gun while its target moves back across the screen to start the next line — the perfect time to transmit all of this important information.

Where Do All Those Figures Come From?

We’ve avoided specific figures because the point of this article is not to discuss individual standards. But it is worth taking a moment to ask how some of those figures came into being, and the answer to that question lies in a complex web of interconnected timing and frequency relationships, born of a standard that had to retain backward compatibility as it evolved.

The frame rate is easy enough to spot, being derived from the AC mains frequency of the countries developing the standards. PAL and SECAM have a 50 Hz frame rate, while NTSC has a 60 Hz one. The line frequencies though are less obvious, being chosen to fit the limitations of electronic frequency dividers in the mid 20th century. In an era with no handy catalogues of 74 series logic, the ratio between line and frame rates for the desired number of lines had to be chosen so that the divider chains linking them in synchronisation from a single oscillator stayed simple. As an example, the PAL system has 625 lines, with each 625 line image in the form of two interlacing frames of 312 and then 313 lines. The studio would have had a 31.250 kHz master oscillator, from which it would have derived the 15.625 kHz line frequency with a single divide-by-two circuit, and the 50 Hz frame frequency with a chain of four divide-by-5 circuits.
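Working those figures through shows how neatly the ratios were chosen:

31.25 kHz ÷ 2 = 15.625 kHz (line frequency)
31.25 kHz ÷ 5 ÷ 5 ÷ 5 ÷ 5 = 31.25 kHz ÷ 625 = 50 Hz (frame frequency)
15.625 kHz ÷ 50 Hz = 312.5 lines per frame, or 625 lines per complete interlaced image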

Meanwhile the frequencies of the colour and sound subcarriers require a different view of the composite signal, in the frequency domain. The video spectrum is full of harmonics of the line frequency at regular intervals, and any extra carriers had to be chosen so that they did not interfere with these harmonics or with other carriers already present. Thus seemingly odd figures such as the PAL 4.43361875 MHz colour subcarrier frequency start to make sense when you view them as lying between line harmonics.
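It is worth writing that one out. The PAL subcarrier is defined as a quarter-line offset from a multiple of the line frequency, plus a small 25 Hz correction, which is exactly what places it between the line harmonics:

f(subcarrier) = (283 + 3/4) × 15.625 kHz + 25 Hz
              = 4,433,593.75 Hz + 25 Hz
              = 4.43361875 MHz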

That a composite video signal can contain so much information, remain decodable with the consumer electronics of the mid 20th century, and still be explained in a single page is a testament to the ingenuity of the many designers who added to its specification over the years. There was no one person who invented composite video; instead it is the culmination of the work of many different teams, from [John Logie Baird] and [Philo T Farnsworth] onwards. It is unlikely that there will be further enhancements made to it, and it is probable that over the next decade or so it will march into history. For now though there is still a benefit to having a basic understanding of its components, because you never know when you might need to hack a display onto a microcontroller that happens to have a spare I2S interface.

Solenoids and Servos for Self Actuated Switches

5 hours 40 min ago

The new hotness in home automation is WiFi controlled light switches. Sure, we’ve had computer-controlled home lighting for literally forty years with X10 modules, but now we have VC money pouring into hardware, and someone needs to make a buck. A few years ago, [Alex] installed WiFi switches in a few devices in his house and discovered the one downside to the Internet of Light Switches — his light switches didn’t have a satisfying manual override. Instead of cursing the darkness for want of an Internet-connected candle, [Alex] did the only sensible thing. He installed electromagnets, solenoids, and servos behind the light switches in his house.

The exact problem [Alex] is trying to solve here is stateful wall switches. With an Internet-connected lamp socket, the wall switch no longer functions. Being able to turn on a light from the wall even when your phone is out of charge is something we all take for granted, and the solution is, of course, to make the switches themselves Internet-connected.

Being able to read the state of a switch and send some data off to a server is easy. For this, [Alex] used a WeMos D1 mini, a simple ESP8266-based board. The trick here, though, is stateful switches that can toggle themselves on and off. This is a mechanical build, and although self-actuated switches that can flip up and down by computer command exist, they’re horrifically expensive. Instead, [Alex] went the DIY route, first installing electromagnets behind the switches, then moving to solenoids, and finally designing a solution around four cheap hobby servos. The entire confabulation stuffed into a 2-wide electrical box consists of two switches, four hobby servos, the D1 mini, and an Adafruit servo driver board.

The software stack for this entire setup includes a NodeJS server connected to Orvibo Smart Sockets over UDP. Also on this server is a WebSocket server for browser-based clients that want to turn the lights on and off, a FauXMo server to turn the lights on and off via an Amazon Echo via WeMo emulation, and an HTTP server for other clients like [Alex]’ Pebble Watch.
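If you fancy rolling something similar, the WebSocket part of such a stack is only a few lines of Node. The sketch below is not [Alex]’s code; it assumes the popular ws package and uses a stand-in toggleLamp() function in place of the Orvibo UDP, FauXMo, and servo-driving pieces, but it shows the shape of a server that keeps browser clients and the physical switch state in sync:

// Minimal sketch, not [Alex]'s actual implementation.
// Assumes: npm install ws; toggleLamp() is a placeholder for the real hardware control.
const WebSocket = require('ws');

let lampOn = false; // the shared state the wall switch and clients agree on

function toggleLamp() {
  lampOn = !lampOn; // real code would drive the servo / smart socket here
  console.log('Lamp is now', lampOn ? 'on' : 'off');
}

const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (client) => {
  client.send(JSON.stringify({ lampOn })); // tell a new client the current state
  client.on('message', (msg) => {
    if (msg.toString() === 'toggle') {
      toggleLamp();
      // broadcast the new state so every connected client stays in sync
      wss.clients.forEach((c) => c.send(JSON.stringify({ lampOn })));
    }
  });
});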

This is, without question, the most baroque method of turning a lamp on and off that we’ve ever seen. Despite this astonishing complexity, [Alex] has something that is also intuitive to use and, to borrow an aphorism, ‘Just Works’. With a setup like this, anyone can flick a switch and turn a lamp on or off over the Internet, or vice-versa. This is the best Home Automation build we’ve ever seen.

You can check out [Alex]’ video demo of his build below, or his GitHub for the entire project here.

Neural Networking: Robots Learning From Video

7 hours 9 min ago

Humans are very good at watching others and imitating what they do. Show someone a video of flipping a switch to turn on a CNC machine and after a single viewing they’ll be able to do it themselves. But can a robot do the same?

Bear in mind that we want the demonstration video to be of a human arm and hand flipping the switch. When the robot does it, the camera that is its eye will be seeing its robot arm and gripper. So somehow it’ll have to know that its robot parts are equivalent to the human parts in the demonstration video. Oh, and the switch in the demonstration video may be a different model and make, and the CNC machine may be a different one, though we’ll at least put the robot within reach of its switch.

Sound difficult?

Researchers from Google Brain and the University of Southern California have done it. In their paper describing how, they talk about a few different experiments but we’ll focus on just one, getting a robot to imitate pouring a liquid from a container into a cup.

Getting A Head Start

TCN producing attributes

When you think about it, a human has a head start. They’ve used their arm and hand before, they’ve reached for objects and they’ve manipulated them.

The researchers give their robot a similar head start by first training a deep neural network called a Time-Contrastive Network (TCN). Once trained, the TCN can be given an image from a video frame either of a human or robot performing the action and it outputs a vector (a list of numbers) that embeds within it high-level attributes about the image such as:

  • is the hand in contact with the container,
  • are we within pouring distance,
  • what is the container angle,
  • is the liquid flowing, and
  • how full is the cup?

Notice that nowhere do the attributes say whether the hand is human or robotic, nor any details about the liquid, the container or the cup. No mention is made of the background either. All of that has been abstracted out so that the differences don’t matter.

Training The TCN

Another advantage a human has is stereo vision. With two eyes looking at an object from different positions, the object in the center of vision is centered for both eyes but the rest of the scene is at different positions in each eye. The researchers do a similar thing by training the TCN using two phone cameras for two viewpoints, and get much better results than when they try with a single viewpoint. This allows the robot to pick out the arm, hand, liquid, container, and cup from the background.

Recording the videos

Altogether they train the TCN using around 60 minutes of dual-viewpoint video:

  • 20 minutes of humans pouring,
  • 20 minutes of humans manipulating cups and bottles in ways which don’t involve pouring, and
  • 20 minutes of the robot in a pouring setting but not necessarily pouring successfully.

The latter video of robots in a pouring setting is needed in order for the TCN to create the abstraction of a hand that’s independent of either a human or a robot hand.

Frames from the videos are given to the TCN one at a time. But for every three frames, a loss is calculated. One of those frames, called the Anchor, is taken from View 1 where the camera is held steady. A second frame, called the Positive, is taken from View 2 where the camera moves around more but from the same instant in time. And the third frame, called the Negative, is taken from a time when things are functionally different. View 2 is moved around more to introduce more variety and motion blur.

A loss value is then calculated using the three frames and is used to adjust the values within the TCN. This results in the similarities between the Anchor and the Positive being learned. It also means that the TCN learns what’s different between the Anchor and the Negative.
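The loss used with triplets like these is typically the standard triplet loss, which pulls the Anchor and Positive embeddings together while pushing the Negative at least a margin away (the paper’s exact formulation may differ in its details):

L(a, p, n) = max(0, ||f(a) − f(p)||² − ||f(a) − f(n)||² + margin)

Here f(·) is the vector of attributes the TCN produces for a frame, so minimising this loss over many triplets is what forces frames from the same instant to land close together and functionally different frames to land far apart.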

The use of different viewpoints helps eliminate the background from the learned attributes. Other things that are different are the lighting conditions and the scale of the objects and so these are also not included in the learned attributes. Had each frame been used one-by-one to learn attributes then all these irrelevant things would have been learned too.

By including the Negative frames, the ones where functionality differs, attributes that change across time are learned, such as the fullness of the cup.

Even different containers, cups and liquids are used so that these can be learned as more abstract or general attributes.

Learning To Imitate

As we said, the TCN is the equivalent of the human head start. It’s where a human already knows about arms, hands, switches and how to reach for objects before being shown how to flip a switch to turn on the CNC. The biggest difference here is that this TCN is trained about cups and liquids instead of switches.

Once the researchers have trained the TCN they’re able to use it to produce an input to a reinforcement learning algorithm to control the robot’s movements.

TCN and reinforcement learning

Only a single video of a human pouring a liquid is needed to do the reinforcement learning. Each frame of the video is fed into the TCN as are frames from the robot’s camera. Notice that the points of view and much else are different. But that doesn’t matter since for each of the frames, the TCN is outputting at the more abstract level of attributes such as those we listed above (e.g. are we within pouring distance?). The more similar those attributes are between the two frames the better the robot is doing.

A reward value is calculated and passed on to the robot agent code along with the current state so that it can produce an action. The robot agent is actually a machine learning construct which is being fine-tuned based on the reward. It outputs the next action and the process is repeated. This is called reinforcement learning: the reward reinforces the machine learning construct’s learning.

The end result can be viewed in the following video, along with more of the experiments talked about in the paper.

Test results

For the chart shown here, there are 10 attempts per iteration. The Multi-view TCN is the algorithm we’ve described here. After around 100 attempts, the 7-DoF KUKA robot consistently pours the orange beads into the cup, but you can see that it was already doing quite well by the 50th iteration.

Limitations Or Advantages?

Only a single pouring video was needed for the robot to learn to imitate. However, as we saw, before that could be done, the TCN needed to be trained and that required a dataset of 60 minutes of video from multiple viewpoints both of a human and the robot. Viewing this as a limitation is a mistake, though. Human children spend dozens of hours pouring water into and out of cups in the bathtub before they fully master their eye-hand coordination and arm kinematics.

Certainly, one limitation with the current research is that separate TCNs are needed for each task. Go back and look at the list of attributes the TCN produced to see what we mean — they embody a lot of human knowledge about how hands pour liquids into cups. The attribute “is the liquid flowing” isn’t going to help much with the switch-flipping problem, for instance. The researchers do say that, in principle, a single TCN should be able to learn multiple tasks by using larger datasets, but they have yet to explore that avenue.

Stereo vision and neural pathway diagram by MADS00 CC BY-SA 4.0

Interestingly, the researchers also tried training the TCN using a single viewpoint. Three frames were again used, two adjacent ones and one further away, but the results weren’t as good as with the use of multiple viewpoints. When we watch someone pour water into a cup, we have a complete 3D mental model of the event. Can you imagine what it would look like from above, behind, or to the side of the cup all at the same time? You can, and that helps you reason about where the water’s going.

Adding these extra inputs into the TCN gives it a chance to do the same. Perhaps researchers should be doing object recognition with multiple views taken at the same instant using multiple cameras.

In the end, pouring water into a cup isn’t a hard task for a robot with a complete machine vision setup. What’s interesting about this research is that the TCN, when fed the right input variables for the task, can learn abstractly enough to be useful for training based on a video of a human doing the same task. The ability to make mental models of the world and the way it works, abstracting away from ourselves and taking the viewpoints of others, is thought to be unique to primates. Leaving the cherry-picking of TCN’s output variables and the limited task aside, this is very impressive research. In any case, we’re one step closer to having workshop robot assistants that can work our CNC machines for us.

Capture the Flag Challenge is the Perfect Gift

10 hours 10 min ago

Nothing says friendship like a reverse engineering challenge on unknown terrain as a birthday present. When [Rikaard] turned 25 earlier this year, his friend [Veydh] put together a Capture the Flag challenge on an ESP8266 for him. As a software guy with no electronics background, [Rikaard] had no idea what he was presented with, but was eager to find out and to document his journey.

Left without guidance or instructions, [Rikaard] went on to learn more about the ESP8266, with the goal of dumping its flash content, hoping to find some clues in it. Discovering that the board runs NodeMCU and contains some compiled Lua files, he set foot in yet another unknown territory that led him down the Lua bytecode rabbit hole. After a detour describing his adjustments for the ESP’s eLua implementation to the decompiler he uses, his quest to capture the flag began for real.

While this wasn’t [Rikaard]’s first reverse engineering challenge, it was his first in a completely unknown environment outside his comfort zone — the endurance he demonstrated is admirable. There is of course still a long way down the road before one opens up chips or counts transistors in a slightly more complex system.

Flying the Friendly Skies with A Hall Effect Joystick

13 hours 10 min ago

There are plenty of PC joysticks out there, but that didn’t stop [dizekat] from building his own. Most joysticks use mechanical potentiometers or encoders to measure position. Only a few high-end models use Hall effect sensors. That’s the route [dizekat] took.

Hall effect sensors are non-contact devices which measure magnetic fields. They can be used to measure the position and orientation of a magnet. That’s exactly how [dizekat] is using a trio of sensors in his design. The core of the joystick is a universal joint from an old R/C car. The center section of the joint (called a spider) has two one millimeter thick disc magnets glued to it. The Hall sensors are mounted in the universal joint itself. [Dizekat] used a small piece of a chopstick to hold the sensors in position while he found the zero point and glued them in. A third Hall effect sensor is used to measure a throttle stick positioned on the side of the box.

An Arduino Micro reads the sensors and converts the analog signals to USB joystick data. The Arduino Joystick Library by [Matthew Heironimus] formats the data into something a PC can understand.

While this is definitely a rough work in progress, we’re excited by how much [dizekat] has accomplished with simple hand tools and glue. You don’t need a 3D printer, laser cutter, and a CNC to pull off an awesome hack!

If you think Hall effect sensors are just for joysticks, you’d be wrong – they work as cameras for imaging magnetic fields too!

Making A Covox Speech Thing Work On A Modern PC

16 hours 10 min ago

Long ago, when mainframes ruled the earth, computers were mute. In this era before MP3s and MMUs, most home computers could only manage a simple beep or two. Unless you had an add-on device like the Covox Speech Thing, that is. This 1986 device plugged into your parallel port and allowed you to play sound. Glorious 8-bit, mono sound. [Yeo Kheng Meng] had heard of this device, and wondered what it would take to get it running again on a modern Linux computer. So he found out in the best possible way: by doing it.

The Covox Speech Thing is a very simple device, a discrete component digital-to-analog converter (DAC) that uses the computer’s parallel port. This offers 8 data pins, and the Covox couples each of these to a resistor of a different value. Tie the outputs of these resistors together, then raise the voltage on different pins, and you create an analog voltage level from digital data. Do this repeatedly, and you get an audio waveform. It’s a simple device that can create the waveform with a sampling frequency as fast as the parallel port can send data. It isn’t as Hi-Fi as modern sound cards, but it was a lot better than a bleep. If you don’t have one lying around, we’ve covered how to build your own.
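Assuming the binary-weighted arrangement that description suggests, with bit i of the port driving the summing node through a resistance of R/2^i, the unloaded output voltage is just the byte value scaled to the logic-high level:

V(out) ≈ V(high) × (128·b7 + 64·b6 + … + 1·b0) / 255

So writing 0x80 to the port lands you at roughly half of V(high), 0xFF gives full scale, and stepping the byte through a sine table at a steady rate produces an audio tone.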

The main problem that [Yeo Kheng Meng] found with writing a program to drive this device is the sophistication of modern computers. Most of the time, devices like parallel ports are hidden behind drivers and buffers that control the flow of data. That makes things simple for the programmer: they can let the driver take care of the tedious details. This device requires a more direct approach: the data has to be written out to the parallel port at the right frequency to create the waveform. If there is any buffering or other fiddling about, this timing is off and it doesn’t work. [Yeo’s] code gets around this by writing the data (created from an MP3 file) directly to the parallel port address in memory. That only really works in Linux, though: it is much harder to do in OSes like Windows that do their best to keep you away from the hardware. It’s arguable if that is a good or a bad thing, but [Yeo] has done a nice job of writing up his work in a way that might intrigue a modern hacker trying to understand how things in the past were both simpler and more complicated at the same time.

Speech Recognition For Linux Gets A Little Closer

Wed, 01/17/2018 - 22:00

It has become commonplace to yell out commands to a little box and have it answer you. However, voice input for the desktop has never really gone mainstream. Progress has been particularly slow for Linux users, whose options are shockingly limited, although decent speech support is baked into recent versions of Windows and into OS X Yosemite and beyond.

There are four well-known open speech recognition engines: CMU Sphinx, Julius, Kaldi, and the recent release of Mozilla’s DeepSpeech (part of their Common Voice initiative). The trick for Linux users is successfully setting them up and using them in applications. [Michael Sheldon] aims to fix that — at least for DeepSpeech. He’s created an IBus plugin that lets DeepSpeech work with nearly any X application. He’s also provided PPAs that should make it easy to install for Ubuntu or related distributions.

You can see in the video below that it works, although [Michael] admits it is just a starting point. However, the great thing about Open Source is that armed with a working setup, it should be easy for others to contribute and build on the work he’s started.

IBus is one of those pieces of Linux that you don’t think about very often. It abstracts input devices from programs, mainly to accommodate input methods that don’t lend themselves to an alphanumeric keyboard. Usually this is Japanese, Chinese, Korean, and other non-Latin languages. However, there’s no reason IBus can’t handle voice, too.

Oddly enough, the most common way you will see Linux computers handle speech input is to bundle it up and send it to someone like Google for translation despite there being plenty of horsepower to handle things locally. If you aren’t too picky about flexibility, even an Arduino can do it. With all the recent tools aimed at neural networks, the speech recognition algorithms aren’t as big a problem as finding a sufficiently broad training database and then integrating the data with other applications. This IBus plugin takes care of that last problem.

Recreating the Radio from Portal

Wed, 01/17/2018 - 19:00

If you’ve played Valve’s masterpiece Portal, there’s probably plenty of details that stick in your mind even a decade after its release. The song at the end, GLaDOS, “The cake is a lie”, and so on. Part of the reason people are still talking about Portal after all these years is because of the imaginative world building that went into it. One of these little nuggets of creativity has stuck with [Alexander Isakov] long enough that it became his personal mission to bring it into the real world. No, it wasn’t the iconic “portal gun” or even one of the oft-quoted robotic turrets. It’s that little clock that plays a jingle when you first start the game.

Alright, so perhaps it isn’t the part of the game that we would be obsessed with turning into a real-life object. But for whatever reason, [Alexander] simply had to have that radio. Of course, being the 21st century and all, his version isn’t actually a radio, it’s a Bluetooth speaker. Though he did go through the trouble of adding a fake display showing the same frequency the in-game radio was tuned to.

The model he created of the Portal radio in Fusion 360 is very well done, and available on MyMiniFactory for anyone who might wish to create their own Aperture Science-themed home decor. Though fair warning, due to its size it does consume around 1 kg of plastic for all of the printed parts.

For the internal Bluetooth speaker, [Alexander] used a model which he got for free after eating three packages of potato chips. That sounds about the best possible way to source your components, and if anyone knows other ways we can eat snack food and have electronics sent to our door, please let us know. Even if you don’t have the same eat-for-gear promotion running in your neck of the woods, it looks like adapting the model to a different speaker shouldn’t be too difficult. There’s certainly enough space inside, at least.

Over the years we’ve seen some very impressive Portal builds, going all the way back to the infamous levitating portal gun [Caleb Kraft] built in 2012. Yes, we’ve even seen somebody do the radio before. At this point it’s probably safe to say that Valve can add “Create cultural touchstone” to their one-sheet.

Improvising An EPROM Eraser

Wed, 01/17/2018 - 16:00

Back in the old days, when we were still twiddling bits with magnetized needles, changing the data on an EPROM wasn’t as simple as shoving it in a programmer. These memory chips were erased with UV light shining through a quartz window onto a silicon die. At the time, there were neat little blacklights in a box sold to erase these chips. There’s little need for these chip erasers now, so how do you erase and program a chip these days? Build your own chip eraser using components that would have blown minds back in the 70s.

[Charles] got his hands on an old 2764 EPROM for a project, but this chip had a problem — there was still data on it. Fortunately, old electronics are highly resistant to abuse, so he pulled out the obvious equipment to erase this chip, a 300 watt tanning lamp. This almost burnt down the house, and after a second, six-hour round of erasing under the lamp, there were still unerased bits.

Our ability to generate UV light has improved dramatically over the last fifty years, and [Charles] remembered he had an assortment of LEDs, including a few tiny 5mW UV LEDs. Can five milliwatts do what three hundred watts couldn’t? Yes; the LED emits at the right frequency to flip a bit, and erasing an EPROM is a function of intensity and time. All you really need to do is shine an LED onto the chip for a few hours.
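To put rough numbers on that intensity-and-time trade-off: EPROM datasheets typically quote a minimum erasure dose on the order of 15 W·s/cm² of short-wave UV, and a longer-wavelength LED will be less efficient per milliwatt, but the arithmetic is the same. Assuming, purely for illustration, that the LED delivers an effective 1 mW/cm² at the die:

15 W·s/cm² ÷ 0.001 W/cm² = 15,000 s, or a little over four hours

which is right in line with the few hours [Charles] needed.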

With this vintage chip erased, [Charles] slapped together an EPROM programmer — with a programming voltage of 21V — out of an ATMega and a bench power supply. It eventually worked, allowing [Charles]’ project, a vintage liquid crystal display, to have the right data using vintage-correct parts.

34C3: Reverse Engineering FPGAs

Wed, 01/17/2018 - 14:30

We once knew a guy who used to tell us that the first ten times he flew in an airplane, he jumped out of it. It was his eleventh flight before he walked off the plane. [Mathias Lasser] has a similar story. Despite being one of the pair who decoded the iCE40 bitstream format a few years ago, he admits in his 34C3 talk that he never learned how to use FPGAs. His talk covers how he reverse engineered the iCE40 and the Xilinx 7 series devices. You can see the video, below.

If you are used to FPGAs in terms of Verilog and VHDL, [Mathias] will show you a whole new view of rows, columns, and tiles. Even if you don’t ever plan to work at that level, sometimes understanding hardware at the low level will inspire some insights that are harder to get at the abstraction level.

In theory, the reverse engineering ought not be that hard. The device has some amount of resources and the bitstream identifies how those resources connect together and maybe program some lookup tables. In practice, though, it is difficult because there is virtually no documentation, including details about the resources you need to know at that level.

For example, in the video, you can see Lattice’s diagram for a logic cell. There are several options to do things like bypass the flip flop, set the look-up table, and so on. There’s any number of options available to set that configuration and that doesn’t even address how to connect the inputs and outputs to the routing resources.

Of course, you know he managed the iCE40 decoding since he and [Clifford Wolf] did the work behind the open source Lattice toolchain. We even used that toolchain in several of our FPGA tutorials.

Confessions Of A Reformed Frequency Standard Nut

Wed, 01/17/2018 - 13:01

Do you remember your first instrument, the first device you used to measure something? Perhaps it was a ruler at primary school, and you were taught to see distance in terms of centimetres or inches. Before too long you learned that these units are only useful for the roughest of jobs, and graduated to millimetres, or sixteenths of an inch. Eventually as you grew older you would have been introduced to the Vernier caliper and the micrometer screw gauge, and suddenly fractions of a millimetre, or thousandths of an inch became your currency.  There is a seduction to measurement, something that draws you in until it becomes an obsession.

Every field has its obsessives, and maybe there are bakers seeking the perfect cup of flour somewhere out there, but those in our community will probably focus on quantities like time and frequency. You will know them by their benches surrounded by frequency standards and atomic clocks, and their constant talk of parts per billion, and of calibration. I can speak with authority on this matter, for I used to be one of them in a small way; I am a reformed frequency standard nut.

That Annoying Final Counter Digit

Tuned circuits in a radio IF transformer. Chetvorno [CC0].

You might ask how such an obsession might develop. After all, who needs a frequency standard accurate to an extremely tiny fraction of a Hz on their bench? The answer is that, unless your job depends upon it, you don’t. If you are a radio amateur, you really only need a standard good enough to ensure that you are within the band you are licensed to transmit upon, and able to stay on the frequency you choose without drifting away. But of course such sensible considerations don’t matter. If you’ve bought a frequency counter, you have an instrument with nagging seventh and eighth digits that show you how fast that crystal oscillator you thought was pretty stable is drifting. And there you are, teetering on the edge of that slippery slope.

The first electronic radio frequency oscillators used tuned circuits, combinations of inductors and capacitors, to provide their frequency stability. A tuned circuit oscillator can be surprisingly stable once it has settled down, but it is still at the mercy of the thermal properties of the materials used in that tuned circuit. If the temperature goes up, the wire in the inductor expands, and its inductance changes. Older broadcast radios sometimes required constant manual retuning because of this, and very few radio transmitters rely on these circuits for their stability.

The answer to tuned circuit instability came in the form of piezoelectric quartz crystals. These will form a resonator with similar electrical properties to a tuned circuit, but with a much lower susceptibility to temperature-induced drift. They are stable enough that they have become the ubiquitous frequency standard behind most of today’s electronics: almost every microprocessor, microcontroller, or other synchronous circuit you will use is likely to derive its clock from a quartz crystal. Your 1957 FM radio might have needed a bit of tuning to stay on station, but its 2017 equivalent is rock-stable thanks to a crystal providing the reference for its tuning synthesiser.

A crystal oven installed in a Hewlett-Packard frequency counter. Yngvarr [CC BY-SA 3.0].

Crystals are good — good enough for most everyday frequency reference purposes — but they are not without their problems. They may be less susceptible than a tuned circuit to temperature-induced drift but they still exhibit some. And while they are factory-tuned to a particular frequency they do not in reality oscillate at exactly that frequency. Crystal oscillators seeking that extra bit of accuracy will therefore reduce drift by placing the crystal in a temperature-regulated oven, and will often provide some means of making a minor adjustment to the frequency of oscillation in the form of a small variable capacitor.

If you have a crystal oscillator in an oven, you’re doing pretty well. You’ve reduced drift as far as you can, and you’ve adjusted it to the frequency you want. But of course, you can’t truly satisfy the last part of that sentence, because you lack the ability to measure frequency accurately enough. Your trusty frequency counter isn’t as trusty once you remember that its internal reference is simply another quartz crystal, so in essence you are just comparing two crystals of equivalent stability. How can you trust your counter?

At this point, we’re done with frequency standards based on physical dimensions of materials, and have to move up a level into the realm of atomic physics. All elements exhibit resonant frequencies that are fundamentals of the energy levels in their atomic structure, and these represent the most stable reference frequencies available: those against which our standard definitions of time and frequency are measured. There are a variety of atomic standards at the disposal of metrologists with large budgets, but the ones we will most commonly encounter use either caesium, or rubidium atoms. The caesium standard forms the basis of the international definition of time and frequency, while rubidium standards are a more affordable and accessible form of atomic standard.

Raise Your Own Standard

My trusty Heathkit crystal calibrator.

One of the oldest and simplest ways to calibrate an oscillator to a standard frequency is to perform the task against that of a broadcast radio transmitter. You will hear an audible beat tone in the speaker of a receiver when the frequency of the oscillator or one of its harmonics is close enough to the station for their difference to be in the audible range, so it is a simple task to adjust the oscillator to the point at which the beat frequency stops. The lower frequency limit of human hearing allows a match to within a few tens of hertz, and a closer match can be achieved with the help of an oscilloscope.
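As a worked example of how close that gets you: if the 100 kHz oscillator is running 15 Hz high, its second harmonic beats against a 200 kHz carrier at 30 Hz, right at the edge of audibility:

2 × 100.015 kHz = 200.030 kHz
|200.030 kHz − 200.000 kHz| = 30 Hz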

A 100 kHz crystal calibration oscillator used to be a standard part of a radio amateur’s arsenal, and it could be matched to any suitable broadcast frequency standard worldwide. For a Brit like me back in the day it was convenient to use the caesium standard BBC Radio 4 long wave transmitter on 200 kHz to calibrate my 100 kHz oscillator, but sadly for me in 1988 when the ink was barely dry on my licence they reorganised long wave frequencies and moved it to 198 kHz.

When I was at the height of my quest for a pure frequency standard, the next most accessible source was to take a broadcast standard and use that as the reference source to discipline a crystal oscillator by means of a phase-locked loop. You could buy off-air frequency standard receivers as laboratory instruments, but as an impoverished student I opted to build my own.

Here in the UK, I had the choice of the aforementioned 198 kHz Radio 4 transmitter or the 60 kHz British MSF time signal, and I chose the former as I could cannibalise a long wave broadcast receiver for a suitable ready-wound ferrite rod antenna. This fed an FET front-end, which in turn fed a limiter and filter that provided a Schmitt trigger with what it needed to create a 198 kHz logic level square wave. Then with a combination of 74-series logic dividers and the ever-versatile 4046 PLL chip I was able to lock a 1 MHz crystal oscillator to it, and be happy that I’d created the ultimate in frequency standards. Except I hadn’t really. Despite learning a lot about PLLs and choosing a long time constant for my loop filter, I must have had an unacceptably high phase noise. Not the only time my youthful belief in my own work exceeded the reality.
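Whatever the exact chain, the arithmetic constrains it: the highest comparison frequency that both 198 kHz and 1 MHz can be divided down to is 2 kHz, so for example a divide-by-99 on the off-air signal and a divide-by-500 on the crystal oscillator would hand the 4046’s phase comparator two signals it can lock together.

198 kHz ÷ 99 = 2 kHz
1 MHz ÷ 500 = 2 kHz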

A handy GPS module from Adafruit. Oomlout [CC BY-SA 2.0]

Off-air standards are still an accessible option for the would-be frequency aficionado, but it is rather improbable that you would build one in 2017, because a far better option now exists. The network of GPS and similar navigation satellites is an accessible source of high-accuracy timing for everybody, with a multitude of affordable GPS hardware for all purposes. Thus it is simpler by far to opt for a GPS-disciplined crystal oscillator, and indeed we have seen them from time to time being used in the projects featured here.

GPS is very good, and the only way to get fancier is to go atomic. The once-impossible dream of having your own atomic standard is now surprisingly affordable, as the proliferation of mobile phone networks led to a large number of rubidium standards being deployed in their towers. As earlier generations of cell towers have been decommissioned, these components have found their way onto the second-hand market, and can be had from the usual sources without the requirement to mortgage your children.

The modules you can easily buy contain a crystal oscillator disciplined by reference to the rubidium standard itself. The standard monitors the intensity of monochromatic light from a rubidium lamp through a chamber of rubidium gas exposed to a radio frequency matching the resonant frequency of the transition between the ground-state hyperfine levels of the rubidium atom, and locks the radio frequency to the resonance observed as a dip in that intensity.

Seekers of the ultimate in standard frequency accuracy now have several options when it comes to calibration sources. Making an off-air standard is more trouble than a GPS-based one, and the more adventurous among you can find a rubidium-disciplined source. Or perhaps you already have. There’s no shame in excess precision, but we’re curious: do you really need such an accurate source of timing information? Or are you chasing that last digit just because it’s there?

Friday Hack Chat: Fashion! (Turn To The Left)

Wed, 01/17/2018 - 12:00

An underappreciated facet of the maker movement is wearable technology. For this week’s Hack Chat, we’re going to be talking all about wearable and fashion tech. This includes motors, lighting, biofeedback, and one significantly overlooked aspect of wearables, washability.

For this week’s Hack Chat, we’re sitting down with Kathryn Blair and Shannon Hoover to talk about the workability and washability of fashion tech. Over the last decade or so, wearable tech has become ever more popular, and these advances in the science aren’t just limited to amazing outfits lined with hundreds of Neopixels. Now, we’re dealing with biofeedback, clothing that regulates your body temperature and monitors your vital signs, and necklaces that glow when the sun goes down.

Kathryn and Shannon are part of the team behind MakeFashion, a Calgary-based outfit that has produced over 60 wearable tech garments shown at 40 international events. MakeFashion is introducing designers to wearables through a series of hands-on workshops built around developing wearable electronics and electronic wearables.

One of the key technologies behind MakeFashion is the StitchKit, a development kit that’s now available on Kickstarter designed to add electronics to wearables. This means everything from uglier Christmas sweaters to interactive clothing.

During this Hack Chat, we’re going to be discussing the design and engineering behind fashion technology, including biofeedback, how motors and lighting work with a human body, and how to design for washability. If you have a question for this Hack Chat, add it to the discussion part of the event page.

Our Hack Chats are live community events on the Hackaday.io Hack Chat group messaging. This Hack Chat is going down Friday, January 19th at noon, Pacific time. Time Zones got you down? Here’s a handy countdown timer!

Click that speech bubble to the left, and you’ll be taken directly to the Hack Chat group on Hackaday.io.

You don’t have to wait until Friday; join whenever you want and you can see what the community is talking about.

Joykill: Previously Undisclosed Vulnerability Endangers User Data

Wed, 01/17/2018 - 11:00

Researchers have recently announced a vulnerability in PC hardware enabling attackers to wipe the disk of a victim’s computer. This vulnerability, going by the name Joykill, stems from the lack of proper validation when enabling manufacturing system tests.

Joykill affects the IBM PCjr and allows local and remote attackers to destroy the contents of the floppy diskette using minimal interaction. The attack is performed by plugging two joysticks into the PCjr, booting the computer, entering the PCjr’s diagnostic mode, and immediately pressing button ‘B’ on joystick one, and buttons ‘A’ and ‘B’ on joystick two. This will enable the manufacturing system test mode, where all internal tests are performed without user interaction. The first of these tests is the diskette test, which destroys all user data on any inserted diskette. There is no visual indication of what is happening, and the data is destroyed when the test is run.

A local exploit destroying user data is scary enough, but after much work, the researchers behind Joykill have also managed to craft a remote exploit based on Joykill. To accomplish this, the researchers built two IBM PCjr joysticks with 50-meter long cables.

Researchers believe this exploit is due to undocumented code in the PCjr’s ROM. This code contains diagnostics code for manufacturing burn-in, system test code, and service test code. This code is not meant to be run by the end user, but is still exploitable by an attacker. Researchers have disassembled this code and made their work available to anyone.

As of the time of this writing, we were not able to contact anyone at the IBM PCjr Information Center for comment. We did, however, receive an exciting offer for a Caribbean cruise.

Custom Alexa Skill in a Few Minutes Using Glitch

Wed, 01/17/2018 - 10:01

As hackers, we like to think of ourselves as a logical bunch. But the truth is, we are as subject to fads as the general public. There was a time when the cool projects swapped green LEDs out for blue ones or added WiFi connectivity where nobody else had it. Now all the rage is to connect your project to a personal assistant. The problem is, this requires software. Software that lives on a publicly accessible network somewhere, and who wants to deal with that when you’re just playing with custom Alexa skills for the first time?

If you have a computer that faces the Internet, that’s fine. If you don’t, you can borrow one of Amazon’s, but then you need to understand their infrastructure which is a job all by itself. However, there is a very simple way to jump start an Alexa skill. I got one up and running in virtually no time using a website called Glitch. Glitch is a little bit of everything. It is a web hosting service, a programming IDE for Node.js, a code repository, and a few other things. The site is from the company that brought us Trello and helped to start Stack Overflow.

Glitch isn’t about making Alexa skills. It is about creating web applications and services easily. However, that’s about 90% of the work involved in making an Alexa skill. You’ll need an account on Glitch and an Amazon developer’s account. Both are free, at least for what we want to accomplish. Glitch has some templates for Google Home, as well. I have both but decided to focus on Alexa, for no particular reason.

Admittedly, my example skill isn’t performing a particularly difficult task, but it is a good starter and if you want to develop your own, it will give you a head start. Although I used a particular tool, there’s no reason you could not move from that tool to your own infrastructure later, if you had the need to do so. One nice thing about Glitch is that you can share code live and people can remix it. Think of a GitHub fork, but where you can try running my copy and your copy is instantly live, too. Turns out for the Alexa skills, having it live isn’t as useful as you’d like because you still have to tell Amazon about it. But it does mean there are example Alexa skills (including mine) that you can borrow to get yours started, just like I borrowed one to start mine.

Do You Need It?

The first question you might ask yourself is do you even need an Alexa skill? I recently got Alexa to control my 3D printers by using IFTTT with no software development at all required. However, if you really want to claim you work with a virtual assistant, you are going to have to write some code somewhere.

Of course, there’s the bigger philosophical question: do you need to do any of this? I’ll admit, I find it useful to have my 3D printers on voice control because I might be making adjustments with both hands and I don’t have to fumble with buttons to have the machine do a homing cycle, for example. I’m less sold that I need a virtual assistant to launch my drone. Then again, maybe that’s what you want to do and that’s up to you.

Getting Started

If you don’t already have one, you might as well go ahead and sign up for an Amazon developer’s account. Then head over to Glitch and register there, too. There are at least two templates for building an Alexa skill: a bare-bones one, and a more involved one that retrieves weather forecasts. If you are looking at the page, it might not make much sense. Remember, the web server is meant to talk to Alexa, not people. In the top right corner is an icon with a pair of fish. If you click there, you can view the source or you can remix it.

I decided to remix the weather forecast service since I thought it was a better example. Then I cut away all the weather-related code (except some of the documentation) and wrote a simple Javascript function:

function get_fact() {
  var factArray = [
    'Hackers read Hackaday every day',
    'You know you are 3D printing too much when you tell someone you are extruding mustard on your hot dog',
    'The best microcontroller is the one already programmed to do what you want',
    'We can all agree. All software has bugs. All software can be made simpler. Therefore, all programs can be reduced to one line of code that does not work.',
    'I hate talking to flying robots. They just drone on.',
    'If you call your morning cup of coffee installing Java, you need a hobby',
    'I have heard that in C functions should not call each other because they usually have arguments',
    'I refused to learn C plus plus. I saw see-out being shifted left hello world times and gave up',
    'If cavemen had thought of binary we could all count to 1023 on our fingers',
    'If you can\'t hack it, you don\'t own it.'
  ];
  var randomNumber = Math.floor(Math.random()*factArray.length);
  return factArray[randomNumber];
}

The only thing left to do is to hook the code up to the Web service framework.

Express

Glitch automatically sets up a library called Express in this project. It essentially is a simple Web server. Once you create the main app object, you can set up routes to have your code execute when someone calls a particular web service. It also includes an object that represents an Alexa service. I didn’t have to write the code to set this up, but here it is:

app = express(),
// Setup the alexa app and attach it to express before anything else.
alexaApp = new alexa.app("");

// POST calls to / in express will be handled by the app.request() function
alexaApp.express({
  expressApp: app,
  checkCert: true,
  // sets up a GET route when set to true. This is handy for testing in
  // development, but not recommended for production.
  debug: true
});

There are two methods I wanted to provide. One for when someone opens my skill (I called it Hacker Fact, by the way — it gives mildly humorous facts about hacking and related things). The other method will fire when someone says, “Alexa, tell Hacker Fact to give me a fact.” Or anything similar.

That last bit is known as an intent. Intents can have utterances (like “give me a fact”) and they can have slots. I didn’t use any slots (but the weather example did). Using slots you can have some part of the person’s command fed to you as an argument. For example, I could make it so you could say, “Alexa, tell Hacker Fact to give me a fact about Arduinos.” Then I could build the intent so that the next word after it hears “about” is a slot and parse it in my code.
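For reference, a slot-carrying intent might look something like the sketch below. This is a hypothetical example rather than anything in the Hacker Fact skill: the intent name, the Topic slot, and the TOPIC_TYPE slot type are all invented, the slot type would still need to be defined on the Amazon side, and the exact utterance syntax should be checked against the alexa-app documentation.

// Hypothetical sketch only: "HackerFactAbout", "Topic", and "TOPIC_TYPE" are made-up names.
alexaApp.intent("HackerFactAbout", {
  "slots": { "Topic": "TOPIC_TYPE" },
  "utterances": [
    "give me a fact about {Topic}",
    "tell me a fact about {Topic}"
  ]
}, function(request, response) {
  var topic = request.slot("Topic"); // whatever the user said after "about"
  console.log("Fact requested about", topic);
  response.say(get_fact());          // a real skill would pick a fact matching the topic
});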

You probably won’t need it, but if you are curious to learn more about Express, check out this video.

The Two Methods

Here are the two methods:

alexaApp.launch(function(request, response) {
  console.log("App launched");
  response.say('Hi from your friends at Hackaday. Ask me for a fact to learn something interesting or possibly funny.');
});

alexaApp.intent("HackerFact", {
  "slots": { },
  "utterances": [
    "Tell me a hacker fact",
    "Give me a hacker fact",
    "tell me a fact",
    "give me a fact",
    "go",
    "fact"
  ]
}, function(request, response) {
  console.log("In Fact intent");
  response.say(get_fact());
});

Note the “HackerFact” intent has an array of utterances that will be sent to Amazon. You can find the entire HackerFact code online. Don’t forget, you can view the source or remix with the two fish icon at the top right of the page.

All of the code is in the index.js file. There are a few other files, but for this task, you probably won’t make any changes to them other than perhaps changing some text in the package file or documentation.

On to Amazon

Oddly enough, the next part is probably harder. From the front page of the Amazon developer’s site, you’ll want to select Alexa skills and then press the “Add a New Skill” button. A lot of the entries you’ll see have to do with more complex skills. Also, I’m not going to publish my skill, but it will still show up in my account. If you do some paperwork, you can submit your skill for testing with selected users or even publish it outright.

Here’s a rundown of the fields you need to fill in on the Skill Information tab:

  • Skill type = Custom Interaction Model
  • Name = Whatever display name you like
  • Invocation Name = This is what people will ask Alexa to use (e.g., “Alexa, tell Hacker Fact…” would mean my invocation name was Hacker Fact)

I’ve had bad luck, by the way, changing the invocation name after it has been set, so think hard about that one.

The next page (Interaction Model) looks complicated, but it isn’t, thanks to the libraries provided by Glitch. Open your Glitch project. If you are looking at the source, click “show” in the upper part of the screen. The default project causes a data dump to appear on the main page (which, normally, no one sees) that includes the information you need for this page.

The Intent Schema box needs everything after “Schema:” and before “Utterances:” from your main page. That includes the braces. My Intent Schema looks like this:

{ "intents": [ { "intent": "HackerFact" }, { "intent": "AMAZON.CancelIntent" }, { "intent": "AMAZON.StopIntent" } ] }

The rest of the page has some lines after “Utterances:” and those lines are what the last box needs. The middle box can remain empty for this example.
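If you’re wondering what those generated lines look like, the Sample Utterances box expects one utterance per line, each prefixed with the intent name. Based on the utterances defined earlier, the dump should look roughly like this (the exact expansion depends on the library):

HackerFact Tell me a hacker fact
HackerFact Give me a hacker fact
HackerFact tell me a fact
HackerFact give me a fact
HackerFact go
HackerFact fact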

More Steps to Connect Glitch with Amazon

In the Configuration tab, you can select HTTPS and then enter the URL from Glitch. To find that URL, open the project name at the top left of the screen and you’ll see it along with a copy button. Mine, for example, is https://hacker-fact.glitch.me. Don’t allow account linking, and you can leave all the optional boxes alone.

On the next screen, you’ll want to pick “My development endpoint is a sub-domain of a domain that has a wildcard certificate from a certificate authority” because that’s how Glitch works. That’s all you need on that page.

At this point, your skill should show up on any Alexa associated with your account (including Alexa apps like Reverb on your phone). You can also do tests here in the console to see if things work.

Release the Hounds

This is enough to get your code working with your Alexa. I won’t go into releasing the skill for others to use, but that is an option if you want to keep going on your own. You can use Glitch for other things too, including the Raspberry Pi and a few other IoT-type platforms. If you look around, you’ll probably find something interesting.

Of course, you could set up your own server to do all the things that Glitch is doing for you — maybe even on a Raspberry Pi. You can also let Amazon host your code as a Lambda function (that is, a web service with no fixed web server; see the video, below). If you get a taste for writing skills, the Amazon documentation isn’t bad, but sometimes there’s value to just jumping in and getting something working.
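As a rough idea of what the Lambda route looks like: the alexa-app library can expose the same skill as a Lambda handler instead of an Express route. The snippet below is a minimal sketch, not the code from this project, and it assumes the library’s lambda() helper.

// Minimal sketch (not from the Hacker Fact project): exposing an
// alexa-app skill as an AWS Lambda handler instead of an Express route.
// Assumes the alexa-app library's lambda() helper.
var alexa = require("alexa-app");
var alexaApp = new alexa.app("");

alexaApp.launch(function(request, response) {
  response.say("Hi from your friends at Hackaday.");
});

// Amazon calls this handler directly; no Express or web server needed.
exports.handler = alexaApp.lambda();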

Four Pi Zeros, Four Cameras, One Really Neat 3D Scanner

Wed, 01/17/2018 - 07:00

Sometimes when you walk into a hackerspace you will see somebody’s project on the table that stands so far above the norm of a run-of-the-mill open night on a damp winter’s evening that you have to know more. And if you are a Hackaday scribe, you ask the person behind it whether they have something online about it to share with the readership.

[Jolar] was working on his 3D scanner project on just such an evening in Oxford Hackspace. It’s a neatly self-contained unit in the form of a triangular frame made of aluminium extrusions, into which are placed a stack of Raspberry Pi Zeros with attached cameras and a very small projector, which needed an extra lens from a pair of reading glasses to help it focus at such a short distance.

The cameras are arranged to have differing views of the object to be scanned, and the projector casts an array of randomly generated dots onto it to aid triangulation from the images. At the press of a button, the four images are taken, uploaded to a cloud drive in this case, and then picked up by his laptop for processing.

A Multi-view Stereo (MVS) algorithm does the processing work and creates a 3D model. The processing itself is handled by VisualSFM, and the resulting files can then be viewed in MeshLab or imported into a CAD package. Seen in action, the whole process is quick and seamless, and could easily be something you’d see on a commercial product. There is more to come from this project, so it is definitely one to watch.

Four Pi boards may seem like a lot, but it is nothing compared to this scanner with 39 of them.

 

Intel Needs To Go Sit In A Corner And Think About Its Meltdown Fail

Wed, 01/17/2018 - 04:00

Big corporations shuffle people around all the time. More often than not, these reorganization efforts end up as a game of musical chairs in which all the executives come away with more pay, everybody else’s work is disrupted, and nothing substantial actually changes. Intel just moved some high-level people around to form a dedicated security group. Let’s all hope it will make a difference.

When news of Meltdown and Spectre broke, Intel’s public relations department applied maximum power to their damage control press release generators. The initial message was one of defiance, downplaying the impact and implying people were overreacting. This did not go over well. Since then, we’ve started seeing a trickle of information from engineering, and even direct microcode updates for people who dare to live on the bleeding edge.

All the technical work to put out the immediate fire is great, but for the sake of Intel’s future they need to figure out how to avoid future fires. The leadership needs to change the company culture away from an attitude where speed is valued over all else. Will the new security group have the necessary impact? We won’t know for quite some time. For now, it is encouraging to see work underway. Fundamental problems in corporate culture require a methodical fix and not a hack.

Wrecked Civic Rides Again as Cozy Camp Trailer

Wed, 01/17/2018 - 01:00

It may not be the typical fare that we like to feature, but you can’t say this one isn’t a hack. It’s a camp trailer fashioned from the back half of a wrecked Honda Civic, and it’s a pretty unique project.

We don’t know about other parts of the world, but a common “rural American engineering” project is to turn the bed and rear axle of an old pickup truck into a trailer. [monickingbird]’s hacked Civic is similar to these builds, but with much more refinement. Taking advantage of the intact and already appointed passenger compartment of a 1997 Civic that had a really bad day, [monickingbird] started by lopping off as much of the front end as possible. Front fenders, the engine, transmission, and the remains of the front suspension and axle all fell victim to grinder, drill, and air chisel. Once everything in front of the firewall was amputated, he tackled the problem of making the trailer safely towable. Unlike the aforementioned pickup trailers, the Civic lacks a separate frame, so [monickingbird] had to devise a way to persuade the original unibody frame members to accept his custom trailer tongue assembly. Once the trailer was roadworthy, the aesthetics were addressed: the original interior was replaced with a sleeping area, electrics and sound were installed, and it all got a nice paint job. Other drivers may think the towing vehicle is being seriously tailgated, but it seems like a comfy and classy way to camp.

Now that the trailer is on the road, what to do with all those spare Civic parts? Sure, there’s eBay, but how about a nice PC case featuring a dashboard gauge cluster?

Overclock Your Raspberry Pi The Right Way

Tue, 01/16/2018 - 22:00

The Raspberry Pi came to us as an educational platform. A credit-card-sized computer capable of running Linux from a micro SD card, the Raspberry Pi has proven useful for far more than just education. It has made its way into every nook and cranny of the hacker world. There are some cases, however, where it might be a bit slow or seem a bit underpowered. One way of speeding the Raspi up is to overclock it.

[Dmitry] has written up an excellent overclocking guide based upon Eltech’s write-up on the subject. He takes it a bit further and applies the algorithm to both the Raspi 2 and Raspi 3. You’ll need a beefier power supply, some heat sinks, and fans – all stuff you probably have lying around on your workbench. Now there’s no excuse stopping you from ratcheting up the MHz and pushing your Pi to the limit!
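For reference, Raspberry Pi overclocking is done through entries in /boot/config.txt. The values below are only an illustration of the kind of settings involved, not the figures from [Dmitry]’s guide; follow the guide for numbers that are known to be stable on your board.

# Illustrative /boot/config.txt overclock entries for a Raspberry Pi 3.
# Example values only -- not the settings recommended in [Dmitry]'s guide.
arm_freq=1300        # CPU clock in MHz (the Pi 3 ships at 1200)
core_freq=500        # GPU core clock in MHz
sdram_freq=500       # SDRAM clock in MHz
over_voltage=4       # raises the core voltage to keep the higher clocks stable
temp_limit=80        # throttle back if the SoC gets too hot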

We’ve seen several guides to overclocking the Raspi here on Hackaday, including the current record holder. Be sure to check out [Dmitry]’s IO page for the overclocking details, and let us know of any new uses you’ve found by overclocking your Raspi in the comments below.
